Chapter 1. Cluster lifecycle with multicluster engine operator overview


The multicluster engine operator is the cluster lifecycle operator that provides cluster management capabilities for Red Hat OpenShift Container Platform and Red Hat Advanced Cluster Management hub clusters. From the hub cluster, you can create and manage clusters, as well as destroy any clusters that you created. You can also hibernate, resume, and detach clusters. If you installed Red Hat Advanced Cluster Management, you do not need to install the multicluster engine operator because it is automatically installed.


1.1. Console overview

Console plug-ins for the multicluster engine operator are available with the OpenShift Container Platform web console and can be integrated with it. To use this feature, the console plug-ins must remain enabled. The multicluster engine operator displays certain console features within the Infrastructure and Credentials navigation items. If you install Red Hat Advanced Cluster Management, you see more console capability.

Note: With the plug-ins enabled, you can access Red Hat Advanced Cluster Management within the OpenShift Container Platform console from the cluster switcher by selecting All Clusters from the drop-down menu.

  1. To disable the plug-in, be sure that you are in the Administrator perspective in the OpenShift Container Platform console.
  2. Find Administration in the navigation and click Cluster Settings, then click the Configuration tab.
  3. From the list of Configuration resources, click the Console resource with the operator.openshift.io API group, which contains cluster-wide configuration for the web console.
  4. Click the Console plug-ins tab. The mce plug-in is listed. Note: If Red Hat Advanced Cluster Management is installed, it is also listed as acm.
  5. Modify the plug-in status from the table. In a few moments, you are prompted to refresh the console. A command-line alternative follows this procedure.
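
If you prefer the command line, you can review and change the enabled plug-ins directly in the Console operator configuration. The following commands are a minimal sketch; verify the plug-in name (mce, or acm if Red Hat Advanced Cluster Management is installed) before you patch the resource:

# List the plug-ins that are currently enabled on the web console.
oc get consoles.operator.openshift.io cluster -o jsonpath='{.spec.plugins}'

# Enable the mce plug-in by appending it to the spec.plugins list.
oc patch consoles.operator.openshift.io cluster --type=json \
  -p '[{"op":"add","path":"/spec/plugins/-","value":"mce"}]'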

1.2. multicluster engine operator role-based access control

RBAC is validated at the console level and at the API level. Actions in the console can be enabled or disabled based on user access role permissions. View the following sections for more information on RBAC for specific lifecycles in the product:

1.2.1. Overview of roles

Some product resources are cluster-wide and some are namespace-scoped. You must apply cluster role bindings and namespace role bindings to your users for consistent access controls. View the following table of supported role definitions:

1.2.1.1. Table of role definition


cluster-admin

This is an OpenShift Container Platform default role. A user with cluster binding to the cluster-admin role is an OpenShift Container Platform super user, who has all access.

open-cluster-management:cluster-manager-admin

A user with cluster binding to the open-cluster-management:cluster-manager-admin role is a super user, who has all access. This role allows the user to create a ManagedCluster resource.

open-cluster-management:admin:<managed_cluster_name>

A user with cluster binding to the open-cluster-management:admin:<managed_cluster_name> role has administrator access to the ManagedCluster resource named <managed_cluster_name>. This role is automatically created when a user has a managed cluster.

open-cluster-management:view:<managed_cluster_name>

A user with cluster binding to the open-cluster-management:view:<managed_cluster_name> role has view access to the ManagedCluster resource named <managed_cluster_name>.

open-cluster-management:managedclusterset:admin:<managed_clusterset_name>

A user with cluster binding to the open-cluster-management:managedclusterset:admin:<managed_clusterset_name> role has administrator access to the ManagedClusterSet resource named <managed_clusterset_name>. The user also has administrator access to managedcluster.cluster.open-cluster-management.io, clusterclaim.hive.openshift.io, clusterdeployment.hive.openshift.io, and clusterpool.hive.openshift.io resources that have the managed cluster set label cluster.open-cluster-management.io/clusterset=<managed_clusterset_name>. A role binding is automatically generated when you are using a cluster set. See Creating a ManagedClusterSet to learn how to manage the resource.

open-cluster-management:managedclusterset:view:<managed_clusterset_name>

A user with cluster binding to the open-cluster-management:managedclusterset:view:<managed_clusterset_name> role has view access to the ManagedClusterSet resource named <managed_clusterset_name>. The user also has view access to managedcluster.cluster.open-cluster-management.io, clusterclaim.hive.openshift.io, clusterdeployment.hive.openshift.io, and clusterpool.hive.openshift.io resources that have the managed cluster set label cluster.open-cluster-management.io/clusterset=<managed_clusterset_name>. For more details on how to manage managed cluster set resources, see Creating a ManagedClusterSet.

admin, edit, view

Admin, edit, and view are OpenShift Container Platform default roles. A user with a namespace-scoped binding to these roles has access to open-cluster-management resources in a specific namespace, while cluster-wide binding to the same roles gives access to all of the open-cluster-management resources cluster-wide.

Important:

  • Any user can create projects from OpenShift Container Platform, which gives the user administrator role permissions for that namespace.
  • If a user does not have role access to a cluster, the cluster name is not visible. The cluster name is displayed with the following symbol: -.


1.2.2. Cluster lifecycle RBAC

View the following cluster lifecycle RBAC operations:

  • Create and administer cluster role bindings for all managed clusters. For example, create a cluster role binding to the cluster role open-cluster-management:cluster-manager-admin by entering the following command:

    oc create clusterrolebinding <role-binding-name> --clusterrole=open-cluster-management:cluster-manager-admin --user=<username>

    A user with this role is a super user with access to all resources and actions. You can use this role to create cluster-scoped managedcluster resources, the namespace for the resources that manage the managed cluster, and the resources in that namespace. You might need to add the username of the ID that requires the role association to avoid permission errors.

  • Create a cluster role binding to administer a managed cluster named cluster-name by running the following command:

    oc create clusterrolebinding <role-binding-name> --clusterrole=open-cluster-management:admin:<cluster-name> --user=<username>

    This role has read and write access to the cluster-scoped managedcluster resource. This is needed because the managedcluster is a cluster-scoped resource and not a namespace-scoped resource.

    • Create a namespace role binding to the cluster role admin by entering the following command:

      oc create rolebinding <role-binding-name> -n <cluster-name> --clusterrole=admin --user=<username>

      This role has read and write access to the resources in the namespace of the managed cluster.

  • Create a cluster role binding for the open-cluster-management:view:<cluster-name> cluster role to view a managed cluster named cluster-name by entering the following command:

    oc create clusterrolebinding <role-binding-name> --clusterrole=open-cluster-management:view:<cluster-name> --user=<username>

    This role has read access to the cluster-scoped managedcluster resource. This is needed because the managedcluster is a cluster-scoped resource.

  • Create a namespace role binding to the cluster role view by entering the following command:

    oc create rolebinding <role-binding-name> -n <cluster-name> --clusterrole=view --user=<username>

    This role has read-only access to the resources in the namespace of the managed cluster.

  • View a list of the managed clusters that you can access by entering the following command:

    oc get managedclusters.clusterview.open-cluster-management.io

    This command is used by administrators and users without cluster administrator privileges.

  • View a list of the managed cluster sets that you can access by entering the following command:

    oc get managedclustersets.clusterview.open-cluster-management.io

    This command is used by administrators and users without cluster administrator privileges.

1.2.2.1. Cluster pools RBAC

View the following cluster pool RBAC operations:

  • As a cluster administrator, use a cluster pool to provision clusters by creating a managed cluster set and granting administrator permission to roles by adding the role to the group. View the following examples:

    • Grant admin permission to the server-foundation-clusterset managed cluster set with the following command:

      oc adm policy add-cluster-role-to-group open-cluster-management:clusterset-admin:server-foundation-clusterset server-foundation-team-admin
    • Grant view permission to the server-foundation-clusterset managed cluster set with the following command:

      oc adm policy add-cluster-role-to-group open-cluster-management:clusterset-view:server-foundation-clusterset server-foundation-team-user
  • Create a namespace for the cluster pool, server-foundation-clusterpool. View the following examples to grant role permissions:

    • Grant admin permission to server-foundation-clusterpool for the server-foundation-team-admin by running the following commands:

      oc adm new-project server-foundation-clusterpool
      
      oc adm policy add-role-to-group admin server-foundation-team-admin --namespace  server-foundation-clusterpool
  • As a team administrator, create a cluster pool named ocp46-aws-clusterpool with the cluster set label cluster.open-cluster-management.io/clusterset=server-foundation-clusterset in the cluster pool namespace (a hedged ClusterPool example follows this list):

    • The server-foundation-webhook checks if the cluster pool has the cluster set label, and if the user has permission to create cluster pools in the cluster set.
    • The server-foundation-controller grants view permission to the server-foundation-clusterpool namespace for server-foundation-team-user.
  • When a cluster pool is created, the cluster pool creates a clusterdeployment. Continue reading for more details:

    • The server-foundation-controller grants admin permission to the clusterdeployment namespace for server-foundation-team-admin.
    • The server-foundation-controller grants view permission to the clusterdeployment namespace for server-foundation-team-user.

      Note: As a team-admin and team-user, you have admin permission to the clusterpool, clusterdeployment, and clusterclaim.
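
The following ClusterPool sketch shows how the cluster set label from the previous steps is applied. The base domain, ClusterImageSet name, credential secret, and pull secret are assumptions for illustration; replace them with values from your environment:

apiVersion: hive.openshift.io/v1
kind: ClusterPool
metadata:
  name: ocp46-aws-clusterpool
  namespace: server-foundation-clusterpool
  labels:
    cluster.open-cluster-management.io/clusterset: server-foundation-clusterset
spec:
  size: 1
  baseDomain: example.com              # assumed base domain
  imageSetRef:
    name: img4.6.17-x86-64-appsub      # assumed ClusterImageSet name
  platform:
    aws:
      credentialsSecretRef:
        name: aws-creds                # assumed credential secret in the same namespace
      region: us-east-1
  pullSecretRef:
    name: pull-secret                  # assumed pull secret in the same namespace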

1.2.2.2. Console and API RBAC table for cluster lifecycle

View the following console and API RBAC tables for cluster lifecycle:

Table 1.1. Console RBAC table for cluster lifecycle
| Resource | Admin | Edit | View |
| --- | --- | --- | --- |
| Clusters | read, update, delete | - | read |
| Cluster sets | get, update, bind, join | - | get |
| Managed clusters | read, update, delete | - | get |
| Provider connections | create, read, update, and delete | - | read |
Table 1.2. API RBAC table for cluster lifecycle
| API | Admin | Edit | View |
| --- | --- | --- | --- |
| managedclusters.cluster.open-cluster-management.io (you can use mcl (singular) or mcls (plural) in commands for this API) | create, read, update, delete | read, update | read |
| managedclusters.view.open-cluster-management.io (you can use mcv (singular) or mcvs (plural) in commands for this API) | read | read | read |
| managedclusters.register.open-cluster-management.io/accept | update | update |  |
| managedclusterset.cluster.open-cluster-management.io (you can use mclset (singular) or mclsets (plural) in commands for this API) | create, read, update, delete | read, update | read |
| managedclustersets.view.open-cluster-management.io | read | read | read |
| managedclustersetbinding.cluster.open-cluster-management.io (you can use mclsetbinding (singular) or mclsetbindings (plural) in commands for this API) | create, read, update, delete | read, update | read |
| klusterletaddonconfigs.agent.open-cluster-management.io | create, read, update, delete | read, update | read |
| managedclusteractions.action.open-cluster-management.io | create, read, update, delete | read, update | read |
| managedclusterviews.view.open-cluster-management.io | create, read, update, delete | read, update | read |
| managedclusterinfos.internal.open-cluster-management.io | create, read, update, delete | read, update | read |
| manifestworks.work.open-cluster-management.io | create, read, update, delete | read, update | read |
| submarinerconfigs.submarineraddon.open-cluster-management.io | create, read, update, delete | read, update | read |
| placements.cluster.open-cluster-management.io | create, read, update, delete | read, update | read |

1.2.2.3. Credentials role-based access control

Access to credentials is controlled by Kubernetes. Credentials are stored and secured as Kubernetes secrets. The following permissions apply to accessing secrets in Red Hat Advanced Cluster Management for Kubernetes (a verification sketch follows this list):

  • Users with access to create secrets in a namespace can create credentials.
  • Users with access to read secrets in a namespace can also view credentials.
  • Users with the Kubernetes cluster roles of admin and edit can create and edit secrets.
  • Users with the Kubernetes cluster role of view cannot view secrets because reading the contents of secrets enables access to service account credentials.
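
To confirm how these rules apply to a specific user, you can test the access with impersonation. This is a minimal sketch; <username> and <namespace> are placeholders:

# Check whether the user can create credentials (secrets) in the namespace.
oc auth can-i create secrets -n <namespace> --as=<username>

# Check whether the user can read existing credentials in the namespace.
oc auth can-i get secrets -n <namespace> --as=<username>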

1.3. Network configuration

Configure your network settings to allow the connections.

Important: The trusted CA bundle is available in the multicluster engine operator namespace, but that enhancement requires changes to your network. The trusted CA bundle ConfigMap uses the default name of trusted-ca-bundle. You can change this name by providing it to the operator in an environment variable named TRUSTED_CA_BUNDLE. See Configuring the cluster-wide proxy in the Networking section of Red Hat OpenShift Container Platform for more information.
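
For example, you can verify that the ConfigMap exists and, if you need a different name, pass the TRUSTED_CA_BUNDLE environment variable to the operator. The following commands are a sketch that assumes the operator runs in the multicluster-engine namespace and that the Operator Lifecycle Manager Subscription is named multicluster-engine:

# Verify the default trusted CA bundle ConfigMap.
oc get configmap trusted-ca-bundle -n multicluster-engine

# Provide a custom ConfigMap name to the operator through the Subscription environment configuration.
oc patch subscription multicluster-engine -n multicluster-engine --type=merge \
  -p '{"spec":{"config":{"env":[{"name":"TRUSTED_CA_BUNDLE","value":"<custom_ca_bundle_name>"}]}}}'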

Note: Registration Agent and Work Agent on the managed cluster do not support proxy settings because they communicate with apiserver on the hub cluster by establishing an mTLS connection, which cannot pass through the proxy.

For the multicluster engine operator cluster networking requirements, see the following table:

| Direction | Protocol | Connection | Port (if specified) |
| --- | --- | --- | --- |
| Outbound |  | Kubernetes API server of the provisioned managed cluster | 6443 |
| Outbound from the OpenShift Container Platform managed cluster to the hub cluster | TCP | Communication between the Ironic Python Agent and the bare metal operator on the hub cluster | 6180, 6183, 6385, and 5050 |
| Outbound from the hub cluster to the Ironic Python Agent on the managed cluster | TCP | Communication between the bare metal node where the Ironic Python Agent is running and the Ironic conductor service | 9999 |
| Outbound and inbound |  | The WorkManager service route on the managed cluster | 443 |
| Inbound |  | The Kubernetes API server of the multicluster engine for Kubernetes operator cluster from the managed cluster | 6443 |

Note: The managed cluster must be able to reach the hub cluster control plane node IP addresses.

1.4. Release notes for Cluster lifecycle with multicluster engine operator

Learn about new features and enhancements, support, deprecations, removals, and Errata bug fixes.

Important: OpenShift Container Platform release notes are not documented in this product documentation. For your OpenShift Container Platform cluster, see OpenShift Container Platform release notes.

Deprecated: multicluster engine operator 2.2 and earlier versions are no longer supported. The documentation might remain available, but without any Errata or other updates.

Best practice: Upgrade to the most recent version.

  • The documentation references the earliest supported OpenShift Container Platform version, unless a specific component or function is introduced and tested only on a more recent version of OpenShift Container Platform.
  • For full support information, see the multicluster engine operator Support matrix. For lifecycle information, see Red Hat OpenShift Container Platform Life Cycle policy.
  • If you experience issues with one of the currently supported releases, or the product documentation, go to Red Hat Support where you can troubleshoot, view Knowledgebase articles, connect with the Support Team, or open a case. You must log in with your credentials.
  • You can also learn more about the Customer Portal documentation at Red Hat Customer Portal FAQ.

1.4.1. What’s new for Cluster lifecycle with multicluster engine operator

Learn about new features for creating, importing, managing, and destroying Kubernetes clusters across various infrastructure cloud providers, private clouds, and on-premises data centers.

For full support information, see the multicluster engine operator Support matrix. For lifecycle information, see Red Hat OpenShift Container Platform Life Cycle policy.

Important: Cluster management now supports all providers that are certified through the Cloud Native Computing Foundation (CNCF) Kubernetes Conformance Program. Choose a vendor that is recognized by CNCF for your hybrid cloud multicluster management.

See the following information about using CNCF providers:

1.4.1.1. New features and enhancements for components

Learn more about new features for specific components.

Note: Some features and components are identified and released as Technology Preview.

Important: The hosted control planes documentation is now located in the OpenShift Container Platform documentation. See the Hosted control planes overview in the OpenShift Container Platform documentation.

If you are using multicluster engine operator 2.6 and earlier, the hosted control planes documentation is located in the Red Hat Advanced Cluster Management product documentation. See Red Hat Advanced Cluster Management Hosted control planes.

1.4.1.2. Cluster management

Learn about new features and enhancements for Cluster lifecycle with multicluster engine operator.

1.4.2. Errata updates for Cluster lifecycle with multicluster engine operator

For multicluster engine operator, the Errata updates are automatically applied when released.

If no release notes are listed, the product does not have an Errata release at this time.

Important: For reference, Jira links and Jira numbers might be added to the content and used internally. Links that require access might not be available for the user.

1.4.3. Known issues and limitations for Cluster lifecycle with multicluster engine operator

Review the known issues and limitations for Cluster lifecycle with multicluster engine operator for this release, or known issues that continued from the previous release.

Cluster management known issues and limitations are part of the Cluster lifecycle with multicluster engine operator documentation. Known issues for multicluster engine operator integrated with Red Hat Advanced Cluster Management are documented in the Release notes for Red Hat Advanced Cluster Management.

Important: OpenShift Container Platform release notes are not documented in this product documentation. For your OpenShift Container Platform cluster, see OpenShift Container Platform release notes.

1.4.3.1. Installation

Learn about known issues and limitations during multicluster engine operator installation.

1.4.3.1.1. Status stuck when installing on OpenShift Service on AWS with hosted control plane cluster

Installation status might get stuck in the Installing state when you install multicluster engine operator on an OpenShift Service on AWS with hosted control planes cluster. The local-cluster might also remain in the Unknown state.

When you check the klusterlet-agent pod log in the open-cluster-management-agent namespace on your hub cluster, you see an error that resembles the following:

E0809 18:45:29.450874       1 reflector.go:147] k8s.io/client-go@v0.29.4/tools/cache/reflector.go:229: Failed to watch *v1.CertificateSigningRequest: failed to list *v1.CertificateSigningRequest: Get "https://api.xxx.openshiftapps.com:443/apis/certificates.k8s.io/v1/certificatesigningrequests?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate signed by unknown authority

To resolve the problem, configure the hub cluster API server verification strategy. Complete the following steps:

  1. Create a KlusterletConfig resource with name global if it does not exist.
  2. Set the spec.hubKubeAPIServerConfig.serverVerificationStrategy to UseSystemTruststore. See the following example:

    apiVersion: config.open-cluster-management.io/v1alpha1
    kind: KlusterletConfig
    metadata:
      name: global
    spec:
      hubKubeAPIServerConfig:
        serverVerificationStrategy: UseSystemTruststore
  3. Apply the resource by running the following command on the hub cluster. Replace <filename> with the name of your file:

    oc apply -f <filename>
  4. If the local-cluster state does not recover in one minute, export and decode the import.yaml file by running the following command on the hub cluster:

    oc get secret local-cluster-import -n local-cluster -o jsonpath={.data.import\.yaml} | base64 --decode > import.yaml
  5. Apply the file by running the following command on the hub cluster:

    oc apply -f import.yaml
1.4.3.1.2. installNamespace field can only have one value

When enabling the managed-serviceaccount add-on, the installNamespace field in the ManagedClusterAddOn resource must have open-cluster-management-agent-addon as the value. Other values are ignored. The managed-serviceaccount add-on agent is always deployed in the open-cluster-management-agent-addon namespace on the managed cluster.
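
The following ManagedClusterAddOn sketch shows the only supported value; <cluster_name> is a placeholder for the managed cluster namespace on the hub cluster:

apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: managed-serviceaccount
  namespace: <cluster_name>
spec:
  installNamespace: open-cluster-management-agent-addon   # other values are ignored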

1.4.3.2. Cluster

Learn about known issues and limitations for Cluster lifecycle with multicluster engine operator, such as issues with creating, discovering, importing, and removing clusters, and more cluster management issues for multicluster engine operator.

1.4.3.2.1. Limitation with nmstate

Develop quicker by configuring copy and paste features. To configure the copy-from-mac feature in the assisted-installer, you must add the mac-address to the nmstate definition interface and the mac-mapping interface. The mac-mapping interface is provided outside the nmstate definition interface. As a result, you must provide the same mac-address twice.


1.4.3.2.2. Deleting a managed cluster set does not automatically remove its label

After you delete a ManagedClusterSet, the label that is added to each managed cluster that associates the cluster to the cluster set is not automatically removed. Manually remove the label from each of the managed clusters that were included in the deleted managed cluster set. The label resembles the following example: cluster.open-cluster-management.io/clusterset:<ManagedClusterSet Name>.
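
For example, the following command removes the label from one managed cluster; <cluster_name> is a placeholder:

# The trailing hyphen removes the cluster set label from the managed cluster.
oc label managedcluster <cluster_name> cluster.open-cluster-management.io/clusterset-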

1.4.3.2.3. ClusterClaim error

If you create a Hive ClusterClaim against a ClusterPool and manually set the ClusterClaim spec.lifetime field to an invalid Go time value, the product stops fulfilling and reconciling all ClusterClaims, not just the malformed claim.

You see the following error in the clusterclaim-controller pod logs, which is a specific example with the PoolName and invalid lifetime included:

E0203 07:10:38.266841       1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:224: Failed to watch *v1.ClusterClaim: failed to list *v1.ClusterClaim: v1.ClusterClaimList.Items: []v1.ClusterClaim: v1.ClusterClaim.v1.ClusterClaim.Spec: v1.ClusterClaimSpec.Lifetime: unmarshalerDecoder: time: unknown unit "w" in duration "1w", error found in #10 byte of ...|time":"1w"}},{"apiVe|..., bigger context ...|clusterPoolName":"policy-aas-hubs","lifetime":"1w"}},{"apiVersion":"hive.openshift.io/v1","kind":"Cl|...

You can delete the invalid claim.

If the malformed claim is deleted, claims begin successfully reconciling again without any further interaction.
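
The "1w" value in the example fails because Go durations do not support the w unit. The following ClusterClaim sketch uses a valid lifetime expressed in hours; the claim, namespace, and pool names are placeholders:

apiVersion: hive.openshift.io/v1
kind: ClusterClaim
metadata:
  name: <claim_name>
  namespace: <clusterpool_namespace>   # a claim is created in the namespace of its ClusterPool
spec:
  clusterPoolName: <clusterpool_name>
  lifetime: 168h                       # one week expressed as a valid Go duration; "1w" is not valid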

1.4.3.2.4. The product channel out of sync with provisioned cluster

The clusterimageset is in the fast channel, but the provisioned cluster is in the stable channel. Currently, the product does not sync the channel to the provisioned OpenShift Container Platform cluster.

Change to the right channel in the OpenShift Container Platform console. Click Administration > Cluster Settings > Details > Channel.

1.4.3.2.5. Selecting a subnet is required when creating an on-premises cluster

When you create an on-premises cluster by using the console, you must select an available subnet for your cluster, even though it is not marked as a required field.

1.4.3.2.6. Cluster provision with Ansible automation fails in proxy environment

An Automation template that is configured to automatically provision a managed cluster might fail when both of the following conditions are met:

  • The hub cluster has cluster-wide proxy enabled.
  • The Ansible Automation Platform can only be reached through the proxy.
1.4.3.2.7. Cannot delete managed cluster namespace manually

You cannot delete the namespace of a managed cluster manually. The managed cluster namespace is automatically deleted after the managed cluster is detached. If you delete the managed cluster namespace manually before the managed cluster is detached, the managed cluster shows a continuous terminating status after you delete the managed cluster. To delete this terminating managed cluster, manually remove the finalizers from the managed cluster that you detached.
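
For example, the following commands are a minimal sketch for inspecting and clearing the finalizers on the detached managed cluster; <cluster_name> is a placeholder:

# Inspect the remaining finalizers on the detached managed cluster.
oc get managedcluster <cluster_name> -o jsonpath='{.metadata.finalizers}'

# Clear the finalizers so that the terminating managed cluster can be removed.
oc patch managedcluster <cluster_name> --type=merge -p '{"metadata":{"finalizers":[]}}'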

1.4.3.2.8. Automatic secret updates for provisioned clusters is not supported

When you change your cloud provider access key on the cloud provider side, you also need to update the corresponding credential for this cloud provider on the console of multicluster engine operator. This is required when your credentials expire on the cloud provider where the managed cluster is hosted and you try to delete the managed cluster.

1.4.3.2.9. Process to destroy a cluster does not complete

When you destroy a managed cluster, the status continues to display Destroying after one hour, and the cluster is not destroyed. To resolve this issue, complete the following steps:

  1. Manually ensure that there are no orphaned resources on your cloud, and that all of the provider resources that are associated with the managed cluster are cleaned up.
  2. Open the ClusterDeployment information for the managed cluster that is being removed by entering the following command:

    oc edit clusterdeployment/<mycluster> -n <namespace>

    Replace mycluster with the name of the managed cluster that you are destroying.

    Replace namespace with the namespace of the managed cluster.

  3. Remove the hive.openshift.io/deprovision finalizer to forcefully stop the process that is trying to clean up the cluster resources in the cloud. A hedged patch command follows this procedure.
  4. Save your changes and verify that the ClusterDeployment is gone.
  5. Manually remove the namespace of the managed cluster by running the following command:

    oc delete ns <namespace>

    Replace namespace with the namespace of the managed cluster.
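
If you prefer not to edit the resource interactively in step 3, the following patch is a minimal sketch that removes all finalizers from the ClusterDeployment, which forcefully stops any remaining cloud cleanup. Run it only after you confirm the cleanup in step 1:

oc patch clusterdeployment <mycluster> -n <namespace> --type=merge -p '{"metadata":{"finalizers":null}}'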

1.4.3.2.10. Cannot upgrade OpenShift Container Platform managed clusters on OpenShift Container Platform Dedicated with the console

You cannot use the Red Hat Advanced Cluster Management console to upgrade OpenShift Container Platform managed clusters that are in the OpenShift Container Platform Dedicated environment.

1.4.3.2.12. Non-OpenShift Container Platform managed clusters require ManagedServiceAccount or LoadBalancer for pod logs

The ManagedServiceAccount and cluster proxy add-ons are enabled by default in Red Hat Advanced Cluster Management version 2.10 and newer. If the add-ons are disabled after upgrading, you must enable the ManagedServiceAccount and cluster proxy add-ons manually to use the pod log feature on non-OpenShift Container Platform managed clusters.

See ManagedServiceAccount add-on to learn how to enable ManagedServiceAccount and see Using cluster proxy add-ons to learn how to enable a cluster proxy add-on.

1.4.3.2.13. OpenShift Container Platform 4.10.z does not support hosted control plane clusters with proxy configuration

When you create a hosting service cluster with a cluster-wide proxy configuration on OpenShift Container Platform 4.10.z, the nodeip-configuration.service service does not start on the worker nodes.

1.4.3.2.14. Client cannot reach iPXE script

iPXE is an open source network boot firmware. See iPXE for more details.

When booting a node, the URL length limitation in some DHCP servers cuts off the ipxeScript URL in the InfraEnv custom resource definition, resulting in the following error message in the console:

no bootable devices

To work around the issue, complete the following steps:

  1. Apply the InfraEnv custom resource definition when using an assisted installation to expose the bootArtifacts, which might resemble the following file:

    status:
      agentLabelSelector:
        matchLabels:
          infraenvs.agent-install.openshift.io: qe2
      bootArtifacts:
        initrd: https://assisted-image-service-multicluster-engine.redhat.com/images/0000/pxe-initrd?api_key=0000000&arch=x86_64&version=4.11
        ipxeScript: https://assisted-service-multicluster-engine.redhat.com/api/assisted-install/v2/infra-envs/00000/downloads/files?api_key=000000000&file_name=ipxe-script
        kernel: https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/4.12/latest/rhcos-live-kernel-x86_64
        rootfs: https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/4.12/latest/rhcos-live-rootfs.x86_64.img
  2. Create a proxy server to expose the bootArtifacts with short URLs.
  3. Copy the bootArtifacts and add them to the proxy by running the following commands:

    for artifact in $(oc get infraenv qe2 -o jsonpath="{.status.bootArtifacts}" | jq -r 'keys[]')
    do
      curl -k "$(oc get infraenv qe2 -o jsonpath="{.status.bootArtifacts.${artifact}}")" -o "$artifact"
    done
  4. Add the ipxeScript artifact proxy URL to the bootp parameter in libvirt.xml.
1.4.3.2.15. Cannot delete ClusterDeployment after upgrading Red Hat Advanced Cluster Management

If you are using the removed BareMetalAssets API in Red Hat Advanced Cluster Management 2.6, the ClusterDeployment cannot be deleted after upgrading to Red Hat Advanced Cluster Management 2.7 because the BareMetalAssets API is bound to the ClusterDeployment.

To work around the issue, run the following command to remove the finalizers before upgrading to Red Hat Advanced Cluster Management 2.7:

oc patch clusterdeployment <clusterdeployment-name> -p '{"metadata":{"finalizers":null}}' --type=merge
1.4.3.2.16. Managed cluster stuck in Pending status after deployment

The converged flow is the default process of provisioning. When you use the BareMetalHost resource for the Bare Metal Operator (BMO) to connect your host to a live ISO, the Ironic Python Agent does the following actions:

  • It runs the steps in the Bare Metal installer-provisioned-infrastructure.
  • It starts the Assisted Installer agent, and the agent handles the rest of the install and provisioning process.

If the Assisted Installer agent starts slowly and you deploy a managed cluster, the managed cluster might become stuck in the Pending status and not have any agent resources. You can work around the issue by disabling the converged flow.

Important: When you disable the converged flow, only the Assisted Installer agent runs in the live ISO, reducing the number of open ports and disabling any features you enabled with the Ironic Python Agent, including the following:

  • Pre-provisioning disk cleaning
  • iPXE boot firmware
  • BIOS configuration

To decide what port numbers you want to enable or disable without disabling the converged flow, see Network configuration.

To disable the converged flow, complete the following steps:

  1. Create the following ConfigMap on the hub cluster:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: my-assisted-service-config
      namespace: multicluster-engine
    data:
      ALLOW_CONVERGED_FLOW: "false"

    When you set the parameter value to "false", you also disable any features enabled by the Ironic Python Agent.
  2. Apply the ConfigMap by annotating the AgentServiceConfig resource to reference it. Run the following command:

    oc annotate --overwrite AgentServiceConfig agent unsupported.agent-install.openshift.io/assisted-service-configmap=my-assisted-service-config
1.4.3.2.17. ManagedClusterSet API specification limitation

The selectorType: LabelSelector setting is not supported when using the Clustersets API. The selectorType: ExclusiveClusterSetLabel setting is supported.
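
The following ManagedClusterSet sketch shows the supported setting; the cluster set name is a placeholder:

apiVersion: cluster.open-cluster-management.io/v1beta2
kind: ManagedClusterSet
metadata:
  name: <clusterset_name>
spec:
  clusterSelector:
    selectorType: ExclusiveClusterSetLabel   # LabelSelector is not supported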

1.4.3.2.18. The Cluster curator does not support OpenShift Container Platform Dedicated clusters

When you upgrade an OpenShift Container Platform Dedicated cluster by using the ClusterCurator resource, the upgrade fails because the Cluster curator does not support OpenShift Container Platform Dedicated clusters.

1.4.3.2.19. Custom ingress domain is not applied correctly

You can specify a custom ingress domain by using the ClusterDeployment resource while installing a managed cluster, but the change is only applied after the installation by using the SyncSet resource. As a result, the spec field in the clusterdeployment.yaml file displays the custom ingress domain you specified, but the status still displays the default domain.

1.4.3.2.20. ManagedClusterAddon status becomes stuck

If you define configurations in the ManagedClusterAddon to override some configurations in the ClusterManagementAddon, the ManagedClusterAddon might become stuck at the following status:

progressing... mca and work configs mismatch

When you check the ManagedClusterAddon status, a part of the configurations has an empty spec hash, even if the configurations exist. See the following example:

status:
  conditions:
  - lastTransitionTime: "2024-09-09T16:08:42Z"
    message: progressing... mca and work configs mismatch
    reason: Progressing
    status: "True"
    type: Progressing
...
  configReferences:
  - desiredConfig:
      name: deploy-config
      namespace: open-cluster-management-hub
      specHash: b81380f1f1a1920388d90859a5d51f5521cecd77752755ba05ece495f551ebd0
    group: addon.open-cluster-management.io
    lastObservedGeneration: 1
    name: deploy-config
    namespace: open-cluster-management-hub
    resource: addondeploymentconfigs
  - desiredConfig:
      name: cluster-proxy
      specHash: ""
    group: proxy.open-cluster-management.io
    lastObservedGeneration: 1
    name: cluster-proxy
    resource: managedproxyconfigurations

To resolve the issue, delete the ManagedClusterAddon by running the following command to reinstall and recover the ManagedClusterAddon. Replace <cluster-name> with the ManagedClusterAddon namespace. Replace <addon-name> with the ManagedClusterAddon name:

oc -n <cluster-name> delete managedclusteraddon <addon-name>

1.4.3.3. Central infrastructure management

1.4.3.3.1. Cluster provisioning with infrastructure operator for Red Hat OpenShift fails

When creating OpenShift Container Platform clusters by using the infrastructure operator for Red Hat OpenShift, the file name of the ISO image might be too long. The long image name causes the image provisioning and the cluster provisioning to fail. To determine if this is the problem, complete the following steps:

  1. View the bare metal host information for the cluster that you are provisioning by running the following command:

    oc get bmh -n <cluster_provisioning_namespace>
  2. Run the describe command to view the error information:

    oc describe bmh -n <cluster_provisioning_namespace> <bmh_name>
  3. An error similar to the following example indicates that the length of the filename is the problem:

    Status:
      Error Count:    1
      Error Message:  Image provisioning failed: ... [Errno 36] File name too long ...

If this problem occurs, it is typically on the following versions of OpenShift Container Platform, because the infrastructure operator for Red Hat OpenShift was not using the image service:

  • 4.8.17 and earlier
  • 4.9.6 and earlier

To avoid this error, upgrade your OpenShift Container Platform to version 4.8.18 or later, or 4.9.7 or later.

1.4.3.3.2. Cannot use host inventory to boot with the discovery image and add hosts automatically

You cannot use a host inventory, or InfraEnv custom resource, to both boot with the discovery image and add hosts automatically. If you used your previous InfraEnv resource for the BareMetalHost resource, and you want to boot the image yourself, you can work around the issue by creating a new InfraEnv resource.

1.4.3.3.3. A single-node OpenShift cluster installation requires a matching OpenShift Container Platform with infrastructure operator for Red Hat OpenShift

If you want to install a single-node OpenShift cluster with a Red Hat OpenShift Container Platform version earlier than 4.16, your InfraEnv custom resource and your booted host must use the same OpenShift Container Platform version that you are using to install the single-node OpenShift cluster. The installation fails if the versions do not match.

To work around the issue, edit your InfraEnv resource before you boot a host with the Discovery ISO, and include the following content:

apiVersion: agent-install.openshift.io/v1beta1
kind: InfraEnv
spec:
  osImageVersion: 4.15

The osImageVersion field must match the Red Hat OpenShift Container Platform cluster version that you want to install.

1.4.3.3.4. tolerations and nodeSelector settings do not affect the managed-serviceaccount agent

The tolerations and nodeSelector settings configured on the MultiClusterEngine and MultiClusterHub resources do not affect the managed-serviceaccount agent deployed on the local cluster. The managed-serviceaccount add-on is not always required on the local cluster.

If the managed-serviceaccount add-on is required, you can work around the issue by completing the following steps:

  1. Create the addonDeploymentConfig custom resource.
  2. Set the tolerations and nodeSelector values for the local cluster and managed-serviceaccount agent.
  3. Update the managed-serviceaccount ManagedClusterAddon in the local cluster namespace to use the addonDeploymentConfig custom resource you created.

See Configuring nodeSelectors and tolerations for klusterlet add-ons to learn more about how to use the addonDeploymentConfig custom resource to configure tolerations and nodeSelector for add-ons.
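
A minimal sketch of steps 1 and 2 follows. The resource name and the node placement values are assumptions; adjust them for your environment, and then reference the resource from the managed-serviceaccount ManagedClusterAddon in the local cluster namespace, as described in step 3:

apiVersion: addon.open-cluster-management.io/v1alpha1
kind: AddOnDeploymentConfig
metadata:
  name: managed-serviceaccount-placement   # assumed name
  namespace: local-cluster
spec:
  nodePlacement:
    nodeSelector:
      node-role.kubernetes.io/infra: ""    # assumed node selector
    tolerations:
    - key: node-role.kubernetes.io/infra
      operator: Exists
      effect: NoSchedule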

1.4.4. Deprecations and removals for Cluster lifecycle with multicluster engine operator

Learn when parts of the product are deprecated or removed from multicluster engine operator. Consider the alternative actions in the Recommended action and details, which display in the tables for the current release and for two prior releases. Tables are removed if no entries are added for that section this release.

Deprecated: multicluster engine operator 2.2 and earlier versions are no longer supported. The documentation might remain available, but without any Errata or other updates.

Best practice: Upgrade to the most recent version.

1.4.4.1. API deprecations and removals

multicluster engine operator follows the Kubernetes deprecation guidelines for APIs. See the Kubernetes Deprecation Policy for more details about that policy. multicluster engine operator APIs are only deprecated or removed outside of the following timelines:

  • All V1 APIs are generally available and supported for 12 months or three releases, whichever is greater. V1 APIs are not removed, but can be deprecated outside of that time limit.
  • All beta APIs are generally available for nine months or three releases, whichever is greater. Beta APIs are not removed outside of that time limit.
  • All alpha APIs are not required to be supported, but might be listed as deprecated or removed if it benefits users.
1.4.4.1.1. API deprecations
| Product or category | Affected item | Version | Recommended action | More details and links |
| --- | --- | --- | --- | --- |
| ManagedServiceAccount | The v1alpha1 API is upgraded to v1beta1 because v1alpha1 is deprecated. | 2.4 | Use v1beta1. | None |
| KlusterletConfig | The hubKubeAPIServerProxyConfig field is deprecated in the KlusterletConfig spec. | 2.7 | Use the hubKubeAPIServerConfig.proxyURL and hubKubeAPIServerConfig.trustedCABundles fields. | None |
| KlusterletConfig | The hubKubeAPIServerURL field is deprecated in the KlusterletConfig spec. | 2.7 | Use the hubKubeAPIServerConfig.url field. | None |
| KlusterletConfig | The hubKubeAPIServerCABundle field is deprecated in the KlusterletConfig spec. | 2.7 | Use the hubKubeAPIServerConfig.serverVerificationStrategy and hubKubeAPIServerConfig.trustedCABundles fields. | None |
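
The following KlusterletConfig sketch shows the replacement fields from the previous table together; the URL and proxy values are illustrative:

apiVersion: config.open-cluster-management.io/v1alpha1
kind: KlusterletConfig
metadata:
  name: global
spec:
  hubKubeAPIServerConfig:
    url: https://api.example.com:6443          # replaces the deprecated hubKubeAPIServerURL field
    proxyURL: https://proxy.example.com:3128   # replaces the deprecated hubKubeAPIServerProxyConfig field
    serverVerificationStrategy: UseSystemTruststore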

1.4.4.2. Removals

A removed item is typically a function that was deprecated in previous releases and is no longer available in the product. You must use alternatives for the removed function. Consider the alternative actions in the Recommended action and details that are provided in the following table:

| Product or category | Affected item | Version | Recommended action | More details and links |
| --- | --- | --- | --- | --- |
| Cluster lifecycle | Create cluster on Red Hat Virtualization | 2.6 | None | None |
| Cluster lifecycle | Klusterlet Operator Lifecycle Manager Operator | 2.6 | None | None |

1.5. Installing and upgrading multicluster engine operator

The multicluster engine operator is a software operator that enhances cluster fleet management. The multicluster engine operator supports Red Hat OpenShift Container Platform and Kubernetes cluster lifecycle management across clouds and data centers.

The documentation references the earliest supported OpenShift Container Platform version, unless a specific component or function is introduced and tested only on a more recent version of OpenShift Container Platform.

For full support information, see the multicluster engine operator Support matrix. For life cycle information, see Red Hat OpenShift Container Platform Life Cycle policy.

Important: If you are using Red Hat Advanced Cluster Management, then multicluster engine for Kubernetes operator is already installed on the cluster.

Deprecated: multicluster engine operator 2.2 and earlier versions are no longer supported. The documentation might remain available, but without any Errata or other updates.

Best practice: Upgrade to the most recent version.

See the following documentation:

1.5.1. Installing while connected online

The multicluster engine operator is installed with Operator Lifecycle Manager, which manages the installation, upgrade, and removal of the components that encompass the multicluster engine operator.

Required access: Cluster administrator

Important:

  • For the OpenShift Container Platform Dedicated environment, you must have cluster-admin permissions. By default, the dedicated-admin role does not have the required permissions to create namespaces in the OpenShift Container Platform Dedicated environment.
  • By default, the multicluster engine operator components are installed on worker nodes of your OpenShift Container Platform cluster without any additional configuration. You can install multicluster engine operator onto worker nodes by using the OpenShift Container Platform OperatorHub web console interface, or by using the OpenShift Container Platform CLI.
  • If you have configured your OpenShift Container Platform cluster with infrastructure nodes, you can install multicluster engine operator onto those infrastructure nodes by using the OpenShift Container Platform CLI with additional resource parameters. See the Installing multicluster engine on infrastructure nodes section for those details.
  • If you plan to import Kubernetes clusters that were not created by OpenShift Container Platform or multicluster engine for Kubernetes operator, you will need to configure an image pull secret. For information on how to configure an image pull secret and other advanced configurations, see options in the Advanced configuration section of this documentation.

1.5.1.1. Prerequisites

Before you install multicluster engine for Kubernetes operator, see the following requirements:

  • Your Red Hat OpenShift Container Platform cluster must have access to the multicluster engine operator in the OperatorHub catalog from the OpenShift Container Platform console.
  • You need access to catalog.redhat.com.
  • A supported version of OpenShift Container Platform must be deployed in your environment, and you must be logged in with the OpenShift Container Platform CLI. See the following install documentation:

  • Your OpenShift Container Platform command line interface (CLI) must be configured to run oc commands. See Getting started with the CLI for information about installing and configuring the OpenShift Container Platform CLI.
  • Your OpenShift Container Platform permissions must allow you to create a namespace.
  • You must have an Internet connection to access the dependencies for the operator.
  • To install in an OpenShift Container Platform Dedicated environment, see the following:

    • You must have the OpenShift Container Platform Dedicated environment configured and running.
    • You must have cluster-admin authority to the OpenShift Container Platform Dedicated environment where you are installing the engine.
  • If you plan to create managed clusters by using the Assisted Installer that is provided with Red Hat OpenShift Container Platform, see Preparing to install with the Assisted Installer topic in the OpenShift Container Platform documentation for the requirements.

1.5.1.2. Confirm your OpenShift Container Platform installation

You must have a supported OpenShift Container Platform version, including the registry and storage services, installed and working. For more information about installing OpenShift Container Platform, see the OpenShift Container Platform documentation.

  1. Verify that multicluster engine operator is not already installed on your OpenShift Container Platform cluster. The multicluster engine operator allows only one installation on each OpenShift Container Platform cluster. Continue with the following steps if there is no installation.
  2. To ensure that the OpenShift Container Platform cluster is set up correctly, access the OpenShift Container Platform web console with the following command:

    kubectl -n openshift-console get route console

    See the following example output:

    console   console-openshift-console.apps.new-coral.purple-chesterfield.com   console   https   reencrypt/Redirect   None
  3. Open the URL in your browser and check the result. If the console URL displays console-openshift-console.router.default.svc.cluster.local, set the value for openshift_master_default_subdomain when you install OpenShift Container Platform. See the following example of a URL: https://console-openshift-console.apps.new-coral.purple-chesterfield.com.

You can proceed to install multicluster engine operator.

1.5.1.3. Installing from the OperatorHub web console interface

Best practice: From the Administrator view in your OpenShift Container Platform navigation, install from the OperatorHub web console interface that is provided with OpenShift Container Platform.

  1. Select Operators > OperatorHub to access the list of available operators, and select multicluster engine for Kubernetes operator.
  2. Click Install.
  3. On the Operator Installation page, select the options for your installation:

    • Namespace:

      • The multicluster engine operator must be installed in its own namespace, or project.
      • By default, the OperatorHub console installation process creates a namespace titled multicluster-engine. Best practice: Continue to use the multicluster-engine namespace if it is available.
      • If there is already a namespace named multicluster-engine, select a different namespace.
    • Channel: The channel that you select corresponds to the release that you are installing. When you select the channel, it installs the identified release and establishes that future Errata updates within that release are obtained.
    • Approval strategy: The approval strategy identifies the human interaction that is required for applying updates to the channel or release to which you subscribed.

      • Select Automatic, which is selected by default, to ensure any updates within that release are automatically applied.
      • Select Manual to receive a notification when an update is available. If you have concerns about when the updates are applied, this might be best practice for you.

    Note: To upgrade to the next minor release, you must return to the OperatorHub page and select a new channel for the more current release.

  4. Select Install to apply your changes and create the operator.
  5. See the following process to create the MultiClusterEngine custom resource.

    1. In the OpenShift Container Platform console navigation, select Installed Operators > multicluster engine for Kubernetes.
    2. Select the MultiCluster Engine tab.
    3. Select Create MultiClusterEngine.
    4. Update the default values in the YAML file. See options in the MultiClusterEngine advanced configuration section of the documentation.

      • The following example shows the default template that you can copy into the editor:
      apiVersion: multicluster.openshift.io/v1
      kind: MultiClusterEngine
      metadata:
        name: multiclusterengine
      spec: {}
  6. Select Create to initialize the custom resource. It can take up to 10 minutes for the multicluster engine operator to build and start.

    After the MultiClusterEngine resource is created, the status for the resource is Available on the MultiCluster Engine tab.

1.5.1.4. Installing from the OpenShift Container Platform CLI

  1. Create a namespace for the multicluster engine operator where the operator requirements are contained. Run the following command, where namespace is the name for your multicluster engine for Kubernetes operator namespace. The value for namespace might be referred to as Project in the OpenShift Container Platform environment:

    oc create namespace <namespace>
  2. Switch your project namespace to the one that you created. Replace namespace with the name of the multicluster engine for Kubernetes operator namespace that you created in step 1.

    oc project <namespace>
  3. Create a YAML file to configure an OperatorGroup resource. Each namespace can have only one operator group. Replace default with the name of your operator group. Replace namespace with the name of your project namespace. See the following example:

    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: <default>
      namespace: <namespace>
    spec:
      targetNamespaces:
      - <namespace>
  4. Run the following command to create the OperatorGroup resource. Replace operator-group with the name of the operator group YAML file that you created:

    oc apply -f <path-to-file>/<operator-group>.yaml
  5. Create a YAML file to configure an OpenShift Container Platform Subscription. Your file appears similar to the following example:

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: multicluster-engine
    spec:
      sourceNamespace: openshift-marketplace
      source: redhat-operators
      channel: stable-2.7
      installPlanApproval: Automatic
      name: multicluster-engine

    Note: For installing the multicluster engine for Kubernetes operator on infrastructure nodes, see the Operator Lifecycle Manager Subscription additional configuration section.

  6. Run the following command to create the OpenShift Container Platform Subscription. Replace subscription with the name of the subscription file that you created:

    oc apply -f <path-to-file>/<subscription>.yaml
  7. Create a YAML file to configure the MultiClusterEngine custom resource. Your default template should look similar to the following example:

    apiVersion: multicluster.openshift.io/v1
    kind: MultiClusterEngine
    metadata:
      name: multiclusterengine
    spec: {}

    Note: For installing the multicluster engine operator on infrastructure nodes, see the MultiClusterEngine custom resource additional configuration section:

  8. Run the following command to create the MultiClusterEngine custom resource. Replace custom-resource with the name of your custom resource file:

    oc apply -f <path-to-file>/<custom-resource>.yaml

    If this step fails with the following error, the resources are still being created and applied. Run the command again in a few minutes when the resources are created:

    error: unable to recognize "./mce.yaml": no matches for kind "MultiClusterEngine" in version "operator.multicluster-engine.io/v1"
  9. Run the following command to get the custom resource. It can take up to 10 minutes for the MultiClusterEngine custom resource status to display as Available in the status.phase field after you run the following command:

    oc get mce -o=jsonpath='{.items[0].status.phase}'

If you are reinstalling the multicluster engine operator and the pods do not start, see Troubleshooting reinstallation failure for steps to work around this problem.

Notes:

  • A ServiceAccount with a ClusterRoleBinding automatically gives cluster administrator privileges to multicluster engine operator and to any user credentials with access to the namespace where you install multicluster engine operator.

1.5.1.5. Installing on infrastructure nodes

An OpenShift Container Platform cluster can be configured to contain infrastructure nodes for running approved management components. Running components on infrastructure nodes avoids allocating OpenShift Container Platform subscription quota for the nodes that are running those management components.

After adding infrastructure nodes to your OpenShift Container Platform cluster, follow the Installing from the OpenShift Container Platform CLI instructions and add the following configurations to the Operator Lifecycle Manager Subscription and MultiClusterEngine custom resource.

1.5.1.5.1. Add infrastructure nodes to the OpenShift Container Platform cluster

Follow the procedures that are described in Creating infrastructure machine sets in the OpenShift Container Platform documentation. Infrastructure nodes are configured with a Kubernetes taint and label to keep non-management workloads from running on them.

To be compatible with the infrastructure node enablement provided by multicluster engine operator, ensure your infrastructure nodes have the following taint and label applied:

metadata:
  labels:
    node-role.kubernetes.io/infra: ""
spec:
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/infra
1.5.1.5.2. Operator Lifecycle Manager Subscription additional configuration

Add the following additional configuration before applying the Operator Lifecycle Manager Subscription:

spec:
  config:
    nodeSelector:
      node-role.kubernetes.io/infra: ""
    tolerations:
    - key: node-role.kubernetes.io/infra
      effect: NoSchedule
      operator: Exists
1.5.1.5.3. MultiClusterEngine custom resource additional configuration

Add the following additional configuration before applying the MultiClusterEngine custom resource:

spec:
  nodeSelector:
    node-role.kubernetes.io/infra: ""

1.5.2. Install on disconnected networks

You might need to install the multicluster engine operator on Red Hat OpenShift Container Platform clusters that are not connected to the Internet. The procedure to install on a disconnected engine requires some of the same steps as the connected installation.

Important: You must install multicluster engine operator on a cluster that does not have Red Hat Advanced Cluster Management for Kubernetes earlier than 2.5 installed. The multicluster engine operator cannot co-exist with Red Hat Advanced Cluster Management for Kubernetes on versions earlier than 2.5 because they provide some of the same management components. It is recommended that you install multicluster engine operator on a cluster that has never previously installed Red Hat Advanced Cluster Management. If you are using Red Hat Advanced Cluster Management for Kubernetes at version 2.5.0 or later then multicluster engine operator is already installed on the cluster with it.

You must download copies of the packages to access them during the installation, rather than accessing them directly from the network during the installation.

1.5.2.1. Prerequisites

You must meet the following requirements before you install the multicluster engine operator:

  • A supported OpenShift Container Platform version must be deployed in your environment, and you must be logged in with the command line interface (CLI).
  • You need access to catalog.redhat.com.

    Note: For managing bare metal clusters, you need a supported OpenShift Container Platform version.

    See Installing in the OpenShift Container Platform documentation.

  • Your Red Hat OpenShift Container Platform permissions must allow you to create a namespace.
  • You must have a workstation with Internet connection to download the dependencies for the operator.

1.5.2.2. Confirm your OpenShift Container Platform installation

  • You must have a supported OpenShift Container Platform version, including the registry and storage services, installed and working in your cluster. For information about OpenShift Container Platform, see OpenShift Container Platform documentation.
  • If you are connected, verify that you can access the OpenShift Container Platform web console by running the following command:

    oc -n openshift-console get route console

    See the following example output:

    console console-openshift-console.apps.new-coral.purple-chesterfield.com
    console   https   reencrypt/Redirect     None

    The console URL in this example is: https://console-openshift-console.apps.new-coral.purple-chesterfield.com. Open the URL in your browser and check the result.

    If the console URL displays console-openshift-console.router.default.svc.cluster.local, set the value for openshift_master_default_subdomain when you install OpenShift Container Platform.

1.5.2.3. Installing in a disconnected environment

Important: You need to download the required images to a mirroring registry to install the operators in a disconnected environment. Without the download, you might receive ImagePullBackOff errors during your deployment.

Follow these steps to install the multicluster engine operator in a disconnected environment:

  1. Create a mirror registry. If you do not already have a mirror registry, create one by completing the procedure in the Disconnected installation mirroring topic of the Red Hat OpenShift Container Platform documentation.

    If you already have a mirror registry, you can configure and use your existing one.

  2. Note: For bare metal only, you need to provide the certificate information for the disconnected registry in your install-config.yaml file. To access the image in a protected disconnected registry, you must provide the certificate information so the multicluster engine operator can access the registry.

    1. Copy the certificate information from the registry.
    2. Open the install-config.yaml file in an editor.
    3. Find the entry for additionalTrustBundle: |.
    4. Add the certificate information after the additionalTrustBundle line. The resulting content should look similar to the following example:

      additionalTrustBundle: |
        -----BEGIN CERTIFICATE-----
        certificate_content
        -----END CERTIFICATE-----
      sshKey: >-
  3. Important: Additional mirrors for disconnected image registries are needed if the following Governance policies are required:

    • Container Security Operator policy: Locate the images in the registry.redhat.io/quay source.
    • Compliance Operator policy: Locate the images in the registry.redhat.io/compliance source.
    • Gatekeeper Operator policy: Locate the images in the registry.redhat.io/gatekeeper source.

      See the following example of mirrors lists for all three operators:

        - mirrors:
          - <your_registry>/rhacm2
          source: registry.redhat.io/rhacm2
        - mirrors:
          - <your_registry>/quay
          source: registry.redhat.io/quay
        - mirrors:
          - <your_registry>/compliance
          source: registry.redhat.io/compliance
  4. Save the install-config.yaml file.
  5. Create a YAML file that contains the ImageContentSourcePolicy with the name mce-policy.yaml. Note: If you modify this on a running cluster, it causes a rolling restart of all nodes.

    apiVersion: operator.openshift.io/v1alpha1
    kind: ImageContentSourcePolicy
    metadata:
      name: mce-repo
    spec:
      repositoryDigestMirrors:
      - mirrors:
        - mirror.registry.com:5000/multicluster-engine
        source: registry.redhat.io/multicluster-engine
  6. Apply the ImageContentSourcePolicy file by entering the following command:

    oc apply -f mce-policy.yaml
  7. Enable the disconnected Operator Lifecycle Manager Red Hat Operators and Community Operators.

    The multicluster engine operator is included in the Operator Lifecycle Manager Red Hat Operators catalog.

  8. Configure the disconnected Operator Lifecycle Manager for the Red Hat Operators catalog. Follow the steps in the Using Operator Lifecycle Manager on restricted networks topic of the Red Hat OpenShift Container Platform documentation.
  9. Continue to install the multicluster engine operator for Kubernetes from the Operator Lifecycle Manager catalog.

See Installing while connected online for the required steps.
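
As an optional check after the disconnected installation, you can verify that the mirror policy and the operator pods are present. The resource name and namespace in this sketch match the earlier examples; adjust them if yours differ:

# Confirm the ImageContentSourcePolicy from the earlier step exists
oc get imagecontentsourcepolicy mce-repo
# Confirm the operator pods are running in the installation namespace
oc get pods -n multicluster-engine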

1.5.3. Advanced configuration

The multicluster engine operator is installed using an operator that deploys all of the required components. The multicluster engine operator can be further configured during or after installation. Learn more about the advanced configuration options.

1.5.3.1. Deployed components

Add one or more of the following attributes to the MultiClusterEngine custom resource:

Table 1.3. Table list of the deployed components

  • assisted-service: Installs OpenShift Container Platform with minimal infrastructure prerequisites and comprehensive pre-flight validations. Enabled by default: True
  • cluster-lifecycle: Provides cluster management capabilities for OpenShift Container Platform and Kubernetes hub clusters. Enabled by default: True
  • cluster-manager: Manages various cluster-related operations within the cluster environment. Enabled by default: True
  • cluster-proxy-addon: Automates the installation of apiserver-network-proxy on both hub and managed clusters using a reverse proxy server. Enabled by default: True
  • console-mce: Enables the multicluster engine operator console plug-in. Enabled by default: True
  • discovery: Discovers and identifies new clusters within the OpenShift Cluster Manager. Enabled by default: True
  • hive: Provisions and performs initial configuration of OpenShift Container Platform clusters. Enabled by default: True
  • hypershift: Hosts OpenShift Container Platform control planes at scale with cost and time efficiency, and cross-cloud portability. Enabled by default: True
  • hypershift-local-hosting: Enables local hosting capabilities within the local cluster environment. Enabled by default: True
  • local-cluster: Enables the import and self-management of the local hub cluster where the multicluster engine operator is deployed. Enabled by default: True
  • managedserviceaccount: Synchronizes service accounts to managed clusters and collects tokens as secret resources back to the hub cluster. Enabled by default: False
  • server-foundation: Provides foundational services for server-side operations within the multicluster environment. Enabled by default: True

When you install multicluster engine operator on the cluster, not all of the listed components are enabled by default.

You can further configure multicluster engine operator during or after installation by adding one or more attributes to the MultiClusterEngine custom resource. Continue reading for information about the attributes that you can add.

1.5.3.2. Console and component configuration

The following example displays the spec.overrides default template that you can use to enable or disable the component:

apiVersion: multicluster.openshift.io/v1
kind: MultiClusterEngine
metadata:
  name: multiclusterengine
spec:
  overrides:
    components:
    - name: <name> 1
      enabled: true
  1. Replace name with the name of the component.

Alternatively, you can run the following command. Replace <multiclusterengine-name> with the name of your MultiClusterEngine resource and <name> with the name of the component:

oc patch MultiClusterEngine <multiclusterengine-name> --type=json -p='[{"op": "add", "path": "/spec/overrides/components/-","value":{"name":"<name>","enabled":true}}]'

1.5.3.3. Local-cluster enablement

By default, the cluster that is running multicluster engine operator manages itself. To install multicluster engine operator without the cluster managing itself, specify the following values in the spec.overrides.components settings in the MultiClusterEngine section:

apiVersion: multicluster.openshift.io/v1
kind: MultiClusterEngine
metadata:
  name: multiclusterengine
spec:
  overrides:
    components:
    - name: local-cluster
      enabled: false
  • The name value identifies the hub cluster as a local-cluster.
  • The enabled setting specifies whether the feature is enabled or disabled. When the value is true, the hub cluster manages itself. When the value is false, the hub cluster does not manage itself.

A hub cluster that is managed by itself is designated as the local-cluster in the list of clusters.
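
To check whether the hub cluster is managing itself, you can look for the local-cluster entry; this quick check assumes that the registration of the local cluster has completed:

# Returns the local-cluster ManagedCluster resource when self-management is enabled
oc get managedcluster local-cluster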

1.5.3.4. Custom image pull secret

If you plan to import Kubernetes clusters that were not created by OpenShift Container Platform or the multicluster engine operator, generate a secret that contains your OpenShift Container Platform pull secret information to access the entitled content from the distribution registry.

The secret requirements for OpenShift Container Platform clusters are automatically resolved by OpenShift Container Platform and multicluster engine for Kubernetes operator, so you do not have to create the secret if you are not importing other types of Kubernetes clusters to be managed.

Important: These secrets are namespace-specific, so make sure that you are in the namespace that you use for your engine.

  1. Download your OpenShift Container Platform pull secret file from cloud.redhat.com/openshift/install/pull-secret by selecting Download pull secret. Your OpenShift Container Platform pull secret is associated with your Red Hat Customer Portal ID, and is the same across all Kubernetes providers.
  2. Run the following command to create your secret:

    oc create secret generic <secret> -n <namespace> --from-file=.dockerconfigjson=<path-to-pull-secret> --type=kubernetes.io/dockerconfigjson
    • Replace secret with the name of the secret that you want to create.
    • Replace namespace with your project namespace, as the secrets are namespace-specific.
    • Replace path-to-pull-secret with the path to your OpenShift Container Platform pull secret that you downloaded.

The following example displays the spec.imagePullSecret template to use if you want to use a custom pull secret. Replace secret with the name of your pull secret:

apiVersion: multicluster.openshift.io/v1
kind: MultiClusterEngine
metadata:
  name: multiclusterengine
spec:
  imagePullSecret: <secret>

1.5.3.5. Target namespace

The operands can be installed in a designated namespace by specifying a location in the MultiClusterEngine custom resource. This namespace is created upon application of the MultiClusterEngine custom resource.

Important: If no target namespace is specified, the operator will install to the multicluster-engine namespace and will set it in the MultiClusterEngine custom resource specification.

The following example displays the spec.targetNamespace template that you can use to specify a target namespace. Replace target with the name of your destination namespace. Note: The target namespace cannot be the default namespace:

apiVersion: multicluster.openshift.io/v1
kind: MultiClusterEngine
metadata:
  name: multiclusterengine
spec:
  targetNamespace: <target>

1.5.3.6. availabilityConfig

The hub cluster has two availability settings: High and Basic. By default, the hub cluster has an availability of High, which gives hub cluster components a replicaCount of 2. This provides better support in cases of failover, but consumes more resources than the Basic availability, which gives components a replicaCount of 1.

Important: Set spec.availabilityConfig to Basic if you are using multicluster engine operator on a single-node OpenShift cluster.

The following example shows the spec.availabilityConfig template with Basic availability:

apiVersion: multicluster.openshift.io/v1
kind: MultiClusterEngine
metadata:
  name: multiclusterengine
spec:
  availabilityConfig: "Basic"
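
If the MultiClusterEngine custom resource already exists, one possible way to change the availability setting is a merge patch, similar to the component patch command shown earlier in this section. Replace <multiclusterengine-name> with the name of your resource:

oc patch MultiClusterEngine <multiclusterengine-name> --type=merge -p '{"spec":{"availabilityConfig":"Basic"}}'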

1.5.3.7. nodeSelector

You can define a set of node selectors in the MultiClusterEngine to install to specific nodes on your cluster. The following example shows spec.nodeSelector to assign pods to nodes with the label node-role.kubernetes.io/infra:

spec:
  nodeSelector:
    node-role.kubernetes.io/infra: ""

To define a set of node selectors for the Red Hat Advanced Cluster Management for Kubernetes hub cluster, see nodeSelector in the Red Hat Advanced Cluster Management documentation.

1.5.3.8. tolerations

You can define a list of tolerations to allow the MultiClusterEngine to tolerate specific taints defined on the cluster. The following example shows a spec.tolerations that matches a node-role.kubernetes.io/infra taint:

spec:
  tolerations:
  - key: node-role.kubernetes.io/infra
    effect: NoSchedule
    operator: Exists

The infra-node toleration shown previously is set on pods by default, even if you do not specify any tolerations in the configuration. Customizing the tolerations in the configuration replaces this default behavior.

To define a list of tolerations for the Red Hat Advanced Cluster Management for Kubernetes hub cluster, see tolerations in the Red Hat Advanced Cluster Management documentation.

1.5.3.9. ManagedServiceAccount add-on

The ManagedServiceAccount add-on allows you to create or delete a service account on a managed cluster. To install with this add-on enabled, include the following in the MultiClusterEngine specification in spec.overrides:

apiVersion: multicluster.openshift.io/v1
kind: MultiClusterEngine
metadata:
  name: multiclusterengine
spec:
  overrides:
    components:
    - name: managedserviceaccount
      enabled: true

The ManagedServiceAccount add-on can be enabled after creating MultiClusterEngine by editing the resource on the command line and setting the managedserviceaccount component to enabled: true. Alternatively, you can run the following command and replace <multiclusterengine-name> with the name of your MultiClusterEngine resource.

oc patch MultiClusterEngine <multiclusterengine-name> --type=json -p='[{"op": "add", "path": "/spec/overrides/components/-","value":{"name":"managedserviceaccount","enabled":true}}]'

1.5.4. Uninstalling

When you uninstall multicluster engine for Kubernetes operator, there are two different levels of the process: a custom resource removal and a complete operator uninstall. It might take up to five minutes to complete the uninstall process.

  • The custom resource removal is the most basic type of uninstall that removes the custom resource of the MultiClusterEngine instance but leaves other required operator resources. This level of uninstall is helpful if you plan to reinstall using the same settings and components.
  • The second level is a more complete uninstall that removes most operator components, excluding components such as custom resource definitions. When you continue with this step, it removes all of the components and subscriptions that were not removed with the custom resource removal. After this uninstall, you must reinstall the operator before reinstalling the custom resource.

1.5.4.1. Prerequisite: Detach enabled services

Before you uninstall the multicluster engine for Kubernetes operator, you must detach all of the clusters that are managed by that engine. To avoid errors, detach all clusters that are still managed by the engine, then try to uninstall again.

  • If you have managed clusters attached, you might see the following message.

    Cannot delete MultiClusterEngine resource because ManagedCluster resource(s) exist

    For more information about detaching clusters, see the Removing a cluster from management section by selecting the information for your provider in Creating clusters.

1.5.4.2. Removing resources by using commands

  1. If you have not already done so, ensure that your OpenShift Container Platform CLI is configured to run oc commands. See Getting started with the OpenShift CLI in the OpenShift Container Platform documentation for more information about how to configure the oc commands.
  2. Change to your project namespace by entering the following command. Replace namespace with the name of your project namespace:

    oc project <namespace>
  3. Enter the following command to remove the MultiClusterEngine custom resource:

    oc delete multiclusterengine --all

    You can view the progress by entering the following command:

    oc get multiclusterengine -o yaml
  4. Enter the following commands to delete the multicluster-engine ClusterServiceVersion in the namespace it is installed in:
    ❯ oc get csv
    NAME                         DISPLAY                              VERSION   REPLACES   PHASE
    multicluster-engine.v2.0.0   multicluster engine for Kubernetes   2.0.0                Succeeded

    ❯ oc delete clusterserviceversion multicluster-engine.v2.0.0
    ❯ oc delete sub multicluster-engine

    The CSV version shown here may be different.

1.5.4.3. Deleting the components by using the console

When you use the Red Hat OpenShift Container Platform console to uninstall, you remove the operator. Complete the following steps to uninstall by using the console:

  1. In the OpenShift Container Platform console navigation, select Operators > Installed Operators > multicluster engine for Kubernetes.
  2. Remove the MultiClusterEngine custom resource.

    1. Select the tab for Multiclusterengine.
    2. Select the Options menu for the MultiClusterEngine custom resource.
    3. Select Delete MultiClusterEngine.
  3. Run the clean-up script according to the procedure in the following section.

    Tip: If you plan to reinstall the same multicluster engine for Kubernetes operator version, you can skip the rest of the steps in this procedure and reinstall the custom resource.

  4. Navigate to Installed Operators.
  5. Remove the multicluster engine for Kubernetes operator by selecting the Options menu and selecting Uninstall operator.

1.5.4.4. Troubleshooting the uninstall

If the multicluster engine custom resource is not being removed, remove any potential remaining artifacts by running the clean-up script.

  1. Copy the following script into a file:

    #!/bin/bash
    oc delete apiservice v1.admission.cluster.open-cluster-management.io v1.admission.work.open-cluster-management.io
    oc delete validatingwebhookconfiguration multiclusterengines.multicluster.openshift.io
    oc delete mce --all
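
  2. Run the script. The file name in the following command is a placeholder; use the name that you saved the script with:

    bash <script_file_name>.sh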

See Disconnected installation mirroring for more information.

1.6. Managing credentials

A credential is required to create and manage a Red Hat OpenShift Container Platform cluster on a cloud service provider with multicluster engine operator. The credential stores the access information for a cloud provider. Each provider account requires its own credential, as does each domain on a single provider.

You can create and manage your cluster credentials. Credentials are stored as Kubernetes secrets. Secrets are copied to the namespace of a managed cluster so that the controllers for the managed cluster can access the secrets. When a credential is updated, the copies of the secret are automatically updated in the managed cluster namespaces.
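
For example, after a managed cluster is created, you can list the secrets in that cluster's namespace to see the copied credential secret; the namespace name is a placeholder:

# The copied credential secret appears in the managed cluster namespace
oc get secrets -n <managed_cluster_namespace>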

Note: Changes to the pull secret, SSH keys, or base domain of the cloud provider credentials are not reflected for existing managed clusters, as they have already been provisioned using the original credentials.

Required access: Edit

1.6.1. Creating a credential for Amazon Web Services

You need a credential to use multicluster engine operator console to deploy and manage a Red Hat OpenShift Container Platform cluster on Amazon Web Services (AWS).

Required access: Edit

Note: This procedure must be done before you can create a cluster with multicluster engine operator.

1.6.1.1. Prerequisites

You must have the following prerequisites before creating a credential:

  • A deployed multicluster engine operator hub cluster
  • Internet access for your multicluster engine operator hub cluster so it can create the Kubernetes cluster on Amazon Web Services (AWS)
  • AWS login credentials, which include access key ID and secret access key. See Understanding and getting your security credentials.
  • Account permissions that allow installing clusters on AWS. See Configuring an AWS account for instructions on how to configure an AWS account.

1.6.1.2. Managing a credential by using the console

To create a credential from the multicluster engine operator console, complete the steps in the console.

Start at the navigation menu. Click Credentials to choose from existing credential options. Tip: Create a namespace specifically to host your credentials, both for convenience and added security.

You can optionally add a Base DNS domain for your credential. If you add the base DNS domain to the credential, it is automatically populated in the correct field when you create a cluster with this credential. See the following steps:

  1. Add your AWS access key ID for your AWS account. See Log in to AWS to find your ID.
  2. Provide the contents for your new AWS Secret Access Key.
  3. If you want to enable a proxy, enter the proxy information (see the sketch after these steps for how these values typically appear in install-config.yaml):

    • HTTP proxy URL: The URL that should be used as a proxy for HTTP traffic.
    • HTTPS proxy URL: The secure proxy URL that should be used for HTTPS traffic. If no value is provided, the same value as the HTTP Proxy URL is used for both HTTP and HTTPS.
    • No proxy domains: A comma-separated list of domains that should bypass the proxy. Begin a domain name with a period . to include all of the subdomains that are in that domain. Add an asterisk * to bypass the proxy for all destinations.
    • Additional trust bundle: One or more additional CA certificates that are required for proxying HTTPS connections.
  4. Enter your Red Hat OpenShift pull secret. See Download your Red Hat OpenShift pull secret to download your pull secret.
  5. Add your SSH private key and SSH public key, which allows you to connect to the cluster. You can use an existing key pair, or create a new one with a key generation program.
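
The proxy values from the credential are typically carried into the proxy settings of the install-config.yaml that is used to provision the cluster. Assuming that mapping, the result resembles the following sketch; the host names, domain list, and certificate content are placeholders:

proxy:
  httpProxy: http://<proxy_host>:3128
  httpsProxy: https://<proxy_host>:3128
  noProxy: .example.com,10.0.0.0/16
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  <certificate_content>
  -----END CERTIFICATE-----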

You can create a cluster that uses this credential by completing the steps in Creating a cluster on Amazon Web Services or Creating a cluster on Amazon Web Services GovCloud.

You can edit your credential in the console. If the cluster was created by using this provider connection, then the <cluster-name>-aws-creds secret from the <cluster-namespace> namespace is updated with the new credentials.

Note: Updating credentials does not work for cluster pool claimed clusters.

When you are no longer managing a cluster that is using a credential, delete the credential to protect the information in the credential. Select Actions to delete in bulk, or select the options menu beside the credential that you want to delete.

1.6.1.2.1. Creating an S3 secret

To create an Amazon Simple Storage Service (S3) secret, complete the following task from the console:

  1. Click Add credential > AWS > S3 Bucket. If you click For Hosted Control Plane, the name and namespace are provided.
  2. Enter information for the following fields that are provided:

    • bucket name: Add the name of the S3 bucket.
    • aws_access_key_id: Add your AWS access key ID for your AWS account. Log in to AWS to find your ID.
    • aws_secret_access_key: Provide the contents for your new AWS Secret Access Key.
    • Region: Enter your AWS region.
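
If you prefer the command line, you can create a generic secret that carries the same fields. This is only a sketch, because the exact key names that the console sets might differ; the secret name, namespace, and values are placeholders:

# Create an S3 credential secret with the same fields that the console collects
oc create secret generic <s3_secret_name> -n <namespace> \
  --from-literal=bucket=<bucket_name> \
  --from-literal=aws_access_key_id=<aws_access_key_id> \
  --from-literal=aws_secret_access_key=<aws_secret_access_key> \
  --from-literal=region=<aws_region>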

1.6.1.3. Creating an opaque secret by using the API

To create an opaque secret for Amazon Web Services by using the API, apply YAML content in the YAML preview window that is similar to the following example:

kind: Secret
metadata:
    name: <managed-cluster-name>-aws-creds
    namespace: <managed-cluster-namespace>
type: Opaque
data:
    aws_access_key_id: $(echo -n "${AWS_KEY}" | base64 -w0)
    aws_secret_access_key: $(echo -n "${AWS_SECRET}" | base64 -w0)
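
Because the data values use shell command substitution, one way to apply this manifest is through a heredoc so that the substitutions expand. The following sketch adds the apiVersion field and uses illustrative variable names and placeholders:

# Set these to your AWS credentials before running the command
AWS_KEY=<aws_access_key_id>
AWS_SECRET=<aws_secret_access_key>

oc apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: <managed-cluster-name>-aws-creds
  namespace: <managed-cluster-namespace>
type: Opaque
data:
  aws_access_key_id: $(echo -n "${AWS_KEY}" | base64 -w0)
  aws_secret_access_key: $(echo -n "${AWS_SECRET}" | base64 -w0)
EOF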

Notes:

  • Opaque secrets are not visible in the console.
  • Opaque secrets are created in the managed cluster namespace you chose. Hive uses the opaque secret to provision the cluster. When provisioning the cluster by using the Red Hat Advanced Cluster Management console, the credentials you previously created are copied to the managed cluster namespace as the opaque secret.
  • Add labels to your credentials to view your secret in the console. For example, the following oc label secret commands append the AWS S3 bucket secret with the type=awss3 and credentials labels:
oc label secret hypershift-operator-oidc-provider-s3-credentials -n local-cluster "cluster.open-cluster-management.io/type=awss3"
oc label secret hypershift-operator-oidc-provider-s3-credentials -n local-cluster "cluster.open-cluster-management.io/credentials=credentials="

1.6.1.4. Additional resources

1.6.2. Creating a credential for Microsoft Azure

You need a credential to use multicluster engine operator console to create and manage a Red Hat OpenShift Container Platform cluster on Microsoft Azure or on Microsoft Azure Government.

Required access: Edit

Note: This procedure is a prerequisite for creating a cluster with multicluster engine operator.

1.6.2.1. Prerequisites

You must have the following prerequisites before creating a credential:

  • A deployed multicluster engine operator hub cluster.
  • Internet access for your multicluster engine operator hub cluster so that it can create the Kubernetes cluster on Azure.
  • Azure login credentials, which include your Base Domain Resource Group and Azure Service Principal JSON. See Microsoft Azure portal to get your login credentials.
  • Account permissions that allow installing clusters on Azure. See How to configure Cloud Services and Configuring an Azure account for more information.

1.6.2.2. Managing a credential by using the console

To create a credential from the multicluster engine operator console, complete the steps in the console. Start at the navigation menu. Click Credentials to choose from existing credential options. Tip: Create a namespace specifically to host your credentials, both for convenience and added security.

  1. Optional: Add a Base DNS domain for your credential. If you add the base DNS domain to the credential, it is automatically populated in the correct field when you create a cluster with this credential.
  2. Select whether the environment for your cluster is AzurePublicCloud or AzureUSGovernmentCloud. The settings are different for the Azure Government environment, so ensure that this is set correctly.
  3. Add your Base domain resource group name for your Azure account. This entry is the resource name that you created with your Azure account. You can find your Base Domain Resource Group Name by selecting Home > DNS Zones in the Azure interface. See Create an Azure service principal with the Azure CLI to find your base domain resource group name.
  4. Provide the contents for your Client ID. This value is generated as the appId property when you create a service principal with the following command:

    az ad sp create-for-rbac --role Contributor --name <service_principal> --scopes <subscription_path>

    Replace service_principal with the name of your service principal.

  5. Add your Client Secret. This value is generated as the password property when you create a service principal with the following command:

    az ad sp create-for-rbac --role Contributor --name <service_principal> --scopes <subscription_path>

    Replace service_principal with the name of your service principal.

  6. Add your Subscription ID. This value is the id property in the output of the following command:

    az account show
  7. Add your Tenant ID. This value is the tenantId property in the output of the following command:

    az account show
  8. If you want to enable a proxy, enter the proxy information:

    • HTTP proxy URL: The URL that should be used as a proxy for HTTP traffic.
    • HTTPS proxy URL: The secure proxy URL that should be used for HTTPS traffic. If no value is provided, the same value as the HTTP Proxy URL is used for both HTTP and HTTPS.
    • No proxy domains: A comma-separated list of domains that should bypass the proxy. Begin a domain name with a period . to include all of the subdomains that are in that domain. Add an asterisk * to bypass the proxy for all destinations.
    • Additional trust bundle: One or more additional CA certificates that are required for proxying HTTPS connections.
  9. Enter your Red Hat OpenShift pull secret. See Download your Red Hat OpenShift pull secret to download your pull secret.
  10. Add your SSH private key and SSH public key to use to connect to the cluster. You can use an existing key pair, or create a new pair using a key generation program.

You can create a cluster that uses this credential by completing the steps in Creating a cluster on Microsoft Azure.

You can edit your credential in the console.

When you are no longer managing a cluster that is using a credential, delete the credential to protect the information in the credential. Select Actions to delete in bulk, or select the options menu beside the credential that you want to delete.

1.6.2.3. Creating an opaque secret by using the API

To create an opaque secret for Microsoft Azure by using the API instead of the console, apply YAML content in the YAML preview window that is similar to the following example:

kind: Secret
metadata:
    name: <managed-cluster-name>-azure-creds
    namespace: <managed-cluster-namespace>
type: Opaque
data:
    baseDomainResourceGroupName: $(echo -n "${azure_resource_group_name}" | base64 -w0)
    osServicePrincipal.json: $(base64 -w0 "${AZURE_CRED_JSON}")

Notes:

  • Opaque secrets are not visible in the console.
  • Opaque secrets are created in the managed cluster namespace you chose. Hive uses the opaque secret to provision the cluster. When provisioning the cluster by using the Red Hat Advanced Cluster Management console, the credentials you previously created are copied to the managed cluster namespace as the opaque secret.

1.6.2.4. Additional resources

1.6.3. Creating a credential for Google Cloud Platform

You need a credential to use multicluster engine operator console to create and manage a Red Hat OpenShift Container Platform cluster on Google Cloud Platform (GCP).

Required access: Edit

Note: This procedure is a prerequisite for creating a cluster with multicluster engine operator.

1.6.3.1. Prerequisites

You must have the following prerequisites before creating a credential:

  • A deployed multicluster engine operator hub cluster
  • Internet access for your multicluster engine operator hub cluster so it can create the Kubernetes cluster on GCP
  • GCP login credentials, which include user Google Cloud Platform Project ID and Google Cloud Platform service account JSON key. See Creating and managing projects.
  • Account permissions that allow installing clusters on GCP. See Configuring a GCP project for instructions on how to configure an account.

1.6.3.2. Managing a credential by using the console

To create a credential from the multicluster engine operator console, complete the steps in the console.

Start at the navigation menu. Click Credentials to choose from existing credential options. Tip: Create a namespace specifically to host your credentials, for both convenience and security.

You can optionally add a Base DNS domain for your credential. If you add the base DNS domain to the credential, it is automatically populated in the correct field when you create a cluster with this credential. See the following steps:

  1. Add your Google Cloud Platform project ID for your GCP account. See Log in to GCP to retrieve your settings.
  2. Add your Google Cloud Platform service account JSON key. See the Create service accounts documentation to create your service account JSON key. Follow the steps for the GCP console.
  3. Provide the contents for your new Google Cloud Platform service account JSON key.
  4. If you want to enable a proxy, enter the proxy information:

    • HTTP proxy URL: The URL that should be used as a proxy for HTTP traffic.
    • HTTPS proxy URL: The secure proxy URL that should be used for HTTPS traffic. If no value is provided, the same value as the HTTP Proxy URL is used for both HTTP and HTTPS.
    • No proxy domains: A comma-separated list of domains that should bypass the proxy. Begin a domain name with a period . to include all of the subdomains that are in that domain. Add an asterisk * to bypass the proxy for all destinations.
    • Additional trust bundle: One or more additional CA certificates that are required for proxying HTTPS connections.
  5. Enter your Red Hat OpenShift pull secret. See Download your Red Hat OpenShift pull secret to download your pull secret.
  6. Add your SSH private key and SSH public key so you can access the cluster. You can use an existing key pair, or create a new pair using a key generation program.

You can use this connection when you create a cluster by completing the steps in Creating a cluster on Google Cloud Platform.

You can edit your credential in the console.

When you are no longer managing a cluster that is using a credential, delete the credential to protect the information in the credential. Select Actions to delete in bulk, or select the options menu beside the credential that you want to delete.

1.6.3.3. Creating an opaque secret by using the API

To create an opaque secret for Google Cloud Platform by using the API instead of the console, apply YAML content in the YAML preview window that is similar to the following example:

kind: Secret
metadata:
    name: <managed-cluster-name>-gcp-creds
    namespace: <managed-cluster-namespace>
type: Opaque
data:
    osServiceAccount.json: $(base64 -w0 "${GCP_CRED_JSON}")

Notes:

  • Opaque secrets are not visible in the console.
  • Opaque secrets are created in the managed cluster namespace you chose. Hive uses the opaque secret to provision the cluster. When provisioning the cluster by using the Red Hat Advanced Cluster Management console, the credentials you previously created are copied to the managed cluster namespace as the opaque secret.

1.6.3.4. Additional resources

Return to Creating a credential for Google Cloud Platform.

1.6.4. Creating a credential for VMware vSphere

You need a credential to use multicluster engine operator console to deploy and manage a Red Hat OpenShift Container Platform cluster on VMware vSphere.

Required access: Edit

1.6.4.1. Prerequisites

You must have the following prerequisites before you create a credential:

  • You must create a credential for VMware vSphere before you can create a cluster with multicluster engine operator.
  • A deployed hub cluster on a supported OpenShift Container Platform version.
  • Internet access for your hub cluster so it can create the Kubernetes cluster on VMware vSphere.
  • VMware vSphere login credentials and vCenter requirements configured for OpenShift Container Platform when using installer-provisioned infrastructure. See Installing a cluster on vSphere with customizations. These credentials include the following information:

    • vCenter account privileges.
    • Cluster resources.
    • DHCP available.
    • ESXi hosts have time synchronized (for example, NTP).

1.6.4.2. Managing a credential by using the console

To create a credential from the multicluster engine operator console, complete the steps in the console.

Start at the navigation menu. Click Credentials to choose from existing credential options. Tip: Create a namespace specifically to host your credentials, both for convenience and added security.

You can optionally add a Base DNS domain for your credential. If you add the base DNS domain to the credential, it is automatically populated in the correct field when you create a cluster with this credential. See the following steps:

  1. Add your VMware vCenter server fully-qualified host name or IP address. The value must be defined in the vCenter server root CA certificate. If possible, use the fully-qualified host name.
  2. Add your VMware vCenter username.
  3. Add your VMware vCenter password.
  4. Add your VMware vCenter root CA certificate.

    1. Download the download.zip package that contains the certificate from your VMware vCenter server at: https://<vCenter_address>/certs/download.zip. Replace vCenter_address with the address of your vCenter server.
    2. Unpackage the download.zip.
    3. Use the certificates from the certs/<platform> directory that have a .0 extension.

      Tip: You can use the ls certs/<platform> command to list all of the available certificates for your platform.

      Replace <platform> with the abbreviation for your platform: lin, mac, or win.

      For example: certs/lin/3a343545.0

      Best practice: Link together multiple certificates with a .0 extension by running the cat certs/lin/*.0 > ca.crt command.

    4. Add your VMware vSphere cluster name.
    5. Add your VMware vSphere datacenter.
    6. Add your VMware vSphere default datastore.
    7. Add your VMware vSphere disk type.
    8. Add your VMware vSphere folder.
    9. Add your VMware vSphere resource pool.
  5. For disconnected installations only: Complete the fields in the Configuration for disconnected installation subsection with the required information:

    • Cluster OS image: This value contains the URL to the image to use for Red Hat OpenShift Container Platform cluster machines.
    • Image content source: This value contains the disconnected registry path. The path contains the hostname, port, and repository path to all of the installation images for disconnected installations. Example: repository.com:5000/openshift/ocp-release.

      The path creates an image content source policy mapping in the install-config.yaml to the Red Hat OpenShift Container Platform release images. As an example, repository.com:5000 produces this imageContentSource content:

      - mirrors:
        - registry.example.com:5000/ocp4
        source: quay.io/openshift-release-dev/ocp-release-nightly
      - mirrors:
        - registry.example.com:5000/ocp4
        source: quay.io/openshift-release-dev/ocp-release
      - mirrors:
        - registry.example.com:5000/ocp4
        source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
    • Additional trust bundle: This value provides the contents of the certificate file that is required to access the mirror registry.

      Note: If you are deploying managed clusters from a hub that is in a disconnected environment, and want them to be automatically imported post install, add an Image Content Source Policy to the install-config.yaml file by using the YAML editor. A sample entry is shown in the following example:

      - mirrors:
        - registry.example.com:5000/rhacm2
        source: registry.redhat.io/rhacm2
  6. If you want to enable a proxy, enter the proxy information:

    • HTTP proxy URL: The URL that should be used as a proxy for HTTP traffic.
    • HTTPS proxy URL: The secure proxy URL that should be used for HTTPS traffic. If no value is provided, the same value as the HTTP Proxy URL is used for both HTTP and HTTPS.
    • No proxy domains: A comma-separated list of domains that should bypass the proxy. Begin a domain name with a period . to include all of the subdomains that are in that domain. Add an asterisk * to bypass the proxy for all destinations.
    • Additional trust bundle: One or more additional CA certificates that are required for proxying HTTPS connections.
  7. Enter your Red Hat OpenShift pull secret. See Download your Red Hat OpenShift pull secret to download your pull secret.
  8. Add your SSH private key and SSH public key, which allows you to connect to the cluster.

    You can use an existing key pair, or create a new one with a key generation program.

You can create a cluster that uses this credential by completing the steps in Creating a cluster on VMware vSphere.

You can edit your credential in the console.

When you are no longer managing a cluster that is using a credential, delete the credential to protect the information in the credential. Select Actions to delete in bulk, or select the options menu beside the credential that you want to delete.

1.6.4.3. Creating an opaque secret by using the API

To create an opaque secret for VMware vSphere by using the API instead of the console, apply YAML content in the YAML preview window that is similar to the following example:

kind: Secret
metadata:
    name: <managed-cluster-name>-vsphere-creds
    namespace: <managed-cluster-namespace>
type: Opaque
data:
    username: $(echo -n "${VMW_USERNAME}" | base64 -w0)
    password.json: $(base64 -w0 "${VMW_PASSWORD}")

Notes:

  • Opaque secrets are not visible in the console.
  • Opaque secrets are created in the managed cluster namespace you chose. Hive uses the opaque secret to provision the cluster. When provisioning the cluster by using the Red Hat Advanced Cluster Management console, the credentials you previously created are copied to the managed cluster namespace as the opaque secret.

1.6.4.4. Additional resources

1.6.5. Creating a credential for Red Hat OpenStack

You need a credential to use multicluster engine operator console to deploy and manage a supported Red Hat OpenShift Container Platform cluster on Red Hat OpenStack Platform.

Notes: You must create a credential for Red Hat OpenStack Platform before you can create a cluster with multicluster engine operator.

1.6.5.1. Prerequisites

You must have the following prerequisites before you create a credential:

  • A deployed hub cluster on a supported OpenShift Container Platform version.
  • Internet access for your hub cluster so it can create the Kubernetes cluster on Red Hat OpenStack Platform.
  • Red Hat OpenStack Platform login credentials and Red Hat OpenStack Platform requirements configured for OpenShift Container Platform when using installer-provisioned infrastructure. See Installing a cluster on OpenStack with customizations.
  • Download or create a clouds.yaml file for accessing the Red Hat OpenStack Platform API. Within the clouds.yaml file:

    • Determine the cloud auth section name to use.
    • Add a line for the password, immediately following the username line, as shown in the sketch after this list.
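
The following is a minimal clouds.yaml sketch with the password line added directly after the username; the cloud section name, URL, and credential values are placeholders:

clouds:
  openstack:
    auth:
      auth_url: https://<openstack_api_host>:13000/v3
      username: "<username>"
      password: "<password>"
      project_name: "<project_name>"
      user_domain_name: "Default"
      project_domain_name: "Default"
    region_name: "<region>"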

1.6.5.2. Managing a credential by using the console

To create a credential from the multicluster engine operator console, complete the steps in the console.

Start at the navigation menu. Click Credentials to choose from existing credential options. To enhance security and convenience, you can create a namespace specifically to host your credentials.

  1. Optional: You can add a Base DNS domain for your credential. If you add the base DNS domain, it is automatically populated in the correct field when you create a cluster with this credential.
  2. Add your Red Hat OpenStack Platform clouds.yaml file contents. The contents of the clouds.yaml file, including the password, provide the required information for connecting to the Red Hat OpenStack Platform server. The file contents must include the password, which you add to a new line immediately after the username.
  3. Add your Red Hat OpenStack Platform cloud name. This entry is the name specified in the cloud section of the clouds.yaml to use for establishing communication to the Red Hat OpenStack Platform server.
  4. Optional: For configurations that use an internal certificate authority, enter your certificate in the Internal CA certificate field to automatically update your clouds.yaml with the certificate information.
  5. For disconnected installations only: Complete the fields in the Configuration for disconnected installation subsection with the required information:

    • Cluster OS image: This value contains the URL to the image to use for Red Hat OpenShift Container Platform cluster machines.
    • Image content sources: This value contains the disconnected registry path. The path contains the hostname, port, and repository path to all of the installation images for disconnected installations. Example: repository.com:5000/openshift/ocp-release.

      The path creates an image content source policy mapping in the install-config.yaml to the Red Hat OpenShift Container Platform release images. As an example, repository.com:5000 produces this imageContentSource content:

      - mirrors:
        - registry.example.com:5000/ocp4
        source: quay.io/openshift-release-dev/ocp-release-nightly
      - mirrors:
        - registry.example.com:5000/ocp4
        source: quay.io/openshift-release-dev/ocp-release
      - mirrors:
        - registry.example.com:5000/ocp4
        source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
    • Additional trust bundle: This value provides the contents of the certificate file that is required to access the mirror registry.

      Note: If you are deploying managed clusters from a hub that is in a disconnected environment, and want them to be automatically imported post install, add an Image Content Source Policy to the install-config.yaml file by using the YAML editor. A sample entry is shown in the following example:

      - mirrors:
        - registry.example.com:5000/rhacm2
        source: registry.redhat.io/rhacm2
  6. If you want to enable a proxy, enter the proxy information:

    • HTTP proxy URL: The URL that should be used as a proxy for HTTP traffic.
    • HTTPS proxy URL: The secure proxy URL that should be used for HTTPS traffic. If no value is provided, the same value as the HTTP Proxy URL is used for both HTTP and HTTPS.
    • No proxy domains: A comma-separated list of domains that should bypass the proxy. Begin a domain name with a period . to include all of the subdomains that are in that domain. Add an asterisk * to bypass the proxy for all destinations.
    • Additional trust bundle: One or more additional CA certificates that are required for proxying HTTPS connections.
  7. Enter your Red Hat OpenShift pull secret. See Download your Red Hat OpenShift pull secret to download your pull secret.
  8. Add your SSH Private Key and SSH Public Key, which allows you to connect to the cluster. You can use an existing key pair, or create a new one with a key generation program.
  9. Click Create.
  10. Review the new credential information, then click Add. When you add the credential, it is added to the list of credentials.

You can create a cluster that uses this credential by completing the steps in Creating a cluster on Red Hat OpenStack Platform.

You can edit your credential in the console.

When you are no longer managing a cluster that is using a credential, delete the credential to protect the information in the credential. Select Actions to delete in bulk, or select the options menu beside the credential that you want to delete.

1.6.5.3. Creating an opaque secret by using the API

To create an opaque secret for Red Hat OpenStack Platform by using the API instead of the console, apply YAML content in the YAML preview window that is similar to the following example:

kind: Secret
metadata:
    name: <managed-cluster-name>-osp-creds
    namespace: <managed-cluster-namespace>
type: Opaque
data:
    clouds.yaml: $(base64 -w0 "${OSP_CRED_YAML}")
    cloud: $(echo -n "openstack" | base64 -w0)

Notes:

  • Opaque secrets are not visible in the console.
  • Opaque secrets are created in the managed cluster namespace you chose. Hive uses the opaque secret to provision the cluster. When provisioning the cluster by using the Red Hat Advanced Cluster Management console, the credentials you previously created are copied to the managed cluster namespace as the opaque secret.

1.6.5.4. Additional resources

1.6.6. Creating a credential for Red Hat OpenShift Cluster Manager

Add an OpenShift Cluster Manager credential so that you can discover clusters.

Required access: Administrator

1.6.6.1. Prerequisite

You need an API token for the OpenShift Cluster Manager account, or you can use a separate Service Account.

1.6.6.2. Adding a credential by using the console

You need to add your credential to discover clusters. To create a credential from the multicluster engine operator console, complete the steps in the console:

  1. Log in to your cluster.
  2. Click Credentials > Credential type to choose from existing credential options.
  3. Create a namespace specifically to host your credentials, both for convenience and added security.
  4. Click Add credential.
  5. Select the Red Hat OpenShift Cluster Manager option.
  6. Select one of the authentication methods.

Notes:

  • When you are no longer managing a cluster that is using a credential, delete the credential to protect the information in the credential.
  • If your credential is removed, or your OpenShift Cluster Manager API token expires or is revoked, then the associated discovered clusters are removed.

1.6.7. Creating a credential for Ansible Automation Platform

You need a credential to use multicluster engine operator console to deploy and manage a Red Hat OpenShift Container Platform cluster that is using Red Hat Ansible Automation Platform.

Required access: Edit

Note: This procedure must be done before you can create an Automation template to enable automation on a cluster.

1.6.7.1. Prerequisites

You must have the following prerequisites before creating a credential:

  • A deployed multicluster engine operator hub cluster
  • Internet access for your multicluster engine operator hub cluster
  • Ansible login credentials, which include the Ansible Automation Platform hostname and OAuth token; see Credentials for Ansible Automation Platform.
  • Account permissions that allow you to install hub clusters and work with Ansible. Learn more about Ansible users.

1.6.7.2. Managing a credential by using the console

To create a credential from the multicluster engine operator console, complete the steps in the console.

Start at the navigation menu. Click Credentials to choose from existing credential options. Tip: Create a namespace specifically to host your credentials, both for convenience and added security.

The Ansible Token and host URL that you provide when you create your Ansible credential are automatically updated for the automations that use that credential when you edit the credential. The updates are copied to any automations that use that Ansible credential, including those related to cluster lifecycle, governance, and application management automations. This ensures that the automations continue to run after the credential is updated.

You can edit your credential in the console. When you update an Ansible credential, the automations that use that credential are automatically updated.

You can create an Ansible Job that uses this credential by completing the steps in Configuring Ansible Automation Platform tasks to run on managed clusters.

When you are no longer managing a cluster that is using a credential, delete the credential to protect the information in the credential. Select Actions to delete in bulk, or select the options menu beside the credential that you want to delete.

1.6.8. Creating a credential for an on-premises environment

You need a credential to use the console to deploy and manage a Red Hat OpenShift Container Platform cluster in an on-premises environment. The credential specifies the connections that are used for the cluster.

Required access: Edit

1.6.8.1. Prerequisites

You need the following prerequisites before creating a credential:

  • A hub cluster that is deployed.
  • Internet access for your hub cluster so it can create the Kubernetes cluster on your infrastructure environment.
  • For a disconnected environment, you must have a configured mirror registry where you can copy the release images for your cluster creation. See Disconnected installation mirroring in the OpenShift Container Platform documentation for more information.
  • Account permissions that support installing clusters on the on-premises environment.

1.6.8.2. Managing a credential by using the console

To create a credential from the console, complete the steps in the console.

Start at the navigation menu. Click Credentials to choose from existing credential options. Tip: Create a namespace specifically to host your credentials, both for convenience and added security.

  1. Select Host inventory for your credential type.
  2. You can optionally add a Base DNS domain for your credential. If you add the base DNS domain to the credential, it is automatically populated in the correct field when you create a cluster with this credential. If you do not add the DNS domain, you can add it when you create your cluster.
  3. Enter your Red Hat OpenShift pull secret. This pull secret is automatically entered when you create a cluster and specify this credential. You can download your pull secret from Pull secret. See Using image pull secrets for more information about pull secrets.
  4. Enter your SSH public key. This SSH public key is also automatically entered when you create a cluster and specify this credential.
  5. Select Add to create your credential.

You can create a cluster that uses this credential by completing the steps in Creating a cluster in an on-premises environment.

When you are no longer managing a cluster that is using a credential, delete the credential to protect the information in the credential. Select Actions to delete in bulk, or select the options menu beside the credential that you want to delete.

1.7. Cluster lifecycle introduction

The multicluster engine operator is the cluster lifecycle operator that provides cluster management capabilities for OpenShift Container Platform and Red Hat Advanced Cluster Management hub clusters. The multicluster engine operator is a software operator that enhances cluster fleet management and supports OpenShift Container Platform cluster lifecycle management across clouds and data centers. You can use multicluster engine operator with or without Red Hat Advanced Cluster Management. Red Hat Advanced Cluster Management also installs multicluster engine operator automatically and offers further multicluster capabilities.

See the following documentation:

1.7.1. Cluster lifecycle architecture

Cluster lifecycle requires two types of clusters: hub clusters and managed clusters.

The hub cluster is the OpenShift Container Platform (or Red Hat Advanced Cluster Management) main cluster with the multicluster engine operator automatically installed. You can create, manage, and monitor other Kubernetes clusters with the hub cluster. You can create clusters by using the hub cluster, and you can also import existing clusters to be managed by the hub cluster.

When you create a managed cluster, the cluster is created using the Red Hat OpenShift Container Platform cluster installer with the Hive resource. You can find more information about the process of installing clusters with the OpenShift Container Platform installer by reading Installing and configuring OpenShift Container Platform clusters in the OpenShift Container Platform documentation.

The following diagram shows the components that are installed with the multicluster engine for Kubernetes operator for cluster management:

Cluster lifecycle architecture diagram

The components of the cluster lifecycle management architecture include the following items:

1.7.1.1. Hub cluster

  • The managed cluster import controller deploys the klusterlet operator to the managed clusters.
  • The Hive controller provisions the clusters that you create by using the multicluster engine for Kubernetes operator. The Hive Controller also destroys managed clusters that were created by the multicluster engine for Kubernetes operator.
  • The cluster curator controller creates the Ansible jobs as the pre-hook or post-hook to configure the cluster infrastructure environment when creating or upgrading managed clusters.
  • When a managed cluster add-on is enabled on the hub cluster, its add-on hub controller is deployed on the hub cluster. The add-on hub controller deploys the add-on agent to the managed clusters.

1.7.1.2. Managed cluster

  • The klusterlet operator deploys the registration and work controllers on the managed cluster.
  • The Registration Agent registers the managed cluster and the managed cluster add-ons with the hub cluster. The Registration Agent also maintains the status of the managed cluster and the managed cluster add-ons. The following permissions are automatically created within the Clusterrole to allow the managed cluster to access the hub cluster:

    • Allows the agent to get or update its owned cluster that the hub cluster manages
    • Allows the agent to update the status of its owned cluster that the hub cluster manages
    • Allows the agent to rotate its certificate
    • Allows the agent to get or update the coordination.k8s.io lease
    • Allows the agent to get its managed cluster add-ons
    • Allows the agent to update the status of its managed cluster add-ons
  • The work agent applies the Add-on Agent to the managed cluster. The permission to allow the managed cluster to access the hub cluster is automatically created within the Clusterrole and allows the agent to send events to the hub cluster.
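
If you want to verify these components on a managed cluster, you can list the klusterlet agent deployments. The following command is a sketch that assumes the agents run in the default open-cluster-management-agent namespace:

oc get deployments -n open-cluster-management-agent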

To continue adding and managing clusters, see the Cluster lifecycle introduction.

1.7.2. Release images

When you build your cluster, use the version of Red Hat OpenShift Container Platform that the release image specifies. By default, OpenShift Container Platform uses the clusterImageSets resources to get the list of supported release images.

Continue reading to learn more about release images:

1.7.2.1. Specifying release images

When you create a cluster on a provider by using multicluster engine for Kubernetes operator, specify a release image to use for your new cluster. To specify a release image, see the following topics:

1.7.2.1.1. Locating ClusterImageSets

The YAML files referencing the release images are maintained in the acm-hive-openshift-releases GitHub repository. The files are used to create the list of the available release images in the console. This includes the latest fast channel images from OpenShift Container Platform.

The console only displays the latest release images for the three latest versions of OpenShift Container Platform. For example, you might see the following release image displayed in the console options:

quay.io/openshift-release-dev/ocp-release:4.15.1-x86_64

The console displays the latest versions to help you create a cluster with the latest release images. If you need to create a cluster that is a specific version, older release image versions are also available.

Note: You can only select images with the visible: 'true' label when creating clusters in the console. An example of this label in a ClusterImageSet resource is provided in the following content. Replace 4.x.1 with the current version of the product:

apiVersion: hive.openshift.io/v1
kind: ClusterImageSet
metadata:
  labels:
    channel: fast
    visible: 'true'
  name: img4.x.1-x86-64-appsub
spec:
  releaseImage: quay.io/openshift-release-dev/ocp-release:4.x.1-x86_64

Additional release images are stored, but are not visible in the console. To view all of the available release images, run the following command:

oc get clusterimageset
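
If you also want to see which release image each entry references, you can add custom columns to the same command. The following sketch uses the spec.releaseImage field that appears in the earlier ClusterImageSet example:

oc get clusterimageset -o custom-columns=NAME:.metadata.name,RELEASE:.spec.releaseImage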

The repository has the clusterImageSets directory, which is the directory that you use when working with the release images. The clusterImageSets directory has the following directories:

  • Fast: Contains files that reference the latest versions of the release images for each supported OpenShift Container Platform version. The release images in this folder are tested, verified, and supported.
  • Releases: Contains files that reference all of the release images for each OpenShift Container Platform version (stable, fast, and candidate channels).

    Note: These releases have not all been tested and determined to be stable.

  • Stable: Contains files that reference the latest two stable versions of the release images for each supported OpenShift Container Platform version.

    Note: By default, the current list of release images updates one time every hour. After upgrading the product, it might take up to one hour for the list to reflect the recommended release image versions for the new version of the product.

1.7.2.1.2. Configuring ClusterImageSets

You can configure your ClusterImageSets with the following options:

  • Option 1: To create a cluster in the console, specify the image reference for the specific ClusterImageSet that you want to use. Each new entry you specify persists and is available for all future cluster provisions. See the following example entry:

    quay.io/openshift-release-dev/ocp-release:4.6.8-x86_64
  • Option 2: Manually create and apply a ClusterImageSets YAML file from the acm-hive-openshift-releases GitHub repository.
  • Option 3: To enable automatic updates of ClusterImageSets from a forked GitHub repository, follow the README.md in the cluster-image-set-controller GitHub repository.

1.7.2.1.3. Creating a release image to deploy a cluster on a different architecture

You can create a cluster on an architecture that is different from the architecture of the hub cluster by manually creating a release image that has the files for both architectures.

For example, you might need to create an x86_64 cluster from a hub cluster that is running on the ppc64le, aarch64, or s390x architecture. If you create the release image with both sets of files, the cluster creation succeeds because the new release image enables the OpenShift Container Platform release registry to provide a multi-architecture image manifest.

OpenShift Container Platform supports multiple architectures by default. You can use the following clusterImageSet to provision a cluster. Replace 4.x.0 with the current supported version:

apiVersion: hive.openshift.io/v1
kind: ClusterImageSet
metadata:
  labels:
    channel: fast
    visible: 'true'
  name: img4.x.0-multi-appsub
spec:
  releaseImage: quay.io/openshift-release-dev/ocp-release:4.x.0-multi

To create the release image for OpenShift Container Platform images that do not support multiple architectures, complete steps similar to the following example for your architecture type:

  1. From the OpenShift Container Platform release registry, create a manifest list that includes x86_64, s390x, aarch64, and ppc64le release images.

    1. Pull the release images for each architecture in your environment from the Quay repository by running the following example commands. Replace 4.x.1 with the current version of the product:

      podman pull quay.io/openshift-release-dev/ocp-release:4.x.1-x86_64
      podman pull quay.io/openshift-release-dev/ocp-release:4.x.1-ppc64le
      podman pull quay.io/openshift-release-dev/ocp-release:4.x.1-s390x
      podman pull quay.io/openshift-release-dev/ocp-release:4.x.1-aarch64
    2. Log in to your private repository where you maintain your images by running the following command. Replace <private-repo> with the path to your repository:

      podman login <private-repo>
    3. Add the release image manifest to your private repository by running the following commands that apply to your environment. Replace 4.x.1 with the current version of the product. Replace <private-repo> with the path to your repository:

      podman push quay.io/openshift-release-dev/ocp-release:4.x.1-x86_64 <private-repo>/ocp-release:4.x.1-x86_64
      podman push quay.io/openshift-release-dev/ocp-release:4.x.1-ppc64le <private-repo>/ocp-release:4.x.1-ppc64le
      podman push quay.io/openshift-release-dev/ocp-release:4.x.1-s390x <private-repo>/ocp-release:4.x.1-s390x
      podman push quay.io/openshift-release-dev/ocp-release:4.x.1-aarch64 <private-repo>/ocp-release:4.x.1-aarch64
    4. Create a manifest for the new information by running the following command:

      podman manifest create mymanifest
    5. Add references to the release images for each architecture to the manifest list by running the following commands. Replace 4.x.1 with the current version of the product. Replace <private-repo> with the path to your repository:

      podman manifest add mymanifest <private-repo>/ocp-release:4.x.1-x86_64
      podman manifest add mymanifest <private-repo>/ocp-release:4.x.1-ppc64le
      podman manifest add mymanifest <private-repo>/ocp-release:4.x.1-s390x
      podman manifest add mymanifest <private-repo>/ocp-release:4.x.1-aarch64
    6. Push the manifest list to your private repository by running the following command. Replace <private-repo> with the path to your repository. Replace 4.x.1 with the current version:

      podman manifest push mymanifest docker://<private-repo>/ocp-release:4.x.1
  2. On the hub cluster, create a release image that references the manifest in your repository.

    1. Create a YAML file that contains information that is similar to the following example. Replace <private-repo> with the path to your repository. Replace 4.x.1 with the current version:

      apiVersion: hive.openshift.io/v1
      kind: ClusterImageSet
      metadata:
        labels:
          channel: fast
          visible: "true"
        name: img4.x.1-appsub
      spec:
        releaseImage: <private-repo>/ocp-release:4.x.1
    2. Run the following command on your hub cluster to apply the changes. Replace <file-name> with the name of the YAML file that you created in the previous step:

      oc apply -f <file-name>.yaml
  3. Select the new release image when you create your OpenShift Container Platform cluster.
  4. If you deploy the managed cluster by using the Red Hat Advanced Cluster Management console, specify the architecture for the managed cluster in the Architecture field during the cluster creation process.

The creation process uses the merged release images to create the cluster.

1.7.2.1.4. Additional resources

1.7.2.2. Maintaining a custom list of release images when connected

You might want to use the same release image for all of your clusters. To simplify, you can create your own custom list of release images that are available when creating a cluster. Complete the following steps to manage your available release images:

  1. Fork the acm-hive-openshift-releases GitHub repository.
  2. Add the YAML files for the images that you want available when you create a cluster. Add the images to the ./clusterImageSets/stable/ or ./clusterImageSets/fast/ directory by using the Git console or the terminal.
  3. Create a ConfigMap in the multicluster-engine namespace named cluster-image-set-git-repo. See the following example, but replace 2.x with 2.7:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-image-set-git-repo
  namespace: multicluster-engine
data:
  gitRepoUrl: <forked acm-hive-openshift-releases repository URL>
  gitRepoBranch: backplane-<2.x>
  gitRepoPath: clusterImageSets
  channel: <fast or stable>
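
If you save the example to a YAML file, you can create the ConfigMap by applying the file. Replace <file-name> with the name of the YAML file that contains the ConfigMap:

oc apply -f <file-name>.yaml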

You can retrieve the available YAML files from the main repository by merging changes from the main repository into your forked repository with the following procedure:

  1. Commit and merge your changes to your forked repository.
  2. To synchronize your list of fast release images after you clone the acm-hive-openshift-releases repository, update the value of the channel field in the cluster-image-set-git-repo ConfigMap to fast.
  3. To synchronize and display the stable release images, update the value of the channel field in the cluster-image-set-git-repo ConfigMap to stable.

After updating the ConfigMap, the list of available stable release images updates with the currently available images in about one minute.

  1. You can use the following commands to list what is available and remove the defaults. Replace <clusterImageSet_NAME> with the correct name:

    oc get clusterImageSets
    oc delete clusterImageSet <clusterImageSet_NAME>

View the list of currently available release images in the console when you are creating a cluster.

For information regarding other fields available through the ConfigMap, view the cluster-image-set-controller GitHub repository README.

1.7.2.3. Maintaining a custom list of release images while disconnected

In some cases, you need to maintain a custom list of release images when the hub cluster has no Internet connection. You can create your own custom list of release images that are available when creating a cluster. Complete the following steps to manage your available release images while disconnected:

  1. When you are on a connected system, go to the acm-hive-openshift-releases GitHub repository to access the available cluster image sets.
  2. Copy the clusterImageSets directory to a system that can access the disconnected multicluster engine operator cluster.
  3. Add the mapping between the managed cluster and the disconnected repository with your cluster image sets by completing the following steps that fit your managed cluster:

  4. Add the YAML files for the images that you want available when you create a cluster by using the console or CLI to manually add the clusterImageSet YAML content.
  5. Modify the clusterImageSet YAML files for the remaining OpenShift Container Platform release images to reference the correct offline repository where you store the images. Your updates resemble the following example, where spec.releaseImage points to your offline image registry and the release image is referenced by digest:

    apiVersion: hive.openshift.io/v1
    kind: ClusterImageSet
    metadata:
      labels:
        channel: fast
      name: img<4.x.x>-x86-64-appsub
    spec:
      releaseImage: IMAGE_REGISTRY_IPADDRESS_or__DNSNAME/REPO_PATH/ocp-release@sha256:073a4e46289be25e2a05f5264c8f1d697410db66b960c9ceeddebd1c61e58717
  6. Ensure that the images are loaded in the offline image registry that is referenced in the YAML file.
  7. Obtain the image digest by running the following command:

    oc adm release info <tagged_openshift_release_image> | grep "Pull From"

    Replace <tagged_openshift_release_image> with the tagged image for the supported OpenShift Container Platform version. See the following example output:

    Pull From: quay.io/openshift-release-dev/ocp-release@sha256:69d1292f64a2b67227c5592c1a7d499c7d00376e498634ff8e1946bc9ccdddfe

    To learn more about the image tag and digest, see Referencing images in imagestreams.

  8. Create each of the clusterImageSets by entering the following command for each YAML file:

    oc create -f <clusterImageSet_FILE>

    Replace <clusterImageSet_FILE> with the name of the cluster image set file. For example:

    oc create -f img4.11.9-x86_64.yaml

    After you run this command for each resource that you want to add, the release images are available.

  9. Alternatively, you can paste the image URL directly in the create cluster console. Adding the image URL creates new clusterImageSets if they do not exist.
  10. View the list of currently available release images in the console when you are creating a cluster.

1.7.3. Creating clusters

Learn how to create Red Hat OpenShift Container Platform clusters across cloud providers with multicluster engine operator.

multicluster engine operator uses the Hive operator that is provided with OpenShift Container Platform to provision clusters for all providers except the on-premises clusters and hosted control planes. When provisioning the on-premises clusters, multicluster engine operator uses the central infrastructure management and Assisted Installer function that are provided with OpenShift Container Platform. The hosted clusters for hosted control planes are provisioned by using the HyperShift operator.

1.7.3.1. Creating a cluster with the CLI

The multicluster engine for Kubernetes operator uses internal Hive components to create Red Hat OpenShift Container Platform clusters. See the following information to learn how to create clusters.

1.7.3.1.1. Prerequisites

Before creating a cluster, you must clone the acm-hive-openshift-releases repository that contains the cluster image sets and apply the image sets to your hub cluster. See the following steps:

  1. Run the following commands to clone the repository. Replace 2.x with 2.7:

    git clone https://github.com/stolostron/acm-hive-openshift-releases.git
    cd acm-hive-openshift-releases
    git checkout origin/backplane-<2.x>
  2. Run the following command to apply it to your hub cluster:

    find clusterImageSets/fast -type d -exec oc apply -f {} \; 2> /dev/null

Select the Red Hat OpenShift Container Platform release images when you create a cluster.

Note: If you use the Nutanix platform, be sure to use x86_64 architecture for the releaseImage in the ClusterImageSet resource and set the visible label value to 'true'. See the following example:

apiVersion: hive.openshift.io/v1
kind: ClusterImageSet
metadata:
  labels:
    channel: stable
    visible: 'true'
  name: img4.x.47-x86-64-appsub
spec:
  releaseImage: quay.io/openshift-release-dev/ocp-release:4.x.47-x86_64

1.7.3.1.2. Create a cluster with ClusterDeployment

A ClusterDeployment is a Hive custom resource that is used to control the lifecycle of a cluster.

Follow the Using Hive documentation to create the ClusterDeployment custom resource and create an individual cluster.
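
The following ClusterDeployment is a minimal sketch for an AWS cluster, based on the fields that Hive expects. The names, domain, region, and image set are placeholders that you must replace with your own values:

apiVersion: hive.openshift.io/v1
kind: ClusterDeployment
metadata:
  name: <my-aws-cluster>
  namespace: <my-aws-cluster>
spec:
  baseDomain: <example.com>
  clusterName: <my-aws-cluster>
  platform:
    aws:
      credentialsSecretRef:
        name: <my-aws-credentials-secret>
      region: us-east-1
  provisioning:
    imageSetRef:
      name: <my-clusterimageset>
    installConfigSecretRef:
      name: <my-aws-cluster-install-config>
  pullSecretRef:
    name: <my-aws-cluster-pull-secret>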

1.7.3.1.3. Create a cluster with ClusterPool

A ClusterPool is also a Hive custom resource that is used to create multiple clusters.

Follow the Cluster Pools documentation to create a cluster with the Hive ClusterPool API.
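
The following ClusterPool is a minimal sketch for an AWS pool that keeps two clusters provisioned and ready to claim. The names, domain, region, and image set are placeholders, and the size value is only an example:

apiVersion: hive.openshift.io/v1
kind: ClusterPool
metadata:
  name: <my-aws-cluster-pool>
  namespace: <my-pool-namespace>
spec:
  size: 2
  baseDomain: <example.com>
  imageSetRef:
    name: <my-clusterimageset>
  platform:
    aws:
      credentialsSecretRef:
        name: <my-aws-credentials-secret>
      region: us-east-1
  pullSecretRef:
    name: <my-pool-pull-secret>

You check a cluster out of the pool by creating a ClusterClaim resource that references the pool name.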

1.7.3.2. Configuring additional manifests during cluster creation

You can configure additional Kubernetes resource manifests during the installation process of creating your cluster. This can help if you need to configure additional manifests for scenarios such as configuring networking or setting up a load balancer.

1.7.3.2.1. Prerequisite

Add a reference in the ClusterDeployment resource to a config map resource that contains the additional resource manifests.

Note: The ClusterDeployment resource and the config map must be in the same namespace.

1.7.3.2.2. Configuring additional manifests during cluster creation by using examples

If you want to configure additional manifests by using a config map with resource manifests, complete the following steps:

  1. Create a YAML file and add the following example content:

    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: <my-baremetal-cluster-install-manifests>
      namespace: <mynamespace>
    data:
      99_metal3-config.yaml: |
        kind: ConfigMap
        apiVersion: v1
        metadata:
          name: metal3-config
          namespace: openshift-machine-api
        data:
          http_port: "6180"
          provisioning_interface: "enp1s0"
          provisioning_ip: "172.00.0.3/24"
          dhcp_range: "172.00.0.10,172.00.0.100"
          deploy_kernel_url: "http://172.00.0.3:6180/images/ironic-python-agent.kernel"
          deploy_ramdisk_url: "http://172.00.0.3:6180/images/ironic-python-agent.initramfs"
          ironic_endpoint: "http://172.00.0.3:6385/v1/"
          ironic_inspector_endpoint: "http://172.00.0.3:5150/v1/"
          cache_url: "http://192.168.111.1/images"
          rhcos_image_url: "https://releases-art-rhcos.svc.ci.openshift.org/art/storage/releases/rhcos-4.3/43.81.201911192044.0/x86_64/rhcos-43.81.201911192044.0-openstack.x86_64.qcow2.gz"

    Note: The example ConfigMap contains a manifest with another ConfigMap resource. The resource manifest ConfigMap can contain multiple keys with resource configurations added in the following pattern, data.<resource_name>\.yaml.

  2. Apply the file by running the following command:

    oc apply -f <filename>.yaml

    If you want to configure additional manifests by using a ClusterDeployment by referencing a resource manifest ConfigMap, complete the following steps:

  3. Create a YAML file and add the following example content. The resource manifest ConfigMap is referenced in spec.provisioning.manifestsConfigMapRef:

    apiVersion: hive.openshift.io/v1
    kind: ClusterDeployment
    metadata:
      name: <my-baremetal-cluster>
      namespace: <mynamespace>
      annotations:
        hive.openshift.io/try-install-once: "true"
    spec:
      baseDomain: test.example.com
      clusterName: <my-baremetal-cluster>
      controlPlaneConfig:
        servingCertificates: {}
      platform:
        baremetal:
          libvirtSSHPrivateKeySecretRef:
            name: provisioning-host-ssh-private-key
      provisioning:
        installConfigSecretRef:
          name: <my-baremetal-cluster-install-config>
        sshPrivateKeySecretRef:
          name: <my-baremetal-hosts-ssh-private-key>
        manifestsConfigMapRef:
          name: <my-baremetal-cluster-install-manifests>
        imageSetRef:
          name: <my-clusterimageset>
        sshKnownHosts:
        - "10.1.8.90 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXvVVVKUYVkuyvkuygkuyTCYTytfkufTYAAAAIbmlzdHAyNTYAAABBBKWjJRzeUVuZs4yxSy4eu45xiANFIIbwE3e1aPzGD58x/NX7Yf+S8eFKq4RrsfSaK2hVJyJjvVIhUsU9z2sBJP8="
      pullSecretRef:
        name: <my-baremetal-cluster-pull-secret>
  4. Apply the file by running the following command:

    oc apply -f <filename>.yaml

1.7.3.3. Creating a cluster on Amazon Web Services

You can use the multicluster engine operator console to create a Red Hat OpenShift Container Platform cluster on Amazon Web Services (AWS).

When you create a cluster, the creation process uses the OpenShift Container Platform installer with the Hive resource. If you have questions about cluster creation after completing this procedure, see Installing on AWS in the OpenShift Container Platform documentation for more information about the process.

1.7.3.3.1. Prerequisites

See the following prerequisites before creating a cluster on AWS:

  • You must have a deployed hub cluster.
  • You need an AWS credential. See Creating a credential for Amazon Web Services for more information.
  • You need a configured domain in AWS. See Configuring an AWS account for instructions on how to configure a domain.
  • You must have Amazon Web Services (AWS) login credentials, which include user name, password, access key ID, and secret access key. See Understanding and Getting Your Security Credentials.
  • You must have an OpenShift Container Platform image pull secret. See Using image pull secrets.

    Note: If you change your cloud provider access key on the cloud provider, you also need to manually update the corresponding credential for the cloud provider on the console. This is required when your credentials expire on the cloud provider where the managed cluster is hosted and you try to delete the managed cluster.

1.7.3.3.2. Creating your AWS cluster

See the following important information about creating an AWS cluster:

  • When you review your information and optionally customize it before creating the cluster, you can select YAML: On to view the install-config.yaml file content in the panel. You can edit the YAML file with your custom settings, if you have any updates.
  • When you create a cluster, the controller creates a namespace for the cluster and the resources. Ensure that you include only resources for that cluster instance in that namespace.
  • Destroying the cluster deletes the namespace and all of the resources in it.
  • If you want to add your cluster to an existing cluster set, you must have the correct permissions on the cluster set to add it. If you do not have cluster-admin privileges when you are creating the cluster, you must select a cluster set on which you have clusterset-admin permissions.
  • If you do not have the correct permissions on the specified cluster set, the cluster creation fails. Contact your cluster administrator to provide you with clusterset-admin permissions to a cluster set if you do not have any cluster set options to select.
  • Every managed cluster must be associated with a managed cluster set. If you do not assign the managed cluster to a ManagedClusterSet, it is automatically added to the default managed cluster set.
  • If there is already a base DNS domain that is associated with the selected credential that you configured with your AWS account, that value is populated in the field. You can change the value by overwriting it. This name is used in the hostname of the cluster.
  • The release image identifies the version of the OpenShift Container Platform image that is used to create the cluster. Select the image from the list of images that are available. If the image that you want to use is not available, you can enter the URL to the image that you want to use.
  • The node pools include the control plane pool and the worker pools. The control plane nodes share the management of the cluster activity. The information includes the following fields:

    • Region: Specify the region where you want the node pool.
    • CPU architecture: If the architecture type of the managed cluster is not the same as the architecture of your hub cluster, enter a value for the instruction set architecture of the machines in the pool. Valid values are amd64, ppc64le, s390x, and arm64.
    • Zones: Specify where you want to run your control plane pools. You can select multiple zones within the region for a more distributed group of control plane nodes. A closer zone might provide faster performance, but a more distant zone might be more distributed.
    • Instance type: Specify the instance type for your control plane node. You can change the type and size of your instance after it is created.
    • Root storage: Specify the amount of root storage to allocate for the cluster.
  • You can create zero or more worker nodes in a worker pool to run the container workloads for the cluster. This can be in a single worker pool, or distributed across multiple worker pools. If zero worker nodes are specified, the control plane nodes also function as worker nodes. The optional information includes the following fields:

    • Zones: Specify where you want to run your worker pools. You can select multiple zones within the region for a more distributed group of nodes. A closer zone might provide faster performance, but a more distant zone might be more distributed.
    • Instance type: Specify the instance type of your worker pools. You can change the type and size of your instance after it is created.
    • Node count: Specify the node count of your worker pool. This setting is required when you define a worker pool.
    • Root storage: Specify the amount of root storage allocated for your worker pool. This setting is required when you define a worker pool.
  • Networking details are required for your cluster, and multiple networks are required for using IPv6. You can add an additional network by clicking Add network.
  • Proxy information that is provided in the credential is automatically added to the proxy fields. You can use the information as it is, overwrite it, or add the information if you want to enable a proxy. The following list contains the required information for creating a proxy, and a sketch of the corresponding install-config.yaml fields follows the list:

    • HTTP proxy: Specify the URL that should be used as a proxy for HTTP traffic.
    • HTTPS proxy: Specify the secure proxy URL that should be used for HTTPS traffic. If no value is provided, the same value as the HTTP Proxy URL is used for both HTTP and HTTPS.
    • No proxy sites: A comma-separated list of sites that should bypass the proxy. Begin a domain name with a period . to include all of the subdomains that are in that domain. Add an asterisk * to bypass the proxy for all destinations.
    • Additional trust bundle: One or more additional CA certificates that are required for proxying HTTPS connections.
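
The proxy values in the preceding list correspond to fields in the install-config.yaml file. The following sketch shows roughly where they land; the proxy URLs, domains, and certificate are placeholders:

proxy:
  httpProxy: http://<username>:<password>@<proxy.example.com>:<port>
  httpsProxy: https://<username>:<password>@<proxy.example.com>:<port>
  noProxy: .example.com,10.0.0.0/16
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  <additional_trust_bundle_certificate>
  -----END CERTIFICATE-----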

1.7.3.3.3. Creating your cluster with the console

To create a new cluster, see the following procedure. If you have an existing cluster that you want to import instead, see Cluster import.

Note: You do not have to run the oc command that is provided with the cluster details to import the cluster. When you create the cluster, it is automatically configured under the management of multicluster engine operator.

  1. Navigate to Infrastructure > Clusters.
  2. On the Clusters page, click Create cluster and complete the steps in the console.
  3. Optional: Select YAML: On to view content updates as you enter the information in the console.

If you need to create a credential, see Creating a credential for Amazon Web Services for more information.

The name of the cluster is used in the hostname of the cluster.

If you are using Red Hat Advanced Cluster Management for Kubernetes and want to configure your managed cluster klusterlet to run on specific nodes, see Optional: Configuring the klusterlet to run on specific nodes for the required steps.

1.7.3.3.4. Additional resources

1.7.3.4. Creating a cluster on Amazon Web Services GovCloud

You can use the console to create a Red Hat OpenShift Container Platform cluster on Amazon Web Services (AWS) or on AWS GovCloud. This procedure explains how to create a cluster on AWS GovCloud. See Creating a cluster on Amazon Web Services for the instructions for creating a cluster on AWS.

AWS GovCloud provides cloud services that meet additional requirements that are necessary to store government documents on the cloud. When you create a cluster on AWS GovCloud, you must complete additional steps to prepare your environment.

When you create a cluster, the creation process uses the OpenShift Container Platform installer with the Hive resource. If you have questions about cluster creation after completing this procedure, see Installing a cluster on AWS into a government region in the OpenShift Container Platform documentation for more information about the process. The following sections provide the steps for creating a cluster on AWS GovCloud:

1.7.3.4.1. Prerequisites

You must have the following prerequisites before creating an AWS GovCloud cluster:

  • You must have AWS login credentials, which include user name, password, access key ID, and secret access key. See Understanding and Getting Your Security Credentials.
  • You need an AWS credential. See Creating a credential for Amazon Web Services for more information.
  • You need a configured domain in AWS. See Configuring an AWS account for instructions on how to configure a domain.
  • You must have an OpenShift Container Platform image pull secret. See Using image pull secrets.
  • You must have an Amazon Virtual Private Cloud (VPC) with an existing Red Hat OpenShift Container Platform cluster for the hub cluster. This VPC must be different from the VPCs that are used for the managed cluster resources or the managed cluster service endpoints.
  • You need a VPC where the managed cluster resources are deployed. This cannot be the same as the VPCs that are used for the hub cluster or the managed cluster service endpoints.
  • You need one or more VPCs that provide the managed cluster service endpoints. This cannot be the same as the VPCs that are used for the hub cluster or the managed cluster resources.
  • Ensure that the IP addresses of the VPCs that are specified by Classless Inter-Domain Routing (CIDR) do not overlap.
  • You need a HiveConfig custom resource that references a credential within the Hive namespace. This custom resource must have access to create resources on the VPC that you created for the managed cluster service endpoints.

Note: If you change your cloud provider access key on the cloud provider, you also need to manually update the corresponding credential for the cloud provider on the multicluster engine operator console. This is required when your credentials expire on the cloud provider where the managed cluster is hosted and you try to delete the managed cluster.

1.7.3.4.2. Configure Hive to deploy on AWS GovCloud

While creating a cluster on AWS GovCloud is almost identical to creating a cluster on standard AWS, you have to complete some additional steps to prepare an AWS PrivateLink for the cluster on AWS GovCloud.

1.7.3.4.2.1. Create the VPCs for resources and endpoints

As listed in the prerequisites, two VPCs are required in addition to the VPC that contains the hub cluster. See Create a VPC in the Amazon Web Services documentation for specific steps for creating a VPC.

  1. Create a VPC for the managed cluster with private subnets.
  2. Create one or more VPCs for the managed cluster service endpoints with private subnets. Each VPC in a region has a limit of 255 VPC endpoints, so you need multiple VPCs to support more than 255 clusters in that region.
  3. For each VPC, create subnets in all of the supported availability zones of the region. Each subnet must have at least 255 usable IP addresses because of the controller requirements.

    The following example shows how you might structure subnets for VPCs that have 6 availability zones in the us-gov-east-1 region:

    vpc-1 (us-gov-east-1) : 10.0.0.0/20
      subnet-11 (us-gov-east-1a): 10.0.0.0/23
      subnet-12 (us-gov-east-1b): 10.0.2.0/23
      subnet-13 (us-gov-east-1c): 10.0.4.0/23
      subnet-14 (us-gov-east-1d): 10.0.8.0/23
      subnet-15 (us-gov-east-1e): 10.0.10.0/23
      subnet-16 (us-gov-east-1f): 10.0.12.0/23
    vpc-2 (us-gov-east-1) : 10.0.16.0/20
      subnet-21 (us-gov-east-1a): 10.0.16.0/23
      subnet-22 (us-gov-east-1b): 10.0.18.0/23
      subnet-23 (us-gov-east-1c): 10.0.20.0/23
      subnet-24 (us-gov-east-1d): 10.0.22.0/23
      subnet-25 (us-gov-east-1e): 10.0.24.0/23
      subnet-26 (us-gov-east-1f): 10.0.28.0/23
  4. Ensure that all of the hub environments (hub cluster VPCs) have network connectivity to the VPCs that you created for the VPC endpoints by using peering or transit gateways, and that all DNS settings are enabled.
  5. Collect a list of VPCs that are needed to resolve the DNS setup for the AWS PrivateLink, which is required for the AWS GovCloud connectivity. This includes at least the VPC of the multicluster engine operator instance that you are configuring, and can include the list of all of the VPCs where various Hive controllers exist.

1.7.3.4.2.2. Configure the security groups for the VPC endpoints

Each VPC endpoint in AWS has a security group attached to control access to the endpoint. When Hive creates a VPC endpoint, it does not specify a security group. The default security group of the VPC is attached to the VPC endpoint. The default security group of the VPC must have rules to allow traffic where VPC endpoints are created from the Hive installer pods. See Control access to VPC endpoints using endpoint policies in the AWS documentation for details.

For example, if Hive is running in hive-vpc(10.1.0.0/16), there must be a rule in the default security group of the VPC where the VPC endpoint is created that allows ingress from 10.1.0.0/16.
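
For example, you might add such a rule with the AWS CLI. The following command is a sketch; the security group ID is a placeholder, and the command assumes that allowing TCP traffic to port 6443 (the cluster API port) from the Hive VPC CIDR is sufficient for your endpoint:

aws ec2 authorize-security-group-ingress \
  --group-id <default-security-group-id> \
  --protocol tcp \
  --port 6443 \
  --cidr 10.1.0.0/16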

1.7.3.4.3. Creating your cluster with the console

To create a cluster from the console, navigate to Infrastructure > Clusters > Create cluster > AWS > Standalone and complete the steps in the console.

Note: This procedure is for creating a cluster. If you have an existing cluster that you want to import, see Cluster import for those steps.

If you create an AWS GovCloud cluster, the credential that you select must have access to the resources in an AWS GovCloud region. You can use an AWS GovCloud secret that is already in the Hive namespace if it has the required permissions to deploy a cluster. Existing credentials are displayed in the console. If you need to create a credential, see Creating a credential for Amazon Web Services for more information.

The name of the cluster is used in the hostname of the cluster.

Important: When you create a cluster, the controller creates a namespace for the cluster and its resources. Ensure that you include only resources for that cluster instance in that namespace. Destroying the cluster deletes the namespace and all of the resources in it.

Tip: Select YAML: On to view content updates as you enter the information in the console.

If you want to add your cluster to an existing cluster set, you must have the correct permissions on the cluster set to add it. If you do not have cluster-admin privileges when you are creating the cluster, you must select a cluster set on which you have clusterset-admin permissions. If you do not have the correct permissions on the specified cluster set, the cluster creation fails. Contact your cluster administrator to provide you with clusterset-admin permissions to a cluster set if you do not have any cluster set options to select.

Every managed cluster must be associated with a managed cluster set. If you do not assign the managed cluster to a ManagedClusterSet, it is automatically added to the default managed cluster set.

If there is already a base DNS domain that is associated with the selected credential that you configured with your AWS or AWS GovCloud account, that value is populated in the field. You can change the value by overwriting it. This name is used in the hostname of the cluster. See Configuring an AWS account for more information.

The release image identifies the version of the OpenShift Container Platform image that is used to create the cluster. If the version that you want to use is available, you can select the image from the list of images. If the image that you want to use is not a standard image, you can enter the URL to the image that you want to use. See Release images for more information about release images.

The node pools include the control plane pool and the worker pools. The control plane nodes share the management of the cluster activity. The information includes the following fields:

  • Region: The region where you create your cluster resources. If you are creating a cluster on an AWS GovCloud provider, you must include an AWS GovCloud region for your node pools. For example, us-gov-west-1.
  • CPU architecture: If the architecture type of the managed cluster is not the same as the architecture of your hub cluster, enter a value for the instruction set architecture of the machines in the pool. Valid values are amd64, ppc64le, s390x, and arm64.
  • Zones: Specify where you want to run your control plane pools. You can select multiple zones within the region for a more distributed group of control plane nodes. A closer zone might provide faster performance, but a more distant zone might be more distributed.
  • Instance type: Specify the instance type for your control plane node, which must be the same as the CPU architecture that you previously indicated. You can change the type and size of your instance after it is created.
  • Root storage: Specify the amount of root storage to allocate for the cluster.

You can create zero or more worker nodes in a worker pool to run the container workloads for the cluster. They can be in a single worker pool, or distributed across multiple worker pools. If zero worker nodes are specified, the control plane nodes also function as worker nodes. The optional information includes the following fields:

  • Pool name: Provide a unique name for your pool.
  • Zones: Specify where you want to run your worker pools. You can select multiple zones within the region for a more distributed group of nodes. A closer zone might provide faster performance, but a more distant zone might be more distributed.
  • Instance type: Specify the instance type of your worker pools. You can change the type and size of your instance after it is created.
  • Node count: Specify the node count of your worker pool. This setting is required when you define a worker pool.
  • Root storage: Specify the amount of root storage allocated for your worker pool. This setting is required when you define a worker pool.

Networking details are required for your cluster, and multiple networks are required for using IPv6. For an AWS GovCloud cluster, enter the values of the block of addresses of the Hive VPC in the Machine CIDR field. You can add an additional network by clicking Add network.
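
The Machine CIDR value maps to the networking.machineNetwork stanza of the install-config.yaml file. The following sketch uses a placeholder CIDR; replace it with the address block of your Hive VPC:

networking:
  machineNetwork:
  - cidr: 10.1.0.0/16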

Proxy information that is provided in the credential is automatically added to the proxy fields. You can use the information as it is, overwrite it, or add the information if you want to enable a proxy. The following list contains the required information for creating a proxy:

  • HTTP proxy URL: Specify the URL that should be used as a proxy for HTTP traffic.
  • HTTPS proxy URL: Specify the secure proxy URL that should be used for HTTPS traffic. If no value is provided, the same value as the HTTP Proxy URL is used for both HTTP and HTTPS.
  • No proxy domains: A comma-separated list of domains that should bypass the proxy. Begin a domain name with a period . to include all of the subdomains that are in that domain. Add an asterisk * to bypass the proxy for all destinations.
  • Additional trust bundle: One or more additional CA certificates that are required for proxying HTTPS connections.

When creating an AWS GovCloud cluster or using a private environment, complete the fields on the AWS private configuration page with the AMI ID and the subnet values. Ensure that the value of spec:platform:aws:privateLink:enabled is set to true in the ClusterDeployment.yaml file, which is automatically set when you select Use private configuration.
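
The following excerpt shows where that setting appears in the ClusterDeployment YAML; all other fields are omitted:

spec:
  platform:
    aws:
      privateLink:
        enabled: true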

When you review your information and optionally customize it before creating the cluster, you can select YAML: On to view the install-config.yaml file content in the panel. You can edit the YAML file with your custom settings, if you have any updates.

Note: You do not have to run the oc command that is provided with the cluster details to import the cluster. When you create the cluster, it is automatically configured under the management of multicluster engine for Kubernetes operator.

If you are using Red Hat Advanced Cluster Management for Kubernetes and want to configure your managed cluster klusterlet to run on specific nodes, see Optional: Configuring the klusterlet to run on specific nodes for the required steps.

Continue with Accessing your cluster for instructions for accessing your cluster.

1.7.3.5. Creating a cluster on Microsoft Azure

You can use the multicluster engine operator console to deploy a Red Hat OpenShift Container Platform cluster on Microsoft Azure or on Microsoft Azure Government.

When you create a cluster, the creation process uses the OpenShift Container Platform installer with the Hive resource. If you have questions about cluster creation after completing this procedure, see Installing on Azure in the OpenShift Container Platform documentation for more information about the process.

1.7.3.5.1. Prerequisites

See the following prerequisites before creating a cluster on Azure:

Note: If you change your cloud provider access key on the cloud provider, you also need to manually update the corresponding credential for the cloud provider on the console of multicluster engine operator. This is required when your credentials expire on the cloud provider where the managed cluster is hosted and you try to delete the managed cluster.

1.7.3.5.2. Creating your cluster with the console

To create a cluster from the multicluster engine operator console, navigate to Infrastructure > Clusters. On the Clusters page, click Create cluster and complete the steps in the console.

Note: This procedure is for creating a cluster. If you have an existing cluster that you want to import, see Cluster import for those steps.

If you need to create a credential, see Creating a credential for Microsoft Azure for more information.

The name of the cluster is used in the hostname of the cluster.

Important: When you create a cluster, the controller creates a namespace for the cluster and its resources. Ensure that you include only resources for that cluster instance in that namespace. Destroying the cluster deletes the namespace and all of the resources in it.

Tip: Select YAML: On to view content updates as you enter the information in the console.

If you want to add your cluster to an existing cluster set, you must have the correct permissions on the cluster set to add it. If you do not have cluster-admin privileges when you are creating the cluster, you must select a cluster set on which you have clusterset-admin permissions. If you do not have the correct permissions on the specified cluster set, the cluster creation fails. Contact your cluster administrator to provide you with clusterset-admin permissions to a cluster set if you do not have any cluster set options to select.

Every managed cluster must be associated with a managed cluster set. If you do not assign the managed cluster to a ManagedClusterSet, it is automatically added to the default managed cluster set.

If there is already a base DNS domain that is associated with the selected credential that you configured for your Azure account, that value is populated in that field. You can change the value by overwriting it. See Configuring a custom domain name for an Azure cloud service for more information. This name is used in the hostname of the cluster.

The release image identifies the version of the OpenShift Container Platform image that is used to create the cluster. If the version that you want to use is available, you can select the image from the list of images. If the image that you want to use is not a standard image, you can enter the URL to the image that you want to use. See Release images for more information about release images.

The Node pools include the control plane pool and the worker pools. The control plane nodes share the management of the cluster activity. The information includes the following optional fields:

  • Region: Specify a region where you want to run your node pools. You can select multiple zones within the region for a more distributed group of control plane nodes. A closer zone might provide faster performance, but a more distant zone might be more distributed.
  • CPU architecture: If the architecture type of the managed cluster is not the same as the architecture of your hub cluster, enter a value for the instruction set architecture of the machines in the pool. Valid values are amd64, ppc64le, s390x, and arm64.

You can change the type and size of the Instance type and Root storage allocation (required) of your control plane pool after your cluster is created.

You can create one or more worker nodes in a worker pool to run the container workloads for the cluster. They can be in a single worker pool, or distributed across multiple worker pools. If zero worker nodes are specified, the control plane nodes also function as worker nodes. The information includes the following fields:

  • Zones: Specify where you want to run your worker pools. You can select multiple zones within the region for a more distributed group of nodes. A closer zone might provide faster performance, but a more distant zone might be more distributed.
  • Instance type: You can change the type and size of your instance after it is created.

You can add an additional network by clicking Add network. You must have more than one network if you are using IPv6 addresses.

Proxy information that is provided in the credential is automatically added to the proxy fields. You can use the information as it is, overwrite it, or add the information if you want to enable a proxy. The following list contains the required information for creating a proxy:

  • HTTP proxy: The URL that should be used as a proxy for HTTP traffic.
  • HTTPS proxy: The secure proxy URL that should be used for HTTPS traffic. If no value is provided, the same value as the HTTP Proxy URL is used for both HTTP and HTTPS.
  • No proxy: A comma-separated list of domains that should bypass the proxy. Begin a domain name with a period . to include all of the subdomains that are in that domain. Add an asterisk * to bypass the proxy for all destinations.
  • Additional trust bundle: One or more additional CA certificates that are required for proxying HTTPS connections.

When you review your information and optionally customize it before creating the cluster, you can click the YAML switch On to view the install-config.yaml file content in the panel. You can edit the YAML file with your custom settings, if you have any updates.

If you are using Red Hat Advanced Cluster Management for Kubernetes and want to configure your managed cluster klusterlet to run on specific nodes, see Optional: Configuring the klusterlet to run on specific nodes for the required steps.

Note: You do not have to run the oc command that is provided with the cluster details to import the cluster. When you create the cluster, it is automatically configured under the management of multicluster engine operator.

Continue with Accessing your cluster for instructions for accessing your cluster.

1.7.3.6. Creating a cluster on Google Cloud Platform

Follow the procedure to create a Red Hat OpenShift Container Platform cluster on Google Cloud Platform (GCP). For more information about GCP, see Google Cloud Platform.

When you create a cluster, the creation process uses the OpenShift Container Platform installer with the Hive resource. If you have questions about cluster creation after completing this procedure, see Installing on GCP in the OpenShift Container Platform documentation for more information about the process.

1.7.3.6.1. Prerequisites

See the following prerequisites before creating a cluster on GCP:

Note: If you change your cloud provider access key on the cloud provider, you also need to manually update the corresponding credential for the cloud provider on the console of multicluster engine operator. This is required when your credentials expire on the cloud provider where the managed cluster is hosted and you try to delete the managed cluster.

1.7.3.6.2. Creating your cluster with the console

To create clusters from the multicluster engine operator console, navigate to Infrastructure > Clusters. On the Clusters page, click Create cluster and complete the steps in the console.

Note: This procedure is for creating a cluster. If you have an existing cluster that you want to import, see Cluster import for those steps.

If you need to create a credential, see Creating a credential for Google Cloud Platform for more information.

The name of your cluster is used in the hostname of the cluster. There are some restrictions that apply to naming your GCP cluster. These restrictions include not beginning the name with goog or containing a group of letters and numbers that resemble google anywhere in the name. See Bucket naming guidelines for the complete list of restrictions.

Important: When you create a cluster, the controller creates a namespace for the cluster and its resources. Ensure that you include only resources for that cluster instance in that namespace. Destroying the cluster deletes the namespace and all of the resources in it.

Tip: Select YAML: On to view content updates as you enter the information in the console.

If you want to add your cluster to an existing cluster set, you must have the correct permissions on the cluster set to add it. If you do not have cluster-admin privileges when you are creating the cluster, you must select a cluster set on which you have clusterset-admin permissions. If you do not have the correct permissions on the specified cluster set, the cluster creation fails. Contact your cluster administrator to provide you with clusterset-admin permissions to a cluster set if you do not have any cluster set options to select.

Every managed cluster must be associated with a managed cluster set. If you do not assign the managed cluster to a ManagedClusterSet, it is automatically added to the default managed cluster set.

If there is already a base DNS domain that is associated with the selected credential for your GCP account, that value is populated in the field. You can change the value by overwriting it. See Setting up a custom domain for more information. This name is used in the hostname of the cluster.

The release image identifies the version of the OpenShift Container Platform image that is used to create the cluster. If the version that you want to use is available, you can select the image from the list of images. If the image that you want to use is not a standard image, you can enter the URL to the image that you want to use. See Release images for more information about release images.

The Node pools include the control plane pool and the worker pools. The control plane nodes share the management of the cluster activity. The information includes the following fields:

  • Region: Specify a region where you want to run your control plane pools. A closer region might provide faster performance, but a more distant region might be more distributed.
  • CPU architecture: If the architecture type of the managed cluster is not the same as the architecture of your hub cluster, enter a value for the instruction set architecture of the machines in the pool. Valid values are amd64, ppc64le, s390x, and arm64.

You can specify the instance type of your control plane pool. You can change the type and size of your instance after it is created.

You can create one or more worker nodes in a worker pool to run the container workloads for the cluster. They can be in a single worker pool, or distributed across multiple worker pools. If zero worker nodes are specified, the control plane nodes also function as worker nodes. The information includes the following fields:

  • Instance type: You can change the type and size of your instance after it is created.
  • Node count: This setting is required when you define a worker pool.

The networking details are required, and multiple networks are required for using IPv6 addresses. You can add an additional network by clicking Add network.

Proxy information that is provided in the credential is automatically added to the proxy fields. You can use the information as it is, overwrite it, or add the information if you want to enable a proxy. The following list contains the required information for creating a proxy:

  • HTTP proxy: The URL that should be used as a proxy for HTTP traffic.
  • HTTPS proxy: The secure proxy URL that should be used for HTTPS traffic. If no value is provided, the same value as the HTTP Proxy URL is used for both HTTP and HTTPS.
  • No proxy sites: A comma-separated list of sites that should bypass the proxy. Begin a domain name with a period . to include all of the subdomains that are in that domain. Add an asterisk * to bypass the proxy for all destinations.
  • Additional trust bundle: One or more additional CA certificates that are required for proxying HTTPS connections.

When you review your information and optionally customize it before creating the cluster, you can select YAML: On to view the install-config.yaml file content in the panel. You can edit the YAML file with your custom settings, if you have any updates.

If you are using Red Hat Advanced Cluster Management for Kubernetes and want to configure your managed cluster klusterlet to run on specific nodes, see Optional: Configuring the klusterlet to run on specific nodes for the required steps.

Note: You do not have to run the oc command that is provided with the cluster details to import the cluster. When you create the cluster, it is automatically configured under the management of multicluster engine operator.

Continue with Accessing your cluster for instructions for accessing your cluster.

1.7.3.7. Creating a cluster on VMware vSphere

You can use the multicluster engine operator console to deploy a Red Hat OpenShift Container Platform cluster on VMware vSphere.

When you create a cluster, the creation process uses the OpenShift Container Platform installer with the Hive resource. If you have questions about cluster creation after completing this procedure, see Installing on vSphere in the OpenShift Container Platform documentation for more information about the process.

1.7.3.7.1. Prerequisites

See the following prerequisites before creating a cluster on vSphere:

  • You must have a hub cluster that is deployed on a supported OpenShift Container Platform version.
  • You need a vSphere credential. See Creating a credential for VMware vSphere for more information.
  • You need an OpenShift Container Platform image pull secret. See Using image pull secrets.
  • You must have the following information for the VMware instance where you are deploying:

    • Required static IP addresses for API and Ingress instances
    • DNS records for the following entries (a verification sketch follows this list):

      • The following API base domain must point to the static API VIP:

        api.<cluster_name>.<base_domain>
      • The following application base domain must point to the static IP address for Ingress VIP:

        *.apps.<cluster_name>.<base_domain>
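
To verify that the records resolve before you create the cluster, you can run commands similar to the following sketch. Each command returns the corresponding VIP if DNS is configured correctly:

dig +short api.<cluster_name>.<base_domain>
dig +short test.apps.<cluster_name>.<base_domain>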

1.7.3.7.2. Creating your cluster with the console

To create a cluster from the multicluster engine operator console, navigate to Infrastructure > Clusters. On the Clusters page, click Create cluster and complete the steps in the console.

Note: This procedure is for creating a cluster. If you have an existing cluster that you want to import, see Cluster import for those steps.

If you need to create a credential, see Creating a credential for VMware vSphere for more information about creating a credential.

The name of your cluster is used in the hostname of the cluster.

Important: When you create a cluster, the controller creates a namespace for the cluster and its resources. Ensure that you include only resources for that cluster instance in that namespace. Destroying the cluster deletes the namespace and all of the resources in it.

Tip: Select YAML: On to view content updates as you enter the information in the console.

If you want to add your cluster to an existing cluster set, you must have the correct permissions on the cluster set to add it. If you do not have cluster-admin privileges when you are creating the cluster, you must select a cluster set on which you have clusterset-admin permissions. If you do not have the correct permissions on the specified cluster set, the cluster creation fails. Contact your cluster administrator to provide you with clusterset-admin permissions to a cluster set if you do not have any cluster set options to select.

Every managed cluster must be associated with a managed cluster set. If you do not assign the managed cluster to a ManagedClusterSet, it is automatically added to the default managed cluster set.

If there is already a base domain associated with the selected credential that you configured for your vSphere account, that value is populated in the field. You can change the value by overwriting it. See Installing a cluster on vSphere with customizations for more information. This value must match the name that you used to create the DNS records listed in the prerequisites section. This name is used in the hostname of the cluster.

The release image identifies the version of the OpenShift Container Platform image that is used to create the cluster. If the version that you want to use is available, you can select the image from the list of images. If the image that you want to use is not a standard image, you can enter the URL to the image that you want to use. See Release images for more information about release images.

Note: Release images for OpenShift Container Platform versions 4.15 and later are supported.

The node pools include the control plane pool and the worker pools. The control plane nodes share the management of the cluster activity. The information includes the CPU architecture field. View the following field description:

  • CPU architecture: If the architecture type of the managed cluster is not the same as the architecture of your hub cluster, enter a value for the instruction set architecture of the machines in the pool. Valid values are amd64, ppc64le, s390x, and arm64.

You can create one or more worker nodes in a worker pool to run the container workloads for the cluster. They can be in a single worker pool, or distributed across multiple worker pools. If zero worker nodes are specified, the control plane nodes also function as worker nodes. The information includes Cores per socket, CPUs, Memory_min MiB, Disk size in GiB, and Node count.

Networking information is required. Multiple networks are required for using IPv6. Some of the required networking information is included in the following fields:

  • vSphere network name: Specify the VMware vSphere network name.
  • API VIP: Specify the IP address to use for internal API communication.

    Note: This value must match the name that you used to create the DNS records listed in the prerequisites section. If not provided, the DNS must be pre-configured so that api. resolves correctly.

  • Ingress VIP: Specify the IP address to use for ingress traffic.

    Note: This value must match the name that you used to create the DNS records listed in the prerequisites section. If not provided, the DNS must be pre-configured so that test.apps. resolves correctly.

You can add an additional network by clicking Add network. You must have more than one network if you are using IPv6 addresses.

Proxy information that is provided in the credential is automatically added to the proxy fields. You can use the information as it is, overwrite it, or add the information if you want to enable a proxy. The following list contains the required information for creating a proxy:

  • HTTP proxy: Specify the URL that should be used as a proxy for HTTP traffic.
  • HTTPS proxy: Specify the secure proxy URL that should be used for HTTPS traffic. If no value is provided, the same value as the HTTP Proxy URL is used for both HTTP and HTTPS.
  • No proxy sites: Provide a comma-separated list of sites that should bypass the proxy. Begin a domain name with a period . to include all of the subdomains that are in that domain. Add an asterisk * to bypass the proxy for all destinations.
  • Additional trust bundle: One or more additional CA certificates that are required for proxying HTTPS connections.
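
These values typically map to the proxy settings in the install-config.yaml file that you can review in the YAML panel. The following is a minimal sketch with hypothetical proxy endpoints and bypass values; your URLs, bypass list, and CA bundle differ:

proxy:
  httpProxy: http://proxy.example.com:3128
  httpsProxy: https://proxy.example.com:3128
  noProxy: .internal.example.com,10.0.0.0/16
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  <CA certificate contents>
  -----END CERTIFICATE-----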

You can define the disconnected installation image by clicking Disconnected installation. When you create a cluster with a disconnected installation and a certificate is required to access the mirror registry, you must enter it in the Additional trust bundle field, either in the Configuration for disconnected installation section when configuring your credential or in the Disconnected installation section when creating the cluster.

You can click Add automation template to create a template.

When you review your information and optionally customize it before creating the cluster, you can click the YAML switch On to view the install-config.yaml file content in the panel. You can edit the YAML file with your custom settings, if you have any updates.

If you are using Red Hat Advanced Cluster Management for Kubernetes and want to configure your managed cluster klusterlet to run on specific nodes, see Optional: Configuring the klusterlet to run on specific nodes for the required steps.

Note: You do not have to run the oc command that is provided with the cluster details to import the cluster. When you create the cluster, it is automatically configured under the management of multicluster engine operator.

Continue with Accessing your cluster for instructions for accessing your cluster.

1.7.3.8. Creating a cluster on Red Hat OpenStack Platform

You can use the multicluster engine operator console to deploy a Red Hat OpenShift Container Platform cluster on Red Hat OpenStack Platform.

When you create a cluster, the creation process uses the OpenShift Container Platform installer with the Hive resource. If you have questions about cluster creation after completing this procedure, see Installing on OpenStack in the OpenShift Container Platform documentation for more information about the process.

1.7.3.8.1. Prerequisites

See the following prerequisites before creating a cluster on Red Hat OpenStack Platform:

  • You must have a hub cluster that is deployed on OpenShift Container Platform version 4.6 or later.
  • You must have a Red Hat OpenStack Platform credential. See Creating a credential for Red Hat OpenStack Platform for more information.
  • You need an OpenShift Container Platform image pull secret. See Using image pull secrets.
  • You need the following information for the Red Hat OpenStack Platform instance where you are deploying:

    • Flavor name for the control plane and worker instances; for example, m1.xlarge
    • Network name for the external network to provide the floating IP addresses
    • Required floating IP addresses for API and ingress instances
    • DNS records for:

      • The following API base domain must point to the floating IP address for the API:

        api.<cluster_name>.<base_domain>
      • The following application base domain must point to the floating IP address for ingress:

        *.apps.<cluster_name>.<base_domain>
1.7.3.8.2. Creating your cluster with the console

To create a cluster from the multicluster engine operator console, navigate to Infrastructure > Clusters. On the Clusters page, click Create cluster and complete the steps in the console.

Note: This procedure is for creating a cluster. If you have an existing cluster that you want to import, see Cluster import for those steps.

If you need to create a credential, see Creating a credential for Red Hat OpenStack Platform for more information.

The name of the cluster is used in the hostname of the cluster. The name must contain fewer than 15 characters. This value must match the name that you used to create the DNS records listed in the credential prerequisites section.

Important: When you create a cluster, the controller creates a namespace for the cluster and its resources. Ensure that you include only resources for that cluster instance in that namespace. Destroying the cluster deletes the namespace and all of the resources in it.

Tip: Select YAML: On to view content updates as you enter the information in the console.

If you want to add your cluster to an existing cluster set, you must have the correct permissions on the cluster set to add it. If you do not have cluster-admin privileges when you are creating the cluster, you must select a cluster set on which you have clusterset-admin permissions. If you do not have the correct permissions on the specified cluster set, the cluster creation fails. Contact your cluster administrator to provide you with clusterset-admin permissions to a cluster set if you do not have any cluster set options to select.

Every managed cluster must be associated with a managed cluster set. If you do not assign the managed cluster to a ManagedClusterSet, it is automatically added to the default managed cluster set.

If there is already a base DNS domain that is associated with the selected credential that you configured for your Red Hat OpenStack Platform account, that value is populated in the field. You can change the value by overwriting it. See Managing domains in the Red Hat OpenStack Platform documentation for more information. This name is used in the hostname of the cluster.

The release image identifies the version of the OpenShift Container Platform image that is used to create the cluster. If the version that you want to use is available, you can select the image from the list of images. If the image that you want to use is not a standard image, you can enter the URL to the image that you want to use. See Release images for more information about release images. Only release images for OpenShift Container Platform versions 4.6.x and higher are supported.

The node pools include the control plane pool and the worker pools. The control plane nodes share the management of the cluster activity. If the architecture type of the managed cluster is not the same as the architecture of your hub cluster, enter a value for the instruction set architecture of the machines in the pool. Valid values are amd64, ppc64le, s390x, and arm64.

You must add an instance type for your control plane pool, but you can change the type and size of your instance after it is created.

You can create one or more worker nodes in a worker pool to run the container workloads for the cluster. They can be in a single worker pool, or distributed across multiple worker pools. If zero worker nodes are specified, the control plane nodes also function as worker nodes. The information includes the following fields:

  • Instance type: You can change the type and size of your instance after it is created.
  • Node count: Specify the node count for your worker pool. This setting is required when you define a worker pool.

Networking details are required for your cluster. You must provide the values for one or more networks for an IPv4 network. For an IPv6 network, you must define more than one network.

You can add an additional network by clicking Add network. You must have more than one network if you are using IPv6 addresses.

Proxy information that is provided in the credential is automatically added to the proxy fields. You can use the information as it is, overwrite it, or add the information if you want to enable a proxy. The following list contains the required information for creating a proxy:

  • HTTP proxy: Specify the URL that should be used as a proxy for HTTP traffic.
  • HTTPS proxy: The secure proxy URL that should be used for HTTPS traffic. If no value is provided, the same value as the HTTP Proxy is used for both HTTP and HTTPS.
  • No proxy: Define a comma-separated list of sites that should bypass the proxy. Begin a domain name with a period . to include all of the subdomains that are in that domain. Add an asterisk * to bypass the proxy for all destinations.
  • Additional trust bundle: One or more additional CA certificates that are required for proxying HTTPS connections.

You can define the disconnected installation image by clicking Disconnected installation. When you create a cluster by using the Red Hat OpenStack Platform provider with a disconnected installation, if a certificate is required to access the mirror registry, you must enter it in the Additional trust bundle field, either in the Configuration for disconnected installation section when configuring your credential or in the Disconnected installation section when creating the cluster.

When you review your information and optionally customize it before creating the cluster, you can click the YAML switch On to view the install-config.yaml file content in the panel. You can edit the YAML file with your custom settings, if you have any updates.

When creating a cluster that uses an internal certificate authority (CA), you need to customize the YAML file for your cluster by completing the following steps:

  1. With the YAML switch on at the review step, insert a Secret object at the top of the list with the CA certificate bundle. Note: If the Red Hat OpenStack Platform environment provides services using certificates signed by multiple authorities, the bundle must include the certificates to validate all of the required endpoints. The addition for a cluster named ocp3 resembles the following example:

    apiVersion: v1
    kind: Secret
    type: Opaque
    metadata:
      name: ocp3-openstack-trust
      namespace: ocp3
    stringData:
      ca.crt: |
        -----BEGIN CERTIFICATE-----
        <Base64 certificate contents here>
        -----END CERTIFICATE-----
        -----BEGIN CERTIFICATE-----
        <Base64 certificate contents here>
        -----END CERTIFICATE-----
  2. Modify the Hive ClusterDeployment object to specify the value of certificatesSecretRef in spec.platform.openstack, similar to the following example:

    platform:
      openstack:
        certificatesSecretRef:
          name: ocp3-openstack-trust
        credentialsSecretRef:
          name: ocp3-openstack-creds
        cloud: openstack

    The previous example assumes that the cloud name in the clouds.yaml file is openstack.
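
    The cloud value is the named entry in your clouds.yaml file. A minimal hypothetical entry for a cloud named openstack resembles the following sketch; the endpoint and credential values are placeholders:

    clouds:
      openstack:
        auth:
          auth_url: https://openstack.example.com:13000
          username: <username>
          password: <password>
          project_name: ocp3
          user_domain_name: Default
          project_domain_name: Default
        region_name: regionOne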

If you are using Red Hat Advanced Cluster Management for Kubernetes and want to configure your managed cluster klusterlet to run on specific nodes, see Optional: Configuring the klusterlet to run on specific nodes for the required steps.

Note: You do not have to run the oc command that is provided with the cluster details to import the cluster. When you create the cluster, it is automatically configured under the management of multicluster engine operator.

Continue with Accessing your cluster for instructions for accessing your cluster.

1.7.3.9. Creating a cluster in an on-premises environment

You can use the console to create on-premises Red Hat OpenShift Container Platform clusters. The clusters can be single-node OpenShift clusters, multi-node clusters, and compact three-node clusters on VMware vSphere, Red Hat OpenStack, Nutanix, or in a bare metal environment.

There is no integration with the platform where you install the cluster, because the platform value is set to platform=none. A single-node OpenShift cluster contains only a single node, which hosts the control plane services and the user workloads. This configuration can be helpful when you want to minimize the resource footprint of the cluster.

You can also provision multiple single-node OpenShift clusters on edge resources by using the zero touch provisioning feature, which is a feature that is available with Red Hat OpenShift Container Platform. For more information about zero touch provisioning, see Clusters at the network far edge in the OpenShift Container Platform documentation.

1.7.3.9.1. Prerequisites

See the following prerequisites before creating a cluster in an on-premises environment:

  • You must have a deployed hub cluster on a supported OpenShift Container Platform version.
  • You need a configured infrastructure environment with a host inventory of configured hosts.
  • You must have internet access for your hub cluster (connected), or a connection to an internal or mirror registry that has a connection to the internet (disconnected) to retrieve the required images for creating the cluster.
  • You need a configured on-premises credential.
  • You need an OpenShift Container Platform image pull secret. See Using image pull secrets.
  • You need the following DNS records:

    • The following API base domain must point to the static API VIP:

      api.<cluster_name>.<base_domain>
    • The following application base domain must point to the static IP address for Ingress VIP:

      *.apps.<cluster_name>.<base_domain>
1.7.3.9.2. Creating your cluster with the console

To create a cluster from the console, complete the following steps:

  1. Navigate to Infrastructure > Clusters.
  2. On the Clusters page, click Create cluster and complete the steps in the console.
  3. Select Host inventory as the type of cluster.

The following options are available for your assisted installation:

  • Use existing discovered hosts: Select your hosts from a list of hosts that are in an existing host inventory.
  • Discover new hosts: Discover your own hosts, rather than using hosts that are already in an existing infrastructure environment.

If you need to create a credential, see Creating a credential for an on-premises environment for more information.

The name for your cluster is used in the hostname of the cluster.

Important: When you create a cluster, the controller creates a namespace for the cluster and its resources. Ensure that you include only resources for that cluster instance in that namespace. Destroying the cluster deletes the namespace and all of the resources in it.

Note: Select YAML: On to view content updates as you enter the information in the console.

If you want to add your cluster to an existing cluster set, you must have the correct permissions on the cluster set to add it. If you do not have cluster-admin privileges when you are creating the cluster, you must select a cluster set on which you have clusterset-admin permissions. If you do not have the correct permissions on the specified cluster set, the cluster creation fails. Contact your cluster administrator to provide you with clusterset-admin permissions to a cluster set if you do not have any cluster set options to select.

Every managed cluster must be associated with a managed cluster set. If you do not assign the managed cluster to a ManagedClusterSet, it is automatically added to the default managed cluster set.

If there is already a base DNS domain that is associated with the selected credential that you configured for your provider account, that value is populated in that field. You can change the value by overwriting it, but this setting cannot be changed after the cluster is created. The base domain of your provider is used to create routes to your Red Hat OpenShift Container Platform cluster components. It is configured in the DNS of your cluster provider as a Start of Authority (SOA) record.

The OpenShift version identifies the version of the OpenShift Container Platform image that is used to create the cluster. If the version that you want to use is available, you can select the image from the list of images. If the image that you want to use is not a standard image, you can enter the URL to the image that you want to use. See Release images to learn more.

When you select a supported OpenShift Container Platform version, an option to select Install single-node OpenShift is displayed. A single-node OpenShift cluster contains a single node which hosts the control plane services and the user workloads. See Scaling hosts to an infrastructure environment to learn more about adding nodes to a single-node OpenShift cluster after it is created.

If you want your cluster to be a single-node OpenShift cluster, select the single-node OpenShift option. You can add additional workers to single-node OpenShift clusters by completing the following steps:

  1. From the console, navigate to Infrastructure > Clusters and select the name of the cluster that you created or want to access.
  2. Select Actions > Add hosts to add additional workers.

Note: The single-node OpenShift control plane requires 8 CPU cores, while a control plane node for a multinode control plane cluster only requires 4 CPU cores.

After you review and save the cluster, your cluster is saved as a draft cluster. You can close the creation process and finish the process later by selecting the cluster name on the Clusters page.

If you are using existing hosts, select whether you want to select the hosts yourself, or if you want them to be selected automatically. The number of hosts is based on the number of nodes that you selected. For example, a single-node OpenShift cluster only requires one host, while a standard three-node cluster requires three hosts.

The locations of the available hosts that meet the requirements for this cluster are displayed in the list of Host locations. To distribute the hosts for a higher-availability configuration, select multiple locations.

If you are discovering new hosts with no existing infrastructure environment, complete the steps in Adding hosts to the host inventory by using the Discovery Image.

After the hosts are bound, and the validations pass, complete the networking information for your cluster by adding the following IP addresses:

  • API VIP: Specifies the IP address to use for internal API communication.

    Note: This value must match the name that you used to create the DNS records listed in the prerequisites section. If not provided, the DNS must be pre-configured so that api. resolves correctly.

  • Ingress VIP: Specifies the IP address to use for ingress traffic.

    Note: This value must match the name that you used to create the DNS records listed in the prerequisites section. If not provided, the DNS must be pre-configured so that test.apps. resolves correctly.

If you are using Red Hat Advanced Cluster Management for Kubernetes and want to configure your managed cluster klusterlet to run on specific nodes, see Optional: Configuring the klusterlet to run on specific nodes for the required steps.

You can view the status of the installation on the Clusters navigation page.

Continue with Accessing your cluster for instructions for accessing your cluster.

1.7.3.9.3. Creating your cluster with the command line

You can also create a cluster without the console by using the Assisted Installer feature within the central infrastructure management component. After you complete this procedure, you can boot the host from the discovery image that is generated. The order of the procedures is generally not important, but is noted when there is a required order.

1.7.3.9.3.1. Create the namespace

You need a namespace for your resources. It is more convenient to keep all of the resources in a shared namespace. This example uses sample-namespace for the name of the namespace, but you can use any name except assisted-installer. Create a namespace by creating and applying the following file:

apiVersion: v1
kind: Namespace
metadata:
  name: sample-namespace
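
For example, assuming that you saved the previous content to a file named namespace.yaml, you can create and verify the namespace by running commands similar to the following:

oc apply -f namespace.yaml
oc get namespace sample-namespace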
1.7.3.9.3.2. Add the pull secret to the namespace

Add your pull secret to your namespace by creating and applying the following custom resource:

apiVersion: v1
kind: Secret
type: kubernetes.io/dockerconfigjson
metadata:
  name: <pull-secret>
  namespace: sample-namespace
stringData:
  .dockerconfigjson: 'your-pull-secret-json' 1
1
Add the content of the pull secret. For example, this can include a cloud.openshift.com, quay.io, or registry.redhat.io authentication.
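
For example, assuming that you saved the previous content to a file named pull-secret.yaml, you can apply it and confirm that the secret exists by running commands similar to the following:

oc apply -f pull-secret.yaml
oc get secret -n sample-namespace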
1.7.3.9.3.3. Generate a ClusterImageSet

Generate a ClusterImageSet to specify the version of OpenShift Container Platform for your cluster by creating and applying the following custom resource:

apiVersion: hive.openshift.io/v1
kind: ClusterImageSet
metadata:
  name: openshift-v4.15.0
spec:
  releaseImage: quay.io/openshift-release-dev/ocp-release:4.15.0-rc.0-x86_64
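
For example, assuming that you saved the previous content to a file named clusterimageset.yaml, you can apply it and confirm the release image by running commands similar to the following:

oc apply -f clusterimageset.yaml
oc get clusterimageset openshift-v4.15.0 -o jsonpath='{.spec.releaseImage}'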
1.7.3.9.3.4. Create the ClusterDeployment custom resource

The ClusterDeployment custom resource definition is an API that controls the lifecycle of the cluster. It references the AgentClusterInstall custom resource in the spec.ClusterInstallRef setting which defines the cluster parameters.

Create and apply a ClusterDeployment custom resource based on the following example:

apiVersion: hive.openshift.io/v1
kind: ClusterDeployment
metadata:
  name: single-node
  namespace: demo-worker4
spec:
  baseDomain: hive.example.com
  clusterInstallRef:
    group: extensions.hive.openshift.io
    kind: AgentClusterInstall
    name: test-agent-cluster-install 1
    version: v1beta1
  clusterName: test-cluster
  controlPlaneConfig:
    servingCertificates: {}
  platform:
    agentBareMetal:
      agentSelector:
        matchLabels:
          location: internal
  pullSecretRef:
    name: <pull-secret> 2
1
Use the name of your AgentClusterInstall resource.
2
Use the pull secret that you downloaded in Add the pull secret to the namespace.
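
After you apply the resource, you can confirm that it was created by running a command similar to the following, which assumes the name and namespace from the previous example:

oc get clusterdeployment single-node -n demo-worker4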
1.7.3.9.3.5. Create the AgentClusterInstall custom resource

In the AgentClusterInstall custom resource, you can specify many of the requirements for the clusters. For example, you can specify the cluster network settings, platform, number of control planes, and worker nodes.

Create and apply a custom resource that resembles the following example:

apiVersion: extensions.hive.openshift.io/v1beta1
kind: AgentClusterInstall
metadata:
  name: test-agent-cluster-install
  namespace: demo-worker4
spec:
  platformType: BareMetal 1
  clusterDeploymentRef:
    name: single-node 2
  imageSetRef:
    name: openshift-v4.15.0 3
  networking:
    clusterNetwork:
    - cidr: 10.128.0.0/14
      hostPrefix: 23
    machineNetwork:
    - cidr: 192.168.111.0/24
    serviceNetwork:
    - 172.30.0.0/16
  provisionRequirements:
    controlPlaneAgents: 1
  sshPublicKey: ssh-rsa <your-public-key-here> 4
1
Specify the platform type of the environment where the cluster is created. Valid values are: BareMetal, None, VSphere, Nutanix, or External.
2
Use the same name that you used for your ClusterDeployment resource.
3
Use the ClusterImageSet that you generated in Generate a ClusterImageSet.
4
You can specify your SSH public key, which enables you to access the host after it is installed.
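
After you apply the resource, you can check its validation and installation progress by viewing the status conditions. The following command is a sketch that assumes the names from the previous example:

oc get agentclusterinstall test-agent-cluster-install -n demo-worker4 -o jsonpath='{.status.conditions}'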
1.7.3.9.3.6. Optional: Create the NMStateConfig custom resource

The NMStateConfig custom resource is only required if you have a host-level network configuration, such as static IP addresses. If you include this custom resource, you must complete this step before creating an InfraEnv custom resource. The NMStateConfig is referred to by the values for spec.nmStateConfigLabelSelector in the InfraEnv custom resource.

Create and apply your NMStateConfig custom resource, which resembles the following example. Replace values where needed:

apiVersion: agent-install.openshift.io/v1beta1
kind: NMStateConfig
metadata:
  name: <mynmstateconfig>
  namespace: <demo-worker4>
  labels:
    demo-nmstate-label: <value>
spec:
  config:
    interfaces:
      - name: eth0
        type: ethernet
        state: up
        mac-address: 02:00:00:80:12:14
        ipv4:
          enabled: true
          address:
            - ip: 192.168.111.30
              prefix-length: 24
          dhcp: false
      - name: eth1
        type: ethernet
        state: up
        mac-address: 02:00:00:80:12:15
        ipv4:
          enabled: true
          address:
            - ip: 192.168.140.30
              prefix-length: 24
          dhcp: false
    dns-resolver:
      config:
        server:
          - 192.168.126.1
    routes:
      config:
        - destination: 0.0.0.0/0
          next-hop-address: 192.168.111.1
          next-hop-interface: eth1
          table-id: 254
        - destination: 0.0.0.0/0
          next-hop-address: 192.168.140.1
          next-hop-interface: eth1
          table-id: 254
  interfaces:
    - name: "eth0"
      macAddress: "02:00:00:80:12:14"
    - name: "eth1"
      macAddress: "02:00:00:80:12:15"

Note: You must include the demo-nmstate-label label name and value in the InfraEnv resource spec.nmStateConfigLabelSelector.matchLabels field.

1.7.3.9.3.7. Create the InfraEnv custom resource

The InfraEnv custom resource provides the configuration to create the discovery ISO. Within this custom resource, you identify values for proxy settings, ignition overrides, and specify NMState labels. The value of spec.nmStateConfigLabelSelector in this custom resource references the NMStateConfig custom resource.

Note: If you plan to include the optional NMStateConfig custom resource, you must reference it in the InfraEnv custom resource. If you created the InfraEnv custom resource before the NMStateConfig custom resource, edit the InfraEnv custom resource to reference the NMStateConfig custom resource and download the ISO after the reference is added.

Create and apply the following custom resource:

apiVersion: agent-install.openshift.io/v1beta1
kind: InfraEnv
metadata:
  name: myinfraenv
  namespace: demo-worker4
spec:
  clusterRef:
    name: single-node  1
    namespace: demo-worker4 2
  pullSecretRef:
    name: pull-secret
  sshAuthorizedKey: <your_public_key_here>
  nmStateConfigLabelSelector:
    matchLabels:
      demo-nmstate-label: value
  proxy:
    httpProxy: http://USERNAME:PASSWORD@proxy.example.com:PORT
    httpsProxy: https://USERNAME:PASSWORD@proxy.example.com:PORT
    noProxy: .example.com,172.22.0.0/24,10.10.0.0/24
1
Use the name of the ClusterDeployment resource that you created in Create the ClusterDeployment custom resource.
2
Use the namespace of the ClusterDeployment resource that you created in Create the ClusterDeployment custom resource.
1.7.3.9.3.7.1. InfraEnv field table
Field | Optional or required | Description

sshAuthorizedKey

Optional

You can specify your SSH public key, which enables you to access the host when it is booted from the discovery ISO image.

nmStateConfigLabelSelector

Optional

Consolidates advanced network configuration such as static IPs, bridges, and bonds for the hosts. The host network configuration is specified in one or more NMStateConfig resources with labels you choose. The nmStateConfigLabelSelector property is a Kubernetes label selector that matches your chosen labels. The network configuration for all NMStateConfig labels that match this label selector is included in the Discovery Image. When you boot, each host compares each configuration to its network interfaces and applies the appropriate configuration.

proxy

Optional

You can specify proxy settings required by the host during discovery in the proxy section.

Note: When provisioning with IPv6, you cannot define a CIDR address block in the noProxy settings. You must define each address separately.
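
For example, when provisioning with IPv6, a noProxy value that lists hypothetical addresses individually might resemble the following:

noProxy: .example.com,2001:db8::10,2001:db8::11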

1.7.3.9.3.8. Boot the host from the discovery image

The remaining steps explain how to boot the host from the discovery ISO image that results from the previous procedures.

  1. Download the discovery image from the namespace by running the following command:

    curl --insecure -o image.iso $(kubectl -n sample-namespace get infraenvs.agent-install.openshift.io myinfraenv -o=jsonpath="{.status.isoDownloadURL}")
  2. Move the discovery image to virtual media, a USB drive, or another storage location and boot the host from the discovery image that you downloaded.
  3. The Agent resource is created automatically. It is registered to the cluster and represents a host that booted from a discovery image. Approve the Agent custom resource and start the installation by running the following command:

    oc -n sample-namespace patch agents.agent-install.openshift.io 07e80ea9-200c-4f82-aff4-4932acb773d4 -p '{"spec":{"approved":true}}' --type merge

    Replace the agent name and UUID with your values.

    You can confirm that it was approved when the output of the previous command includes an entry for the target cluster that includes a value of true for the APPROVED parameter.
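
    To find the name of the Agent resource to approve, you can list the agents in the namespace by running a command similar to the following:

    oc get agents.agent-install.openshift.io -n sample-namespace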

1.7.3.9.4. Additional resources

1.7.3.10. Creating a cluster in a proxy environment

You can create a Red Hat OpenShift Container Platform cluster when your hub cluster is connected through a proxy server. One of the following situations must be true for the cluster creation to succeed:

  • multicluster engine operator has a private network connection with the managed cluster that you are creating, with managed cluster access to the Internet by using a proxy.
  • The managed cluster is on an infrastructure provider, but the firewall ports enable communication from the managed cluster to the hub cluster.

To create a cluster that is configured with a proxy, complete the following steps:

  1. Configure the cluster-wide-proxy setting on the hub cluster by adding the following information to your install-config YAML that is stored in your Secret:

    apiVersion: v1
    kind: Proxy
    baseDomain: <domain>
    proxy:
      httpProxy: http://<username>:<password>@<proxy.example.com>:<port>
      httpsProxy: https://<username>:<password>@<proxy.example.com>:<port>
      noProxy: <wildcard-of-domain>,<provisioning-network/CIDR>,<BMC-address-range/CIDR>

    Replace username with the username for your proxy server.

    Replace password with the password to access your proxy server.

    Replace proxy.example.com with the path of your proxy server.

    Replace port with the communication port with the proxy server.

    Replace wildcard-of-domain with an entry for domains that should bypass the proxy.

    Replace provisioning-network/CIDR with the IP address of the provisioning network and the number of assigned IP addresses, in CIDR notation.

    Replace BMC-address-range/CIDR with the BMC address and the number of addresses, in CIDR notation.

  2. Provision the cluster by completing the procedure for creating a cluster. See Creating a cluster to select your provider.

Note: You can only use install-config YAML when deploying your cluster. After deploying your cluster, any new changes you make to install-config YAML do not apply. To update the configuration after deployment, you must use policies. See Pod policy for more information.

1.7.3.10.1. Additional resources
  • See Creating clusters to select your provider.
  • See Pod policy to learn how to make configuration changes after deploying your cluster.

1.7.3.11. Configuring AgentClusterInstall proxy

The AgentClusterInstall proxy fields determine the proxy settings during installation, and are used to create the cluster-wide proxy resource in the created cluster.

1.7.3.11.1. Configuring AgentClusterInstall

To configure the AgentClusterInstall proxy, add the proxy settings to the AgentClusterInstall resource. See the following YAML sample with httpProxy, httpsProxy, and noProxy:

apiVersion: extensions.hive.openshift.io/v1beta1
kind: AgentClusterInstall
spec:
  proxy:
    httpProxy: http://<username>:<password>@<proxy.example.com>:<port> 1
    httpsProxy: https://<username>:<password>@<proxy.example.com>:<port> 2
    noProxy: <wildcard-of-domain>,<provisioning-network/CIDR>,<BMC-address-range/CIDR> 3
1
httpProxy is the URL of the proxy for HTTP requests. Replace the username and password values with your credentials for your proxy server. Replace proxy.example.com with the path of your proxy server.
2
httpsProxy is the URL of the proxy for HTTPS requests. Replace the values with your credentials. Replace port with the communication port with the proxy server.
3
noProxy is a comma-separated list of domains and CIDRs for which the proxy should not be used. Replace wildcard-of-domain with an entry for domains that should bypass the proxy. Replace provisioning-network/CIDR with the IP address of the provisioning network and the number of assigned IP addresses, in CIDR notation. Replace BMC-address-range/CIDR with the BMC address and the number of addresses, in CIDR notation.
1.7.3.11.2. Additional resources

1.7.4. Cluster import

You can import clusters from different Kubernetes cloud providers. After you import, the target cluster becomes a managed cluster for the multicluster engine operator hub cluster. You can generally complete the import tasks anywhere that you can access the hub cluster and the target managed cluster, unless otherwise specified.

  • A hub cluster cannot manage any other hub cluster, but can manage itself. The hub cluster is configured to automatically be imported and self-managed. You do not need to manually import the hub cluster.
  • If you remove a hub cluster and try to import it again, you must add the local-cluster:true label to the ManagedCluster resource.

Important: Cluster lifecycle now supports all providers that are certified through the Cloud Native Computing Foundation (CNCF) Kubernetes Conformance Program. Choose a vendor that is recognized by CNCF for your hybrid cloud multicluster management.

See the following information about using CNCF providers:

Read the following topics to learn more about importing a cluster so that you can manage it:

Required user type or access level: Cluster administrator

1.7.4.1. Importing a managed cluster by using the console

After you install multicluster engine for Kubernetes operator, you are ready to import a cluster to manage. Continue reading the following topics to learn how to import a managed cluster by using the console:

1.7.4.1.1. Prerequisites
  • A deployed hub cluster. If you are importing bare metal clusters, the hub cluster must be installed on a supported Red Hat OpenShift Container Platform version.
  • A cluster you want to manage.
  • The base64 command line tool.
  • A defined multiclusterhub.spec.imagePullSecret if you are importing a cluster that was not created by OpenShift Container Platform. This secret might have been created when multicluster engine for Kubernetes operator was installed. See Custom image pull secret for more information about how to define this secret.

Required user type or access level: Cluster administrator

1.7.4.1.2. Creating a new pull secret

If you need to create a new pull secret, complete the following steps:

  1. Download your Kubernetes pull secret from cloud.redhat.com.
  2. Add the pull secret to the namespace of your hub cluster.
  3. Run the following command to create a new secret in the open-cluster-management namespace:

    oc create secret generic pull-secret -n <open-cluster-management> --from-file=.dockerconfigjson=<path-to-pull-secret> --type=kubernetes.io/dockerconfigjson

    Replace open-cluster-management with the name of the namespace of your hub cluster. The default namespace of the hub cluster is open-cluster-management.

    Replace path-to-pull-secret with the path to the pull secret that you downloaded.

    The secret is automatically copied to the managed cluster when it is imported.

    • Ensure that a previously installed agent is deleted from the cluster that you want to import. You must remove the open-cluster-management-agent and open-cluster-management-agent-addon namespaces to avoid errors.
    • For importing in a Red Hat OpenShift Dedicated environment, see the following notes:

      • You must have the hub cluster deployed in a Red Hat OpenShift Dedicated environment.
      • The default permission in Red Hat OpenShift Dedicated is dedicated-admin, but that does not contain all of the permissions to create a namespace. You must have cluster-admin permissions to import and manage a cluster with multicluster engine operator.
1.7.4.1.3. Importing a cluster

You can import existing clusters from the console for each of the available cloud providers.

Note: A hub cluster cannot manage a different hub cluster. A hub cluster is set up to automatically import and manage itself, so you do not have to manually import a hub cluster to manage itself.

By default, the namespace is used for the cluster name and namespace, but you can change it.

Important: When you create a cluster, the controller creates a namespace for the cluster and its resources. Ensure that you include only resources for that cluster instance in that namespace. Destroying the cluster deletes the namespace and all of the resources in it.

Every managed cluster must be associated with a managed cluster set. If you do not assign the managed cluster to a ManagedClusterSet, the cluster is automatically added to the default managed cluster set.

If you want to add the cluster to a different cluster set, you must have clusterset-admin privileges to the cluster set. If you do not have cluster-admin privileges when you are importing the cluster, you must select a cluster set on which you have clusterset-admin permissions. If you do not have the correct permissions on the specified cluster set, the cluster importing fails. Contact your cluster administrator to provide you with clusterset-admin permissions to a cluster set if you do not have cluster set options to select.

If you import an OpenShift Container Platform Dedicated cluster and do not specify a vendor by adding a label for vendor=OpenShiftDedicated, or if you add a label for vendor=auto-detect, a managed-by=platform label is automatically added to the cluster. You can use this added label to identify the cluster as an OpenShift Container Platform Dedicated cluster and retrieve the OpenShift Container Platform Dedicated clusters as a group.

The following table provides the available options for import mode, which specifies the method for importing the cluster:

Run import commands manually

After completing and submitting the information in the console, including any Red Hat Ansible Automation Platform templates, run the provided command on the target cluster to import the cluster.

Enter your server URL and API token for the existing cluster

Provide the server URL and API token of the cluster that you are importing. You can specify a Red Hat Ansible Automation Platform template to run when the cluster is upgraded.

Provide the kubeconfig file

Copy and paste the contents of the kubeconfig file of the cluster that you are importing. You can specify a Red Hat Ansible Automation Platform template to run when the cluster is upgraded.

Note: You must have the Red Hat Ansible Automation Platform Resource Operator installed from OperatorHub to create and run an Ansible Automation Platform job.

To configure a cluster API address, see Optional: Configuring the cluster API address.

To configure your managed cluster klusterlet to run on specific nodes, see Optional: Configuring the klusterlet to run on specific nodes.

1.7.4.1.3.1. Optional: Configuring the cluster API address

Complete the following steps to optionally configure the Cluster API address that is displayed on the cluster details page. This address is the URL that is displayed in the table when you run the oc get managedcluster command:

  1. Log in to your hub cluster with an ID that has cluster-admin permissions.
  2. Configure a kubeconfig file for your targeted managed cluster.
  3. Edit the managed cluster entry for the cluster that you are importing by running the following command, replacing cluster-name with the name of the managed cluster:

    oc edit managedcluster <cluster-name>
  4. Add the ManagedClusterClientConfigs section to the ManagedCluster spec in the YAML file, as shown in the following example:

    spec:
      hubAcceptsClient: true
      managedClusterClientConfigs:
      - url: <https://api.new-managed.dev.redhat.com> 1
    1
    Replace the value of the URL with the URL that provides external access to the managed cluster that you are importing.
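
After you save the change, you can confirm that the new URL is displayed by running a command similar to the following, replacing cluster-name with the name of your managed cluster:

oc get managedcluster <cluster-name>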
1.7.4.1.3.2. Optional: Configuring the klusterlet to run on specific nodes

You can specify which nodes you want the managed cluster klusterlet to run on by configuring the nodeSelector and tolerations annotation for the managed cluster. Complete the following steps to configure these settings:

  1. Select the managed cluster that you want to update from the clusters page in the console.
  2. Set the YAML switch to On to view the YAML content.

    Note: The YAML editor is only available when importing or creating a cluster. To edit the managed cluster YAML definition after importing or creating, you must use the OpenShift Container Platform command-line interface or the Red Hat Advanced Cluster Management search feature.

  3. Add the nodeSelector annotation to the managed cluster YAML definition. The key for this annotation is: open-cluster-management/nodeSelector. The value of this annotation is a string map with JSON formatting.
  4. Add the tolerations entry to the managed cluster YAML definition. The key of this annotation is: open-cluster-management/tolerations. The value of this annotation represents a toleration list with JSON formatting. The resulting YAML might resemble the following example:

    apiVersion: cluster.open-cluster-management.io/v1
    kind: ManagedCluster
    metadata:
      annotations:
        open-cluster-management/nodeSelector: '{"dedicated":"acm"}'
        open-cluster-management/tolerations: '[{"key":"dedicated","operator":"Equal","value":"acm","effect":"NoSchedule"}]'

You can also use a KlusterletConfig to configure the nodeSelector and tolerations for the managed cluster. Complete the following steps to configure these settings:

Note: If you use a KlusterletConfig, the managed cluster uses the configuration in the KlusterletConfig settings instead of the settings in the managed cluster annotation.

  1. Apply the following sample YAML content. Replace value where needed:

    apiVersion: config.open-cluster-management.io/v1alpha1
    kind: KlusterletConfig
    metadata:
      name: <klusterletconfigName>
    spec:
      nodePlacement:
        nodeSelector:
          dedicated: acm
        tolerations:
          - key: dedicated
            operator: Equal
            value: acm
            effect: NoSchedule
  2. Add the agent.open-cluster-management.io/klusterlet-config: <klusterletconfigName> annotation to the managed cluster, replacing <klusterletconfigName> with the name of your KlusterletConfig.
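
    For example, assuming a hypothetical KlusterletConfig named my-klusterlet-config, you can add the annotation by running a command similar to the following:

    oc annotate managedcluster <cluster_name> agent.open-cluster-management.io/klusterlet-config=my-klusterlet-config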
1.7.4.1.4. Removing an imported cluster

Complete the following procedure to remove an imported cluster and the open-cluster-management-agent-addon that was created on the managed cluster.

On the Clusters page, click Actions > Detach cluster to remove your cluster from management.

Note: If you attempt to detach the hub cluster, which is named local-cluster, be aware that the default setting of disableHubSelfManagement is false. This setting causes the hub cluster to reimport itself and manage itself when it is detached and it reconciles the MultiClusterHub controller. It might take hours for the hub cluster to complete the detachment process and reimport. If you want to reimport the hub cluster without waiting for the processes to finish, you can run the following command to restart the multiclusterhub-operator pod and reimport faster:

oc delete po -n open-cluster-management `oc get pod -n open-cluster-management | grep multiclusterhub-operator| cut -d' ' -f1`

You can change the value of the hub cluster to not import automatically by changing the disableHubSelfManagement value to true. For more information, see the disableHubSelfManagement topic.

1.7.4.1.4.1. Additional resources

1.7.4.2. Importing a managed cluster by using the CLI

After you install multicluster engine for Kubernetes operator, you are ready to import a cluster and manage it by using the Red Hat OpenShift Container Platform CLI. Continue reading the following topics to learn how to import a managed cluster with the CLI by using the auto import secret, or by using manual commands.

Important: A hub cluster cannot manage a different hub cluster. A hub cluster is set up to automatically import and manage itself as a local cluster. You do not have to manually import a hub cluster to manage itself. If you remove a hub cluster and try to import it again, you need to add the local-cluster:true label.

1.7.4.2.1. Prerequisites
  • A deployed hub cluster. If you are importing bare metal clusters, the hub cluster must be installed on a supported OpenShift Container Platform version.
  • A separate cluster you want to manage.
  • The OpenShift Container Platform CLI. See Getting started with the OpenShift CLI for information about installing and configuring the OpenShift Container Platform CLI.
  • A defined multiclusterhub.spec.imagePullSecret if you are importing a cluster that was not created by OpenShift Container Platform. This secret might have been created when multicluster engine for Kubernetes operator was installed. See Custom image pull secret for more information about how to define this secret.
1.7.4.2.2. Supported architectures
  • Linux (x86_64, s390x, ppc64le)
  • macOS
1.7.4.2.3. Preparing for cluster import

Before importing a managed cluster by using the CLI, you must complete the following steps:

  1. Log in to your hub cluster by running the following command:

    oc login
  2. Run the following command on the hub cluster to create the project and namespace. The cluster name that is defined in <cluster_name> is also used as the cluster namespace in the YAML file and commands:

    oc new-project <cluster_name>

    Important: The cluster.open-cluster-management.io/managedCluster label is automatically added to and removed from a managed cluster namespace. Do not manually add it to or remove it from a managed cluster namespace.

  3. Create a file named managed-cluster.yaml with the following example content:

    apiVersion: cluster.open-cluster-management.io/v1
    kind: ManagedCluster
    metadata:
      name: <cluster_name>
      labels:
        cloud: auto-detect
        vendor: auto-detect
    spec:
      hubAcceptsClient: true

    When the values for cloud and vendor are set to auto-detect, Red Hat Advanced Cluster Management detects the cloud and vendor types automatically from the cluster that you are importing. You can optionally replace the auto-detect values with the cloud and vendor values for your cluster. See the following example:

    cloud: Amazon
    vendor: OpenShift
  4. Apply the YAML file to the ManagedCluster resource by running the following command:

    oc apply -f managed-cluster.yaml

You can now continue with either Importing the cluster by using the auto import secret or Importing the cluster manually.

1.7.4.2.4. Importing a cluster by using the auto import secret

To import a managed cluster by using the auto import secret, you must create a secret that contains either a reference to the kubeconfig file of the cluster, or the kube API server and token pair of the cluster. Complete the following steps to import a cluster by using the auto import secret:

  1. Retrieve the kubeconfig file, or the kube API server and token, of the managed cluster that you want to import. See the documentation for your Kubernetes cluster to learn where to locate your kubeconfig file or your kube API server and token.
  2. Create the auto-import-secret.yaml file in the ${CLUSTER_NAME} namespace.

    1. Create a YAML file named auto-import-secret.yaml by using content that is similar to the following template:

      apiVersion: v1
      kind: Secret
      metadata:
        name: auto-import-secret
        namespace: <cluster_name>
      stringData:
        autoImportRetry: "5"
        # If you are using the kubeconfig file, add the following value for the kubeconfig file
        # that has the current context set to the cluster to import:
        kubeconfig: |-
          <kubeconfig_file>
        # If you are using the token/server pair, add the following two values instead of
        # the kubeconfig file:
        token: <Token to access the cluster>
        server: <cluster_api_url>
      type: Opaque
    2. Apply the YAML file in the <cluster_name> namespace by running the following command:

      oc apply -f auto-import-secret.yaml

      Note: By default, the auto import secret is used one time and deleted when the import process completes. If you want to keep the auto import secret, add the managedcluster-import-controller.open-cluster-management.io/keeping-auto-import-secret annotation to the secret. You can add it by running the following command:

      oc -n <cluster_name> annotate secrets auto-import-secret managedcluster-import-controller.open-cluster-management.io/keeping-auto-import-secret=""
  3. Validate the JOINED and AVAILABLE status for your imported cluster. Run the following command from the hub cluster:

    oc get managedcluster <cluster_name>
  4. Log in to the managed cluster by running the following command on the cluster:

    oc login
  5. You can validate the pod status on the cluster that you are importing by running the following command:

    oc get pod -n open-cluster-management-agent

You can now continue with Importing the klusterlet add-on.

1.7.4.2.5. Importing a cluster manually

Important: The import command contains pull secret information that is copied to each of the imported managed clusters. Anyone who can access the imported clusters can also view the pull secret information.

Complete the following steps to import a managed cluster manually:

  1. Obtain the klusterlet-crd.yaml file that was generated by the import controller on your hub cluster by running the following command:

    oc get secret <cluster_name>-import -n <cluster_name> -o jsonpath={.data.crds\\.yaml} | base64 --decode > klusterlet-crd.yaml
  2. Obtain the import.yaml file that was generated by the import controller on your hub cluster by running the following command:

    oc get secret <cluster_name>-import -n <cluster_name> -o jsonpath={.data.import\\.yaml} | base64 --decode > import.yaml

    Proceed with the following steps in the cluster that you are importing:

  3. Log in to the managed cluster that you are importing by entering the following command:

    oc login
  4. Apply the klusterlet-crd.yaml that you generated in step 1 by running the following command:

    oc apply -f klusterlet-crd.yaml
  5. Apply the import.yaml file that you previously generated by running the following command:

    oc apply -f import.yaml
  6. You can validate the JOINED and AVAILABLE status for the managed cluster that you are importing by running the following command from the hub cluster:

    oc get managedcluster <cluster_name>

You can now continue with Importing the klusterlet add-on.

1.7.4.2.6. Importing the klusterlet add-on

Implement the KlusterletAddonConfig klusterlet add-on configuration to enable other add-ons on your managed clusters. Create and apply the configuration file by completing the following steps:

  1. Create a YAML file that is similar to the following example:

    apiVersion: agent.open-cluster-management.io/v1
    kind: KlusterletAddonConfig
    metadata:
      name: <cluster_name>
      namespace: <cluster_name>
    spec:
      applicationManager:
        enabled: true
      certPolicyController:
        enabled: true
      iamPolicyController:
        enabled: true
      policyController:
        enabled: true
      searchCollector:
        enabled: true
  2. Save the file as klusterlet-addon-config.yaml.
  3. Apply the YAML by running the following command:

    oc apply -f klusterlet-addon-config.yaml

    Add-ons are installed after the status of the managed cluster that you are importing is AVAILABLE.

  4. You can validate the pod status of add-ons on the cluster you are importing by running the following command:

    oc get pod -n open-cluster-management-agent-addon
1.7.4.2.7. Removing an imported cluster by using the command line interface

To remove a managed cluster by using the command line interface, run the following command:

oc delete managedcluster <cluster_name>

Replace <cluster_name> with the name of the cluster.

1.7.4.3. Importing a managed cluster by using agent registration

After you install multicluster engine for Kubernetes operator, you are ready to import a cluster and manage it by using the agent registration endpoint. Continue reading the following topics to learn how to import a managed cluster by using the agent registration endpoint.

1.7.4.3.1. Prerequisites
  • A deployed hub cluster. If you are importing bare metal clusters, the hub cluster must be installed on a supported OpenShift Container Platform version.
  • A cluster you want to manage.
  • The base64 command line tool.
  • A defined multiclusterhub.spec.imagePullSecret if you are importing a cluster that was not created by OpenShift Container Platform. This secret might have been created when multicluster engine for Kubernetes operator was installed. See Custom image pull secret for more information about how to define this secret.

    If you need to create a new secret, see Creating a new pull secret.
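
    To confirm that the image pull secret is defined, you can inspect the hub configuration. The following is a minimal sketch that assumes Red Hat Advanced Cluster Management is installed, so a MultiClusterHub resource exists; on a hub cluster with only multicluster engine operator, check the spec.imagePullSecret field of the MultiClusterEngine resource instead:

    # Print the name of the configured image pull secret
    oc get multiclusterhub --all-namespaces -o jsonpath='{.items[0].spec.imagePullSecret}'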

1.7.4.3.2. Supported architectures
  • Linux (x86_64, s390x, ppc64le)
  • macOS

1.7.4.3.3. Importing a cluster

To import a managed cluster by using the agent registration endpoint, complete the following steps:

  1. Get the agent registration server URL by running the following command on the hub cluster:

    export agent_registration_host=$(oc get route -n multicluster-engine agent-registration -o=jsonpath="{.spec.host}")

    Note: If your hub cluster is using a cluster-wide proxy, make sure that you are using a URL that the managed cluster can access.

  2. Get the cacert by running the following command:

    oc get configmap -n kube-system kube-root-ca.crt -o=jsonpath="{.data['ca\.crt']}" > ca.crt

    Note: If the agent registration endpoint does not use a certificate issued by kube-root-ca, use the CA of the public agent-registration API endpoint instead of the kube-root-ca CA.

  3. Create a token to authenticate with the agent registration server by applying the following YAML content:

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: managed-cluster-import-agent-registration-sa
      namespace: multicluster-engine
    ---
    apiVersion: v1
    kind: Secret
    type: kubernetes.io/service-account-token
    metadata:
      name: managed-cluster-import-agent-registration-sa-token
      namespace: multicluster-engine
      annotations:
        kubernetes.io/service-account.name: "managed-cluster-import-agent-registration-sa"
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: managedcluster-import-controller-agent-registration-client
    rules:
    - nonResourceURLs: ["/agent-registration/*"]
      verbs: ["get"]
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: managed-cluster-import-agent-registration
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: managedcluster-import-controller-agent-registration-client
    subjects:
      - kind: ServiceAccount
        name: managed-cluster-import-agent-registration-sa
        namespace: multicluster-engine
  4. Run the following command to export the token:

    export token=$(oc get secret -n multicluster-engine managed-cluster-import-agent-registration-sa-token -o=jsonpath='{.data.token}' | base64 -d)
  5. Enable the automatic approval and patch the content to cluster-manager by running the following command:

    oc patch clustermanager cluster-manager --type=merge -p '{"spec":{"registrationConfiguration":{"featureGates":[
    {"feature": "ManagedClusterAutoApproval", "mode": "Enable"}], "autoApproveUsers":["system:serviceaccount:multicluster-engine:agent-registration-bootstrap"]}}}'

    Note: You can also disable automatic approval and manually approve certificate signing requests from managed clusters.

  6. Switch to your managed cluster and apply the klusterlet CRDs from the agent registration endpoint by running the following command:

    curl --cacert ca.crt -H "Authorization: Bearer $token" https://$agent_registration_host/agent-registration/crds/v1 | oc apply -f -
  7. Run the following command to import the managed cluster to the hub cluster. Replace <clusterName> with the name of your cluster. Replace <duration> with a time value, for example, 4h:

    Optional: Replace <klusterletconfigName> with the name of your KlusterletConfig.

    curl --cacert ca.crt -H "Authorization: Bearer $token" "https://$agent_registration_host/agent-registration/manifests/<clusterName>?klusterletconfig=<klusterletconfigName>&duration=<duration>" | oc apply -f -

    Note: The bootstrap kubeconfig in the klusterlet manifest does not expire if you do not set a duration.
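
    As a usage example, the following sketch imports a hypothetical cluster named cluster1 with a 4-hour bootstrap kubeconfig and omits the optional klusterletconfig parameter:

    export cluster_name=cluster1   # hypothetical managed cluster name

    # Run on the managed cluster: fetch the import manifests from the agent
    # registration endpoint and apply them
    curl --cacert ca.crt -H "Authorization: Bearer $token" "https://$agent_registration_host/agent-registration/manifests/$cluster_name?duration=4h" | oc apply -f -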

1.7.4.4. Importing an on-premises Red Hat OpenShift Container Platform cluster manually by using central infrastructure management

After you install multicluster engine for Kubernetes operator, you are ready to import a managed cluster. You can import an existing OpenShift Container Platform cluster so that you can add more nodes. Continue reading the following topics to learn more:

1.7.4.4.1. Prerequisites
  • Enable the central infrastructure management feature.
1.7.4.4.2. Importing a cluster

Complete the following steps to import an OpenShift Container Platform cluster manually, without a static network or a bare metal host, and prepare it for adding nodes:

  1. Create a namespace for the OpenShift Container Platform cluster that you want to import by applying the following YAML content:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: managed-cluster
  2. Make sure that a ClusterImageSet matching the OpenShift Container Platform cluster you are importing exists by applying the following YAML content:

    apiVersion: hive.openshift.io/v1
    kind: ClusterImageSet
    metadata:
      name: openshift-v4.15
    spec:
      releaseImage: quay.io/openshift-release-dev/ocp-release@sha256:22e149142517dfccb47be828f012659b1ccf71d26620e6f62468c264a7ce7863
  3. Add your pull secret to access the image by applying the following YAML content:

    apiVersion: v1
    kind: Secret
    type: kubernetes.io/dockerconfigjson
    metadata:
      name: pull-secret
      namespace: managed-cluster
    stringData:
      .dockerconfigjson: <pull-secret-json> 1
    1
    Replace <pull-secret-json> with your pull secret JSON.
  4. Copy the kubeconfig from your OpenShift Container Platform cluster to the hub cluster.

    1. Get the kubeconfig from your OpenShift Container Platform cluster by running the following command. Make sure that kubeconfig is set as the cluster being imported:

      oc get secret -n openshift-kube-apiserver node-kubeconfigs -ojson | jq '.data["lb-ext.kubeconfig"]' --raw-output | base64 -d > /tmp/kubeconfig.some-other-cluster

      Note: If your cluster API is accessed through a custom domain, you must first edit this kubeconfig by adding your custom certificates in the certificate-authority-data field and by changing the server field to match your custom domain.

    2. Copy the kubeconfig to the hub cluster by running the following command. Make sure that kubeconfig is set as your hub cluster:

      oc -n managed-cluster create secret generic some-other-cluster-admin-kubeconfig --from-file=kubeconfig=/tmp/kubeconfig.some-other-cluster
  5. Create an AgentClusterInstall custom resource by applying the following YAML content. Replace values where needed:

    apiVersion: extensions.hive.openshift.io/v1beta1
    kind: AgentClusterInstall
    metadata:
      name: <your-cluster-name> 1
      namespace: managed-cluster
    spec:
      networking:
        userManagedNetworking: true
      clusterDeploymentRef:
        name: <your-cluster-name>
      imageSetRef:
        name: openshift-v4.15
      provisionRequirements:
        controlPlaneAgents: <number_of_control_plane_agents> 2
      sshPublicKey: <""> 3
    1
    Choose a name for your cluster.
    2
    Use 1 if you are using a single-node OpenShift cluster. Use 3 if you are using a multinode cluster.
    3
    Add the optional sshPublicKey field to log in to nodes for troubleshooting.
  6. Create a ClusterDeployment by applying the following YAML content. Replace values where needed:

    apiVersion: hive.openshift.io/v1
    kind: ClusterDeployment
    metadata:
      name: <your-cluster-name> 1
      namespace: managed-cluster
    spec:
      baseDomain: <redhat.com> 2
      installed: <true> 3
      clusterMetadata:
          adminKubeconfigSecretRef:
            name: <your-cluster-name-admin-kubeconfig> 4
          clusterID: <""> 5
          infraID: <""> 6
      clusterInstallRef:
        group: extensions.hive.openshift.io
        kind: AgentClusterInstall
        name: <your-cluster-name>
        version: v1beta1
      clusterName: your-cluster-name
      platform:
        agentBareMetal:
      pullSecretRef:
        name: pull-secret
    1
    Choose a name for your cluster.
    2
    Make sure baseDomain matches the domain you are using for your OpenShift Container Platform cluster.
    3
    Set to true to automatically import your OpenShift Container Platform cluster as a production environment cluster.
    4
    Reference the kubeconfig you created in step 4.
    5 6
    Leave clusterID and infraID empty in production environments.
  7. Add an InfraEnv custom resource to discover new hosts to add to your cluster by applying the following YAML content. Replace values where needed:

    Note: The following example might require additional configuration if you are not using a static IP address.

    apiVersion: agent-install.openshift.io/v1beta1
    kind: InfraEnv
    metadata:
      name: your-infraenv
      namespace: managed-cluster
    spec:
      clusterRef:
        name: your-cluster-name
        namespace: managed-cluster
      pullSecretRef:
        name: pull-secret
      sshAuthorizedKey: ""
Table 1.4. InfraEnv field table
Field | Optional or required | Description

clusterRef

Optional

The clusterRef field is optional if you are using late binding. If you are not using late b