Manage cluster
Abstract
Learn how to create, import, and manage clusters across cloud providers by using the Red Hat Advanced Cluster Management for Kubernetes console.
Learn how to manage clusters across cloud providers in the following topics:
Chapter 2. Supported clouds
Learn about the cloud providers that are available with Red Hat Advanced Cluster Management for Kubernetes. Also, find the documented managed providers that are available.
Best practice: For managed cluster providers, use the latest version of Kubernetes.
2.1. Supported hub cluster provider
Red Hat OpenShift Container Platform 4.3.18 or later, 4.4.4 or later, and 4.5.2 or later are supported for the hub cluster.
2.2. Supported managed cluster providers
Red Hat OpenShift Container Platform 3.11.200 or later, 4.3.18 or later, 4.4.4 or later, and 4.5.2 or later are supported for the managed clusters.
See the available managed cluster options and documentation:
2.3. Configuring kubectl
From the vendor documentation previously listed, you might need to learn how to configure your kubectl. You must have kubectl installed when you import a managed cluster to a hub cluster. See Importing a target managed cluster to the hub cluster for details.
Chapter 3. Resizing a cluster
You can customize your managed cluster specifications, such as virtual machine sizes and number of nodes. See the following list of recommended settings for each available provider, but also see the documentation for more specific information:
3.1. Amazon Web Services
You can change the number of nodes of a Red Hat OpenShift Container Platform cluster that was created in an Amazon Web Services environment by modifying the MachineSet parameters on the hub cluster.
Remember: Because Red Hat Advanced Cluster Management for Kubernetes uses Hive for OpenShift to determine the number of nodes in the cluster, you must change the MachineSet parameter to change the number of nodes. If you just remove or add a node without changing the MachineSet parameter, nodes are added or removed to match the current value of that parameter.
See Recommended cluster scaling practices and Manually scaling a MachineSet in the OpenShift Container Platform documentation that applies to your version.
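The following is a generic sketch of that manual scaling procedure with the oc CLI, not an Advanced Cluster Management-specific interface; the MachineSet name and replica count are placeholders that you replace with values from your environment:
  oc get machinesets -n openshift-machine-api
  oc scale --replicas=3 machineset <machineset_name> -n openshift-machine-api
The --replicas value sets the total number of nodes that the MachineSet maintains.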
Tip: If you created the cluster by using the Red Hat Advanced Cluster Management for Kubernetes console, then it is an OpenShift Container Platform cluster.
If you are changing the number of nodes of an Amazon EKS cluster that you imported, see Cluster autoscaler for information about scaling the cluster.
3.2. Google Cloud Platform
You can change the number of nodes of a Red Hat OpenShift Container Platform cluster that was created in a Google Cloud Platform environment by modifying the MachineSet parameters on the hub cluster.
Remember: Because Red Hat Advanced Cluster Management for Kubernetes uses Hive for OpenShift to determine the number of nodes in the cluster, you must change the MachineSet parameter to change the number of nodes. If you just remove or add a node without changing the MachineSet parameter, nodes are added or removed to match the current value of that parameter.
See Recommended cluster scaling practices and Manually scaling a MachineSet in the OpenShift Container Platform documentation that applies to your version for more information about scaling your cluster. Tip: If you created the cluster by using Red Hat Advanced Cluster Management, then it is an OpenShift Container Platform cluster.
If you are changing the number of nodes of a Google Kubernetes Engine cluster that you imported, see Resizing a cluster for information about scaling the cluster.
3.3. Microsoft Azure
You can change the number of nodes of a Red Hat OpenShift Container Platform cluster that was created in a Microsoft Azure environment by modifying the MachineSet parameters on the hub cluster.
Remember: Because Red Hat Advanced Cluster Management for Kubernetes uses Hive for OpenShift to determine the number of nodes in the cluster, you must change the MachineSet parameter to change the number of nodes. If you just remove or add a node without changing the MachineSet parameter, nodes are added or removed to match the current value of that parameter.
See Recommended cluster scaling practices and Manually scaling a MachineSet in the OpenShift Container Platform documentation that applies to your version. Tip: If you created the cluster by using Red Hat Advanced Cluster Management for Kubernetes, then it is an OpenShift Container Platform cluster.
If you are changing the number of nodes of an Azure Kubernetes Services cluster that you imported, see Scaling a cluster for information about scaling the cluster.
3.4. Bare metal cluster
You can change the number of nodes of a Red Hat OpenShift Container Platform cluster that was created in a bare metal environment by modifying the MachineSet parameters on the hub cluster.
Remember: Because Red Hat Advanced Cluster Management for Kubernetes uses Hive for OpenShift to determine the number of nodes in the cluster, you must change the MachineSet parameter to change the number of nodes. If you just remove or add a node without changing the MachineSet parameter, nodes are added or removed to match the current value of that parameter.
See Recommended cluster scaling practices and Manually scaling a MachineSet in the OpenShift Container Platform documentation that applies to your version. Tip: If you created the cluster by using Red Hat Advanced Cluster Management for Kubernetes, then it is an OpenShift Container Platform cluster.
If you are changing the number of nodes of a bare metal cluster that you imported, see Installing a cluster on bare metal with network customizations for information about scaling the cluster.
Note: Bare metal clusters are only supported when the hub cluster is OpenShift Container Platform version 4.5 or later.
3.5. IBM Kubernetes Service
If you are changing the number of nodes of an IBM Kubernetes Service cluster that you imported, see Adding worker nodes and zones to clusters for information about scaling the cluster.
Remember: Because Red Hat Advanced Cluster Management for Kubernetes uses Hive for OpenShift to determine the number of nodes in the cluster, you must change the MachineSet parameter to change the number of nodes. If you just remove or add a node without changing the MachineSet parameter, nodes are added or removed to match the current value of that parameter.
Chapter 4. Release images
When you create a cluster on a provider by using Red Hat Advanced Cluster Management for Kubernetes, you must specify a release image to use for the new cluster. The release image specifies which version of Red Hat OpenShift Container Platform is used to build the cluster.
The files that reference the release images are yaml files that are maintained in the acm-hive-openshift-releases GitHub repository. Red Hat Advanced Cluster Management for Kubernetes uses those files to create the list of the available release images in the console. The repository contains the clusterImageSets directory and the subscription directory, which are the directories that you use when working with the release images.
The clusterImageSets directory contains the following directories:
- Fast - Contains files that reference the latest two versions of the release images for each OpenShift Container Platform version that is supported
- Releases - Contains files that reference all of the release images for each OpenShift Container Platform version that is supported. Note: These releases have not all been tested and determined to be stable.
- Stable - Contains files that reference the latest two stable versions of the release images for each OpenShift Container Platform version that is supported. The release images in this folder are tested and verified.
The subscription directory contains files that specify where the list of release images is pulled from. The default release images for Red Hat Advanced Cluster Management are provided in a Quay.io directory. They are referenced by the files in the acm-hive-openshift-releases GitHub repository.
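For reference, each file in the clusterImageSets directories defines one ClusterImageSet resource. The following is a minimal sketch of such a file; the name, labels, and release image tag are illustrative values only, not a definitive listing from the repository:
  apiVersion: hive.openshift.io/v1
  kind: ClusterImageSet
  metadata:
    name: img4.5.2-x86-64
    labels:
      channel: stable
      visible: "true"
  spec:
    releaseImage: quay.io/openshift-release-dev/ocp-release:4.5.2-x86_64
The releaseImage value is what the console presents as the selectable OpenShift Container Platform version.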
4.1. Synchronizing available release images
The release images are updated frequently, so you might want to synchronize the list of release images to ensure that you can select the latest available versions. The release images are available in the acm-hive-openshift-releases GitHub repository.
There are three levels of stability of the release images:
| Category | Description |
| stable | Fully tested images that are confirmed to install and build clusters correctly. |
| fast | Partially tested, but likely less stable than a stable version. |
| candidate | Not tested, but the most current image. Might have some bugs. |
Complete the following steps to refresh the list:
- Clone the acm-hive-openshift-releases GitHub repository.
- Connect to your Red Hat Advanced Cluster Management for Kubernetes hub cluster by entering the following command:
  oc apply -k subscription/
  After about one minute, the latest two fast entries are available.
- To synchronize your list of stable release images after you have cloned the acm-hive-openshift-releases GitHub repository, enter the following command to update the stable images:
  make subscribe-stable
  Note: You can only run this make command when you are using the Linux or MacOS operating system. If you are using the Windows operating system, enter the following command to update the stable images instead:
  oc apply -f subscription/subscription-stable.yaml
  After running the command, the list of available stable release images updates with the currently available images in about one minute.
- To synchronize and display the fast release images, enter the following command:
  make subscribe-fast
  Note: You can only run this make command when you are using the Linux or MacOS operating system. If you are using the Windows operating system, enter the following command to update the fast images instead:
  oc apply -f subscription/subscription-fast.yaml
  After running the command, the list of available stable and fast release images updates with the currently available images in about one minute.
- To synchronize and display the candidate release images, enter the following command:
  make subscribe-candidate
  Note: You can only run this make command when you are using the Linux or MacOS operating system. If you are using the Windows operating system, enter the following command to update the candidate images instead:
  oc apply -f subscription/subscription-candidate.yaml
  After running the command, the list of available stable, fast, and candidate release images updates with the currently available images in about one minute.
- View the list of currently available release images in the Red Hat Advanced Cluster Management console when you are creating a cluster.
You can unsubscribe from any of these channels to stop viewing the updates by entering a command in the following format:
oc delete -f subscription/subscription-stable
4.2. Maintaining a custom list of release images when connected

You might want to ensure that you use the same release image for all of your clusters. To simplify, you can create your own custom list of release images that are available when creating a cluster. Complete the following steps to manage your available release images:
- Fork the acm-hive-openshift-releases GitHub repository.
- Update the ./subscription/channel.yaml file by changing the spec: pathname to access your GitHub name for your forked repository, instead of open-cluster-management. This step specifies where the hub cluster retrieves the release images. Your updated content should look similar to the following example:
  spec:
    type: GitHub
    pathname: https://github.com/<forked_content>/acm-hive-openshift-releases.git
  Replace forked_content with the path to your forked repository.
- Add the yaml files for the images that you want available when you create a cluster by using the Red Hat Advanced Cluster Management for Kubernetes console to the ./clusterImageSets/stable/ or ./clusterImageSets/fast/ directory. Tip: You can retrieve the available yaml files from the main repository by merging changes into your forked repository.
- Commit and merge your changes to your forked repository.
- To synchronize your list of stable release images after you have cloned the acm-hive-openshift-releases repository, enter the following command to update the stable images:
  make subscribe-stable
  Note: You can only run this make command when you are using the Linux or MacOS operating system. If you are using the Windows operating system, enter the following commands instead:
  oc apply -k subscription/
  oc delete -f subscription/subscription-fast.yaml
  oc apply -f subscription/subscription-stable.yaml
  After running these commands, the list of available stable release images updates with the currently available images in about one minute.
- By default, only the stable images are listed. To synchronize and display the fast release images, enter the following command:
  make subscribe-fast
  Note: You can only run this make command when you are using the Linux or MacOS operating system. If you are using the Windows operating system, enter the following commands instead:
  oc apply -k subscription/
  oc apply -f subscription/subscription-fast.yaml
  After running these commands, the list of available fast release images updates with the currently available images in about one minute.
- By default, Red Hat Advanced Cluster Management pre-loads a few ClusterImageSets. Use the following commands to list what is available and remove the defaults, if desired:
  oc get clusterImageSets
  oc delete clusterImageSet <clusterImageSet_NAME>
- View the list of currently available release images in the Red Hat Advanced Cluster Management console when you are creating a cluster.
4.3. Maintaining a custom list of release images while disconnected

In some cases, you need to maintain a custom list of release images when the hub cluster has no Internet connection. You can create your own custom list of release images that are available when creating a cluster. Complete the following steps to manage your available release images while disconnected:
- While you are on a connected system, navigate to the acm-hive-openshift-releases GitHub repository.
- Copy the clusterImageSets directory to a system that can access the disconnected Red Hat Advanced Cluster Management for Kubernetes hub cluster.
- Add the yaml files for the images that you want available when you create a cluster by using the Red Hat Advanced Cluster Management for Kubernetes console by manually adding the clusterImageSet yaml files.
- Create the clusterImageSets by entering the following command for each file:
  oc create -f <clusterImageSet_FILE>
  After running this command for each resource that you want to add, the list of available release images will be available.
- Alternatively, you can paste the image URL directly in the create cluster console in Red Hat Advanced Cluster Management. This creates new clusterImageSets if they do not exist.
- View the list of currently available release images in the Red Hat Advanced Cluster Management console when you are creating a cluster.
Chapter 5. Creating and modifying bare metal assets
Important: The bare metal cluster function is a technology preview, and should not be used in production environments.
Bare metal assets are virtual or physical servers that are configured to run your cloud operations. Red Hat Advanced Cluster Management for Kubernetes connects to a bare metal asset that your administrator creates, and can create clusters on it.
You must create a bare metal asset in Red Hat Advanced Cluster Management for Kubernetes to create a cluster on it. Use the following procedure to create a bare metal asset that can host a cluster that is managed by Red Hat Advanced Cluster Management for Kubernetes.
The bare metal features are only provided as a technology preview. The bare metal options are hidden by feature flags, by default. To view the bare metal options, you must enable the feature flags by completing the instructions in the Prerequisites section.
5.1. Prerequisites
You need the following prerequisites before creating a bare metal asset:
- A deployed Red Hat Advanced Cluster Management for Kubernetes hub cluster on OpenShift Container Platform version 4.5, or later.
- Access for your Red Hat Advanced Cluster Management for Kubernetes hub cluster to connect to the bare metal asset.
A configured bare metal asset, and log in credentials with the required permissions to log in and manage it. Note: Login credentials for your bare metal asset include the following items for the asset that are provided by your administrator:
- user name
- password
- Baseboard Management Controller Address
- boot NIC MAC address
Bare metal feature flags that are enabled to view the bare metal options. The bare metal options are hidden by feature flags by default. Complete the following steps to enable the feature flags:
- Start the Red Hat OpenShift Container Platform command line interface.
- Set the featureFlags_baremetal setting to true for the console-header container by entering the following command:
  oc patch deploy console-header -n <namespace> -p '{"spec":{"template":{"spec":{"containers":[{"name":"console-header","env": [{"name": "featureFlags_baremetal","value":"true"}]}]}}}}'
  Replace <namespace> with your Red Hat Advanced Cluster Management project namespace. After the update, the console-header deployment includes the featureFlags_baremetal environment variable with a value of true.
- Set the featureFlags_baremetal value to true for the hcm-ui container:
  oc patch -n <namespace> $(oc get deploy -o name | grep consoleui) -p '{"spec":{"template":{"spec":{"containers":[{"name":"hcm-ui","env": [{"name": "featureFlags_baremetal","value":"true"}]}]}}}}'
  Replace <namespace> with your Red Hat Advanced Cluster Management project namespace. After the update, the consoleui deployment includes the featureFlags_baremetal environment variable with a value of true.
- Make sure the console-chart-...-consoleui... and console-header-... pods are running:
  oc -n open-cluster-management get pods
- When the pods are running again, log out of the Red Hat Advanced Cluster Management for Kubernetes console and log back in. The bare metal options are now included in the console.
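To confirm that the flags were applied, a check similar to the following generic sketch works; replace <namespace> with your Red Hat Advanced Cluster Management project namespace, and expect both commands to show the featureFlags_baremetal variable with a value of true:
  oc -n <namespace> get deploy console-header -o yaml | grep -A 1 featureFlags_baremetal
  oc -n <namespace> get $(oc -n <namespace> get deploy -o name | grep consoleui) -o yaml | grep -A 1 featureFlags_baremetal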
5.2. Creating a bare metal asset with the console
To create a bare metal asset using the Red Hat Advanced Cluster Management for Kubernetes console, complete the following steps:
- From the navigation menu, navigate to Automate infrastructure > Bare metal assets.
- On the Bare metal assets page, click Create bare metal asset.
- Enter a name for your asset that identifies it when you create a cluster.
  Tip: You can view the yaml content updates as you enter the information in the console by setting the YAML switch to ON.
- Enter the namespace where you want to create the bare metal asset. Users who have access to this namespace can associate this asset to the cluster when creating a cluster.
- Enter the Baseboard Management Controller address. This is the controller that enables communication with the host. The following protocols are supported:
- IPMI, see IPMI 2.0 Specification for more information.
- iDRAC, see Support for Integrated Dell Remote Access Controller 9 (iDRAC9) for more information.
- iRMC, see Data Sheet: FUJITSU Software ServerView Suite integrated Remote Management Controller - iRMC S5 for more information.
- Redfish, see Redfish specification for more information.
- Enter the user name and password for the bare metal asset.
- Add the boot NIC MAC address for the bare metal asset. This is the MAC address of the host’s network-connected NIC that is used to provision the host on the bare metal asset.
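If you prefer to script asset creation instead of using the console, the following is a minimal sketch of the kind of resources involved, assuming the inventory.open-cluster-management.io/v1alpha1 BareMetalAsset API; every name, address, and credential value is a placeholder:
  apiVersion: v1
  kind: Secret
  metadata:
    name: <asset_name>-bmc-secret
    namespace: <asset_namespace>
  type: Opaque
  stringData:
    username: <bmc_user_name>
    password: <bmc_password>
  ---
  apiVersion: inventory.open-cluster-management.io/v1alpha1
  kind: BareMetalAsset
  metadata:
    name: <asset_name>
    namespace: <asset_namespace>
  spec:
    bmc:
      address: ipmi://<bmc_host>:<bmc_port>
      credentialsName: <asset_name>-bmc-secret
    bootMACAddress: <boot_nic_mac_address>
You can create both resources with oc apply -f <file_name>; the console procedure above remains the documented path.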
You can continue with Creating a cluster on bare metal.
5.3. Modifying a bare metal asset
If you need to modify the settings for a bare metal asset, complete the following steps:
- In the Red Hat Advanced Cluster Management for Kubernetes console navigation, select: Automate infrastructure > Bare metal assets.
- Select the options menu for the asset that you want to modify in the table.
- Select Modify.
5.4. Removing a bare metal asset
When a bare metal asset is no longer used for any of the clusters, you can remove it from the list of available bare metal assets. Removing unused assets both simplifies your list of available assets, and prevents the accidental selection of that asset.
To remove a bare metal asset, complete the following steps:
- In the Red Hat Advanced Cluster Management for Kubernetes console navigation, select: Automate infrastructure > Bare metal assets.
- Select the options menu for the asset that you want to remove in the table.
- Select Delete.
Chapter 6. Creating a provider connection
A provider connection is required to create a Red Hat OpenShift Container Platform cluster on a cloud service provider with Red Hat Advanced Cluster Management for Kubernetes.
The provider connection stores the access credentials and configuration information for a provider. Each provider account requires its own provider connection, as does each domain on a single provider.
The following files detail the information that is required for creating a connection document for each supported provider:
6.1. Creating a provider connection for Amazon Web Services

You need a provider connection to use the Red Hat Advanced Cluster Management for Kubernetes console to deploy and manage an OpenShift cluster on Amazon Web Services (AWS).
This procedure must be done before you can create a cluster with Red Hat Advanced Cluster Management for Kubernetes.
6.1.1. Prerequisites
You must have the following prerequisites before creating a provider connection:
- A deployed Red Hat Advanced Cluster Management for Kubernetes hub cluster
- Internet access for your Red Hat Advanced Cluster Management for Kubernetes hub cluster so it can create the Kubernetes cluster on Amazon Web Services
- Amazon Web Services (AWS) login credentials, which include access key ID and secret access key. See Understanding and getting your security credentials.
- Account permissions that allow installing clusters on AWS. See Configuring an AWS account for instructions on how to configure.
To create a provider connection from the Red Hat Advanced Cluster Management for Kubernetes console, complete the following steps:
- From the navigation menu, navigate to Automate infrastructure > Clusters.
- On the Clusters page, select the Provider connections tab. Existing provider connections are displayed.
- Select Add a connection.
- Select Amazon Web Services as your provider.
- Add a name for your provider connection.
- Select a namespace for your provider connection from the list.
  Tip: Create a namespace specifically to host your provider connections, both for convenience and added security.
- You can optionally add a Base DNS domain for your provider connection. If you add the base DNS domain to the provider connection, it is automatically populated in the correct field when you create a cluster with this provider connection.
- Add your AWS Access Key ID for your Amazon Web Services account. Log in to AWS to find the ID.
- Add your AWS Secret Access Key ID.
- Enter your Red Hat OpenShift Pull Secret. You can download your pull secret from Pull secret.
- Add your SSH Private Key and SSH Public Key, which allows you to connect to the cluster. You can use an existing key pair, or create a new one with a key generation program (see the example command after this procedure). See Generating an SSH private key and adding it to the agent for more information about how to generate a key.
- Click Create. When you create the provider connection, it is added to the list of provider connections.
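If you do not already have an SSH key pair for the cluster, a generic generation command similar to the following creates one; the key type and file path are your choice:
  ssh-keygen -t rsa -b 4096 -N '' -f <path>/<file_name>
The private key is written to <path>/<file_name> and the public key to <path>/<file_name>.pub; paste their contents into the SSH Private Key and SSH Public Key fields.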
You can create a cluster that uses this provider connection by completing the steps in Creating a cluster on Amazon Web Services.
6.1.3. Deleting your provider connection
When you are no longer managing a cluster that is using a provider connection, delete the provider connection to protect the information in the provider connection.
- From the navigation menu, navigate to Automate infrastructure > Clusters.
- Select Provider connections.
- Select the options menu beside the provider connection that you want to delete.
- Select Delete connection.
6.2. Creating a provider connection for Microsoft Azure
You need a provider connection to use the Red Hat Advanced Cluster Management for Kubernetes console to create and manage a Red Hat OpenShift Container Platform cluster on Microsoft Azure.
This procedure is a prerequisite for creating a cluster with Red Hat Advanced Cluster Management for Kubernetes.
6.2.1. Prerequisites
You must have the following prerequisites before creating a provider connection:
- A deployed Red Hat Advanced Cluster Management for Kubernetes hub cluster
- Internet access for your Red Hat Advanced Cluster Management for Kubernetes hub cluster so that it can create the Kubernetes cluster on Azure
- Azure login credentials, which include your Base Domain Resource Group and Azure Service Principal JSON. See azure.microsoft.com.
- Account permissions that allow installing clusters on Azure. See How to configure Cloud Services and Configuring an Azure account for more information.
To create a provider connection from the Red Hat Advanced Cluster Management for Kubernetes console, complete the following steps:
- From the navigation menu, navigate to Automate infrastructure > Clusters.
- On the Clusters page, select the Provider connections tab. Existing provider connections are displayed.
- Select Add a connection.
- Select Microsoft Azure as your provider.
- Add a name for your provider connection.
- Select a namespace for your provider connection from the list.
  Tip: You can create a namespace specifically to host your provider connections, both for convenience and added security.
- You can optionally add a Base DNS domain for your provider connection. If you add the base DNS domain to the provider connection, it is automatically populated in the correct field when you create a cluster with this provider connection.
- Add your Base Domain Resource Group Name for your Azure account. This entry is the resource name that you created with your Azure account. You can find your Base Domain Resource Group Name by selecting Home > DNS Zones in the Azure interface. Your Base Domain Resource Group name is in the Resource Group column of the entry that contains the Base DNS domain that applies to your account.
- Add your Client ID. This value is generated as the appId property when you create a service principal with the following command:
  az ad sp create-for-rbac --role Contributor --name <service_principal>
  Replace service_principal with the name of your service principal.
- Add your Client Secret. This value is generated as the password property when you create a service principal with the following command:
  az ad sp create-for-rbac --role Contributor --name <service_principal>
  Replace service_principal with the name of your service principal.
- Add your Subscription ID. This value is the id property in the output of the following command:
  az account show
- Add your Tenant ID. This value is the tenantId property in the output of the following command (see the query example after this procedure):
  az account show
az account showCopy to Clipboard Copied! Toggle word wrap Toggle overflow - Enter your Red Hat OpenShift Pull Secret. You can download your pull secret from Pull secret.
- Add your SSH Private Key and SSH Public Key to use to connect to the cluster. You can use an existing key pair, or create a new pair using a key generation program. See Generating an SSH private key and adding it to the agent for more information about how to generate a key.
- Click Create. When you create the provider connection, it is added to the list of provider connections.
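If you want to extract the Subscription ID and Tenant ID values directly rather than reading the full JSON output, the az CLI supports JMESPath queries; the following sketch assumes that you are already logged in with az login:
  az account show --query id -o tsv
  az account show --query tenantId -o tsv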
You can create a cluster that uses this provider connection by completing the steps in Creating a cluster on Microsoft Azure.
6.2.3. Deleting your provider connection
When you are no longer managing a cluster that is using a provider connection, delete the provider connection to protect the information in the provider connection.
- From the navigation menu, navigate to Automate infrastructure > Clusters.
- Select Provider connections.
- Select the options menu for the provider connection that you want to delete.
- Select Delete connection.
6.3. Creating a provider connection for Google Cloud Platform

You need a provider connection to use the Red Hat Advanced Cluster Management for Kubernetes console to create and manage a Red Hat OpenShift Container Platform cluster on Google Cloud Platform (GCP).
This procedure is a prerequisite for creating a cluster with Red Hat Advanced Cluster Management for Kubernetes.
6.3.1. Prerequisites
You must have the following prerequisites before creating a provider connection:
- A deployed Red Hat Advanced Cluster Management for Kubernetes hub cluster
- Internet access for your Red Hat Advanced Cluster Management for Kubernetes hub cluster so it can create the Kubernetes cluster on GCP
- GCP login credentials, which include your Google Cloud Platform Project ID and Google Cloud Platform service account JSON key. See Creating and managing projects.
- Account permissions that allow installing clusters on GCP. See Configuring a GCP project for instructions on how to configure an account.
To create a provider connection from the Red Hat Advanced Cluster Management for Kubernetes console, complete the following steps:
- From the navigation menu, navigate to Automate infrastructure > Clusters.
- On the Clusters page, select the Provider connections tab. Existing provider connections are displayed.
- Select Add a connection.
- Select Google Cloud Platform as your provider.
- Add a name for your provider connection.
- Select a namespace for your provider connection from the list.
  Tip: Create a namespace specifically to host your provider connections, for both convenience and security.
- You can optionally add a Base DNS domain for your provider connection. If you add the base DNS domain to the provider connection, it is automatically populated in the correct field when you create a cluster with this provider connection.
- Add your Google Cloud Platform Project ID for your GCP account. Log in to GCP to retrieve your settings.
- Add your Google Cloud Platform service account JSON key. Complete the following steps to create one with the correct permissions (a gcloud CLI alternative is sketched after this procedure):
- In the GCP main menu, select IAM & Admin and start the Service Accounts applet.
- Select Create Service Account.
- Provide the Name, Service account ID, and Description of your service account.
- Select Create to create the service account.
- Select a role of Owner, and click Continue.
- Click Create Key.
- Select JSON, and click Create.
- Save the resulting file to your computer.
- Provide the contents for the Google Cloud Platform service account JSON key.
- Enter your Red Hat OpenShift Pull Secret. You can download your pull secret from Pull secret.
- Add your SSH Private Key and SSH Public Key so you can access the cluster. You can use an existing key pair, or create a new pair using a key generation program. See Generating an SSH private key and adding it to the agent for more information about how to generate a key.
- Click Create. When you create the provider connection, it is added to the list of provider connections.
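As an alternative to the console steps for the service account, you can create the account and its JSON key with the gcloud CLI. The following is a sketch with placeholder names, assuming the same Owner role that the console procedure assigns:
  gcloud iam service-accounts create <service_account_name> --project <project_id>
  gcloud projects add-iam-policy-binding <project_id> --member "serviceAccount:<service_account_name>@<project_id>.iam.gserviceaccount.com" --role "roles/owner"
  gcloud iam service-accounts keys create key.json --iam-account <service_account_name>@<project_id>.iam.gserviceaccount.com
The contents of the resulting key.json file are what you paste into the Google Cloud Platform service account JSON key field.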
You can use this connection when you create a cluster by completing the steps in Creating a cluster on Google Cloud Platform.
6.3.3. Deleting your provider connection
When you are no longer managing a cluster that is using a provider connection, delete the provider connection to protect the information in the provider connection.
- From the navigation menu, navigate to Automate infrastructure > Clusters.
- Select Provider connections.
- Select the options menu beside the provider connection that you want to delete.
- Select Delete connection.
6.4. Creating a provider connection for bare metal
Important: The bare metal cluster function is a technology preview, and should not be used in production environments.
You need a provider connection to use the Red Hat Advanced Cluster Management for Kubernetes console to deploy and manage a Red Hat OpenShift Container Platform cluster in a bare metal environment.
The options for bare metal in the console are for technology preview only, and are hidden by feature flags by default. See the instructions for enabling the feature flags in the Prerequisites section.
6.4.1. Prerequisites
You need the following prerequisites before creating a provider connection:
- A Red Hat Advanced Cluster Management for Kubernetes hub cluster that is deployed. When managing bare metal clusters, you must have the hub cluster installed on Red Hat OpenShift Container Platform version 4.5, or later.
- Internet access for your Red Hat Advanced Cluster Management for Kubernetes hub cluster so it can create the Kubernetes cluster on your bare metal server
- Your bare metal server login credentials, which include the libvirt URI, SSH Private Key, and a list of SSH known hosts; see Generating an SSH private key and adding it to the agent
- Account permissions that allow installing clusters on the bare metal infrastructure
Bare metal feature flags that are enabled to view the bare metal options. The bare metal options are hidden by feature flags by default. Complete the following steps to enable the feature flags:
- Start the Red Hat OpenShift Container Platform command line interface.
- Set the featureFlags_baremetal setting to true for the console-header container by entering the following command:
  oc patch deploy console-header -n <namespace> -p '{"spec":{"template":{"spec":{"containers":[{"name":"console-header","env": [{"name": "featureFlags_baremetal","value":"true"}]}]}}}}'
  Replace <namespace> with your Red Hat Advanced Cluster Management project namespace. After the update, the console-header deployment includes the featureFlags_baremetal environment variable with a value of true.
- Set the featureFlags_baremetal value to true for the hcm-ui container:
  oc patch -n <namespace> $(oc get deploy -o name | grep consoleui) -p '{"spec":{"template":{"spec":{"containers":[{"name":"hcm-ui","env": [{"name": "featureFlags_baremetal","value":"true"}]}]}}}}'
  Replace <namespace> with your Red Hat Advanced Cluster Management project namespace. After the update, the consoleui deployment includes the featureFlags_baremetal environment variable with a value of true.
- Make sure the console-chart-...-consoleui... and console-header-... pods are running:
  oc -n open-cluster-management get pods
- When the pods are running again, log out of the Red Hat Advanced Cluster Management for Kubernetes console and log back in. The bare metal options are now included in the console.
To create a provider connection from the Red Hat Advanced Cluster Management for Kubernetes console, complete the following steps:
- From the navigation menu, navigate to Automate infrastructure > Clusters.
- On the Clusters page, select the Provider connections tab. Existing provider connections are displayed.
- Select Add connection.
- Select Bare metal as your provider.
- Add a name for your provider connection.
Select a namespace for your provider connection from the list.
Tip: Create a namespace specifically to host your provider connections, both for convenience and added security.
- You can optionally add a Base DNS domain for your provider connection. If you add the base DNS domain to the provider connection, it is automatically populated in the correct field when you create a cluster with this provider connection.
- Add your libvirt URI. See Connection URIs for more information, and see the example URI after this procedure.
- Enter your Red Hat OpenShift Pull Secret. You can download your pull secret from Pull secret.
- Add your SSH Private Key and your SSH Public Key so you can access the cluster. You can use an existing key, or use a key generation program to create a new one. See Generating an SSH private key and adding it to the agent for more information about how to generate a key.
- Add a list of your SSH known hosts.
For disconnected installations only: Complete the fields in the Configuration for disconnected installation subsection with the required information:
- Image Registry Mirror: This optional value contains the disconnected registry path. The path contains the hostname, port, and repository path to all of the installation images for disconnected installations. Example: repository.com:5000/openshift/ocp-release.
- Bootstrap OS Image: This value contains the URL to the image to use for the bootstrap machine.
- Cluster OS Image: This value contains the URL to the image to use for Red Hat OpenShift Container Platform cluster machines.
- Additional Trust Bundle: This value provides the contents of the certificate file that is required to access the mirror registry.
- Click Create. When you create the provider connection, it is added to the list of provider connections.
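For reference, a libvirt URI for a remote provisioning host typically uses the qemu+ssh transport; the user and host name in the following sketch are placeholders:
  qemu+ssh://<user>@<provisioning_host>/system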
You can create a cluster that uses this provider connection by completing the steps in Creating a cluster on bare metal.
6.4.3. Deleting your provider connection
When you are no longer managing a cluster that is using a provider connection, delete the provider connection to protect the information in the provider connection.
- From the navigation menu, navigate to Automate infrastructure > Clusters.
- Select Provider connections.
- Select the options menu beside the provider connection that you want to delete.
- Select Delete connection.
Chapter 7. Creating a cluster

Learn how to create Red Hat OpenShift Container Platform clusters across cloud providers with Red Hat Advanced Cluster Management for Kubernetes.
- Creating a cluster on Amazon Web Services
- Creating a cluster on Google Cloud Platform
- Creating a cluster on Microsoft Azure
- Creating a cluster on bare metal (Requires Red Hat OpenShift Container Platform version 4.4, or later)
7.1. Creating a cluster on Amazon Web Services
You can use the Red Hat Advanced Cluster Management for Kubernetes console to create a Red Hat OpenShift Container Platform cluster on Amazon Web Services (AWS).
7.1.1. Prerequisites
You must have the following prerequisites before creating a cluster on AWS:
- A deployed Red Hat Advanced Cluster Management for Kubernetes hub cluster
- Internet access for your Red Hat Advanced Cluster Management for Kubernetes hub cluster so it can create the Kubernetes cluster on Amazon Web Services
- AWS provider connection. See Creating a provider connection for Amazon Web Services for more information.
- A configured domain in AWS. See Configuring an AWS account for instructions on how to configure a domain.
- Amazon Web Services (AWS) login credentials, which include user name, password, access key ID, and secret access key. See Understanding and Getting Your Security Credentials.
- A Red Hat OpenShift image pull secret. See Using image pull secrets.
Note: If you change your cloud provider access key, you must manually update the provisioned cluster access key. For more information, see the known issue, Automatic secret updates for provisioned clusters is not supported.
To create clusters from the Red Hat Advanced Cluster Management for Kubernetes console, complete the following steps:
- From the navigation menu, navigate to Automate infrastructure > Clusters.
- On the Clusters page, click Add Cluster.
- Select Create a cluster.
  Note: This procedure is for creating a cluster. If you have an existing cluster that you want to import, see Importing a target managed cluster to the hub cluster for those steps.
- Enter a name for your cluster. This name is used in the hostname of the cluster.
  Tip: You can view the yaml content updates as you enter the information in the console by setting the YAML switch to ON.
- Select Amazon Web Services for the infrastructure platform.
- Specify a Release image that you want to use for the cluster. This identifies the version of the Red Hat OpenShift Container Platform image that is used to create the cluster. If the version that you want to use is available, you can select the image from the list of images. If the image that you want to use is not a standard image, you can enter the url to the image that you want to use. See Release images for more information about release images.
- Select your provider connection from the available connections on the list. If you do not have one configured, or want to configure a new one, select Add connection. See Creating a provider connection for Amazon Web Services for more information about creating a provider connection.
- Enter the base domain information that you configured for your AWS account. If there is already a base domain associated with the selected provider connection, that value is populated in that field. You can change the value by overwriting it. See Configuring an AWS account for more information. This name is used in the hostname of the cluster.
- Add the Labels that you want to associate with your cluster. These labels help to identify the cluster and limit search results.
Configure the Node pools for your cluster.
The node pools define the location and size of the nodes that are used for your cluster.
The Region specifies where the nodes are located geographically. A closer region might provide faster performance, but a more distant region might be more distributed.
- Master pool: There are three Master nodes that are created for your cluster in the master pool. The master nodes share the management of the cluster activity. You can select multiple zones within the region for a more distributed group of master nodes. You can change the type and size of your instance after it is created, but you can also specify it in this section. The default values are m4.xlarge - 4 vCPU, 16 GiB RAM - General Purpose with 500 GiB of root storage.
- Worker pools: You can create one or more worker nodes in a worker pool to run the container workloads for the cluster. They can be in a single worker pool, or distributed across multiple worker pools.
- Optional: Configure the cluster networking options.
- Optional: Configure a label for the cluster.
- Click Create. You can view your cluster details after the create and import process is complete.
  Note: You do not have to run the kubectl command that is provided with the cluster details to import the cluster. When you create the cluster, it is automatically configured under the management of Red Hat Advanced Cluster Management for Kubernetes.
7.1.3. Accessing your cluster
To access a cluster that is managed by Red Hat Advanced Cluster Management for Kubernetes, complete the following steps:
- From the Red Hat Advanced Cluster Management for Kubernetes navigation menu, navigate to Automate infrastructure > Clusters.
- Select the name of the cluster that you created or want to access. The cluster details are displayed.
- Select Reveal credentials to view the user name and password for the cluster. Note these values to use when you log in to the cluster.
- Select Console URL to link to the cluster.
- Log in to the cluster by using the user ID and password that you found in step 3.
- Select the Actions menu for the cluster that you want to access.
- Select Launch to cluster.
  Tip: If you already know the login credentials, you can access the cluster by selecting the Actions menu for the cluster, and selecting Launch to cluster.
7.1.4. Removing a cluster from management
When you remove a Red Hat OpenShift Container Platform cluster from management that was created with Red Hat Advanced Cluster Management for Kubernetes, you can either detach it or destroy it.
Detaching a cluster removes it from management, but does not completely delete it. You can import it again, if you decide that you want to bring it back under management. This is only an option when the cluster is in a Ready state.
Destroying a cluster removes it from management and deletes the components of the cluster. This is permanent, and it cannot be brought back under management after deletion.
- From the navigation menu, navigate to Automate infrastructure > Clusters.
- Select the option menu beside the cluster that you want to delete.
- Select Destroy cluster or Detach cluster.
You can detach or destroy multiple clusters by selecting the check boxes of the clusters that you want to detach or destroy. Then select Detach or Destroy.
7.2. Creating a cluster on Microsoft Azure
You can use the Red Hat Advanced Cluster Management for Kubernetes console to deploy a Red Hat OpenShift Container Platform cluster on Microsoft Azure.
7.2.1. Prerequisites
You must have the following prerequisites before creating a cluster on Azure:
- A deployed Red Hat Advanced Cluster Management for Kubernetes hub cluster
- Internet access for your Red Hat Advanced Cluster Management for Kubernetes hub cluster so it can create the Kubernetes cluster on Azure
- Azure provider connection. See Creating a provider connection for Microsoft Azure for more information.
- A configured domain in Azure. See Configuring a custom domain name for an Azure cloud service for instructions on how to configure a domain.
- Azure login credentials, which include user name and password. See azure.microsoft.com.
- Azure service principals, which include clientId, clientSecret, and tenantId. See azure.microsoft.com.
- A Red Hat OpenShift image pull secret. See Using image pull secrets.
Note: If you change your cloud provider access key, you must manually update the provisioned cluster access key. For more information, see the known issue, Automatic secret updates for provisioned clusters is not supported.
To create clusters from the Red Hat Advanced Cluster Management for Kubernetes console, complete the following steps:
- From the navigation menu, navigate to Automate infrastructure > Clusters.
- On the Clusters page, click Add Cluster.
- Select Create a cluster.
  Note: This procedure is for creating a cluster. If you have an existing cluster that you want to import, see Importing a target managed cluster to the hub cluster for those steps.
- Enter a name for your cluster. This name is used in the hostname of the cluster.
  Tip: You can view the yaml content updates as you enter the information in the console by setting the YAML switch to ON.
- Select Microsoft Azure for the infrastructure platform.
- Specify a Release image that you want to use for the cluster. This identifies the version of the Red Hat OpenShift Container Platform image that is used to create the cluster. If the version that you want to use is available, you can select the image from the list of images. If the image that you want to use is not a standard image, you can enter the url to the image that you want to use. See Release images for more information about release images.
- Select your provider connection from the available connections on the list. If you do not have one configured, or want to configure a new one, select Add connection . See Creating a provider connection for Microsoft Azure for more information about creating a provider connection.
- Enter the base domain information that you configured for your Azure account. If there is already a base domain associated with the selected provider connection, that value is populated in that field. You can change the value by overwriting it. See Configuring a custom domain name for an Azure cloud service for more information. This name is used in the hostname of the cluster.
- Add the Labels that you want to associate with your cluster. These labels help to identify the cluster and limit search results.
Configure the Node pools for your cluster.
The node pools define the location and size of the nodes that are used for your cluster.
The Region specifies where the nodes are located geographically. A closer region might provide faster performance, but a more distant region might be more distributed.
- Master pool: There are three Master nodes that are created for your cluster in the master pool. The master nodes share the management of the cluster activity. You can select multiple zones within the region for a more distributed group of master nodes. You can change the type and size of your instance after it is created, but you can also specify it in this section. The default values are Standard_D2s_v3 - 2 vCPU, 8 GiB RAM - General Purpose with 512 GiB of root storage.
- Worker pools: You can create one or more worker nodes in a worker pool to run the container workloads for the cluster. They can be in a single worker pool, or distributed across multiple worker pools.
- Optional: Configure the cluster networking options.
- Optional: Configure a label for the cluster.
- Click Create. You can view your cluster details after the create and import process is complete.
  Note: You do not have to run the kubectl command that is provided with the cluster details to import the cluster. When you create the cluster, it is automatically configured under the management of Red Hat Advanced Cluster Management for Kubernetes.
7.2.3. Accessing your cluster
To access a cluster that is managed by Red Hat Advanced Cluster Management for Kubernetes, complete the following steps:
- From the Red Hat Advanced Cluster Management for Kubernetes navigation menu, navigate to Automate infrastructure > Clusters.
- Select the name of the cluster that you created or want to access. The cluster details are displayed.
- Select Reveal credentials to view the user name and password for the cluster. Note these values to use when you log in to the cluster.
- Select Console URL to link to the cluster.
- Log in to the cluster by using the user ID and password that you found in step 3.
- Select the Actions menu for the cluster that you want to access.
- Select Launch to cluster.
  Tip: If you already know the login credentials, you can access the cluster by selecting the Actions menu for the cluster, and selecting Launch to cluster.
7.2.4. Removing a cluster from management
When you remove a Red Hat OpenShift Container Platform cluster from management that was created with Red Hat Advanced Cluster Management for Kubernetes, you can either Detach it or Destroy it.
Detaching a cluster removes it from management, but does not completely delete it. You can import it again, if you decide that you want to bring it back under management. This is only an option when the cluster is in a Ready state.
Destroying a cluster removes it from management and deletes the components of the cluster. This is permanent, and it cannot be brought back under management after deletion.
- From the navigation menu, navigate to Automate infrastructure > Clusters.
- Select the option menu beside the cluster that you want to delete.
- Select Destroy cluster or Detach cluster.
  Tip: You can detach or destroy multiple clusters by selecting the check boxes of the clusters that you want to detach or destroy. Then select Detach or Destroy.
7.3. Creating a cluster on Google Cloud Platform
Follow the procedure to create a Red Hat OpenShift Container Platform cluster on Google Cloud Platform (GCP). For more information about Google Cloud Platform, see Google Cloud Platform.
7.3.1. Prerequisites
You must have the following prerequisites before creating a cluster on GCP:
- A deployed Red Hat Advanced Cluster Management for Kubernetes hub cluster
- Internet access for your Red Hat Advanced Cluster Management for Kubernetes hub cluster so it can create the Kubernetes cluster on GCP
- GCP provider connection. See Creating a provider connection for Google Cloud Platform for more information.
- A configured domain in GCP. See Setting up a custom domain for instructions on how to configure a domain.
- GCP login credentials, which include user name and password.
- A Red Hat OpenShift image pull secret. See Using image pull secrets.
Note: If you change your cloud provider access key, you must manually update the provisioned cluster access key. For more information, see the known issue, Automatic secret updates for provisioned clusters is not supported.
To create clusters from the Red Hat Advanced Cluster Management for Kubernetes console, complete the following steps:
- From the navigation menu, navigate to Automate infrastructure > Clusters.
- On the Clusters page, click Add Cluster.
- Select Create a cluster.
  Note: This procedure is for creating a cluster. If you have an existing cluster that you want to import, see Importing a target managed cluster to the hub cluster for those steps.
- Enter a name for your cluster. There are some restrictions that apply to naming your GCP cluster. These restrictions include not beginning the name with goog or containing a group of letters and numbers that resemble google anywhere in the name. See Bucket naming guidelines for the complete list of restrictions. This name is used in the hostname of the cluster.
  Tip: You can view the yaml content updates as you enter the information in the console by setting the YAML switch to ON.
- Select Google Cloud for the infrastructure platform.
- Specify a Release image that you want to use for the cluster. This identifies the version of the Red Hat OpenShift Container Platform image that is used to create the cluster. If the version that you want to use is available, you can select the image from the list of images. If the image that you want to use is not a standard image, you can enter the url to the image that you want to use. See Release images for more information about release images.
- Select your provider connection from the available connections on the list. If you do not have one configured, or want to configure a new one, select Add connection. See Creating a provider connection for Google Cloud Platform for more information about creating a provider connection.
- Enter the base domain information that you configured for your Google Cloud Platform account. If there is already a base domain associated with the selected provider connection, that value is populated in that field. You can change the value by overwriting it. See Setting up a custom domain for more information. This name is used in the hostname of the cluster.
- Add the Labels that you want to associate with your cluster. These labels help to identify the cluster and limit search results.
Configure the Node pools for your cluster.
The node pools define the location and size of the nodes that are used for your cluster.
The Region specifies where the nodes are located geographically. A closer region might provide faster performance, but a more distant region might be more distributed.
- Master pool: There are three Master nodes that are created for your cluster in the master pool. The master nodes share the management of the cluster activity. You can select multiple zones within the region for a more distributed group of master nodes. You can change the type and size of your instance after it is created, but you can also specify it in this section. The default value is n1-standard-1 (1 vCPU, General Purpose) with 500 GiB of root storage.
- Worker pools: You can create one or more worker nodes in a worker pool to run the container workloads for the cluster. They can be in a single worker pool, or distributed across multiple worker pools.
- Optional: Configure the cluster networking options.
- Optional: Configure a label for the cluster.
- Click Create.
You can view your cluster details after the create and import process is complete.
Note: You do not have to run the kubectl command that is provided with the cluster details to import the cluster. When you create the cluster, it is automatically configured under the management of Red Hat Advanced Cluster Management for Kubernetes.
7.3.3. Accessing your cluster
To access a cluster that is managed by Red Hat Advanced Cluster Management for Kubernetes, complete the following steps:
- From the Red Hat Advanced Cluster Management for Kubernetes navigation menu, navigate to Automate infrastructure > Clusters.
- Select the name of the cluster that you created or want to access. The cluster details are displayed.
- Select Reveal credentials to view the user name and password for the cluster. Note these values to use when you log in to the cluster.
- Select Console URL to link to the cluster.
- Log in to the cluster by using the user ID and password that you found in step 3.
- Select the Actions menu for the cluster that you want to access.
Select Launch to cluster.
Tip: If you already know the login credentials, you can access the cluster by selecting the Actions menu for the cluster and selecting Launch to cluster.
7.3.4. Removing a cluster from management
When you remove a Red Hat OpenShift Container Platform cluster that was created with Red Hat Advanced Cluster Management for Kubernetes from management, you can either detach the cluster or destroy it.
Detaching a cluster removes it from management, but does not completely delete it. You can import it again, if you decide that you want to bring it back under management. This is only an option when the cluster is in a Ready state.
Destroying a cluster removes it from management and deletes the components of the cluster. This is permanent, and it cannot be brought back under management after deletion.
- From the navigation menu, navigate to Automate infrastructure > Clusters.
- Select the option menu beside the cluster that you want to delete.
Select Destroy cluster or Detach cluster.
Tip: You can detach or destroy multiple clusters by selecting the check boxes of the clusters that you want to detach or destroy. Then select Detach or Destroy.
7.4. Creating a cluster on bare metal
Important: The bare metal cluster function is a technology preview, and should not be used in production environments.
You can use the Red Hat Advanced Cluster Management for Kubernetes console to create a Red Hat OpenShift Container Platform cluster in a bare metal environment.
The options for bare metal in the console are a technology preview only, and are hidden by a feature flag by default. See the instructions for enabling the feature flag in the Prerequisites section.
7.4.1. Prerequisites
You need the following prerequisites before creating a cluster in a bare metal environment:
- A deployed Red Hat Advanced Cluster Management for Kubernetes hub cluster on OpenShift Container Platform version 4.5, or later.
- Internet access for your Red Hat Advanced Cluster Management for Kubernetes hub cluster so it can create the Kubernetes cluster in the bare metal environment
- Bare metal provider connection; see Creating a provider connection for bare metal for more information
- Login credentials for your bare metal environment, which include user name, password, and Baseboard Management Controller Address
- A Red Hat OpenShift Container Platform image pull secret; see Using image pull secrets
Bare metal feature flags that are enabled to view the bare metal options. The bare metal options are hidden by feature flags by default. Complete the following steps to enable the feature flags:
- Start the Red Hat OpenShift Container Platform command line interface.
- Set the featureFlags_baremetal setting to true for the console-header container by entering the following command:

oc patch deploy console-header -n <namespace> -p '{"spec":{"template":{"spec":{"containers":[{"name":"console-header","env": [{"name": "featureFlags_baremetal","value":"true"}]}]}}}}'

Replace <namespace> with your Red Hat Advanced Cluster Management project namespace.
After the update, the patched deployment should include the featureFlags_baremetal environment variable; see the sketch that follows these steps.
- Set the featureFlags_baremetal value to true for the hcm-ui container by entering the following command:

oc patch -n <namespace> $(oc get deploy -o name | grep consoleui) -p '{"spec":{"template":{"spec":{"containers":[{"name":"hcm-ui","env": [{"name": "featureFlags_baremetal","value":"true"}]}]}}}}'

Replace <namespace> with your Red Hat Advanced Cluster Management project namespace.
Your update should resemble the previous patch, with the featureFlags_baremetal environment variable set on the hcm-ui container; see the sketch that follows these steps.
- Make sure the console-chart-...-consoleui... and console-header-... pods are running:

oc -n open-cluster-management get pods

- When the pods are running again, log out of the Red Hat Advanced Cluster Management for Kubernetes console and log back in. The bare metal options are now included in the console.
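For reference, after both patches are applied, each patched container spec carries the new environment variable. The following is a minimal sketch of the relevant fragment only (deployment and container names are abbreviated), not a complete deployment manifest:

spec:
  template:
    spec:
      containers:
        - name: console-header   # or hcm-ui in the consoleui deployment
          env:
            - name: featureFlags_baremetal
              value: "true"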
Note: If you change your cloud provider access key, you must manually update the provisioned cluster access key. For more information, see the known issue, Automatic secret updates for provisioned clusters is not supported.
To create clusters from the Red Hat Advanced Cluster Management for Kubernetes console, complete the following steps:
- From the navigation menu, navigate to Automate infrastructure > Clusters.
- On the Clusters page, click Add Cluster.
Select Create a cluster.
Note: This procedure is for creating a cluster. If you have an existing cluster that you want to import, see Importing a target managed cluster to the hub cluster for those steps.
Enter a name for your cluster. This name is used in the hostname of the cluster.
Tip: You can view the yaml content updates as you enter the information in the console by setting the YAML switch to ON.
- Select Bare Metal for the infrastructure platform.
- Specify a Release image that you want to use for the cluster. This identifies the version of the Red Hat OpenShift Container Platform image that is used to create the cluster. If the version that you want to use is available, you can select the image from the list of images. If the image that you want to use is not a standard image, you can enter the URL of the image that you want to use. See Release images for more information about release images.
- Select your provider connection from the available connections on the list. If you do not have one configured, or want to configure a new one, select Add provider. See Creating a provider connection for bare metal for more information about creating a provider connection.
- Enter the base domain information that you configured in your bare metal environment. If there is already a base domain associated with the selected provider connection, that value is populated in that field. You can change the value by overwriting it. This name is used in the hostname of the cluster.
- Select your hosts from the list of hosts that are associated with your provider connection. Select a minimum of three assets that are on the same bridge networks as the hypervisor.
- Optional: Configure the cluster networking options.
- Optional: Configure a label for the cluster.
- Optional: Update the advanced settings, if you want to change the setting for including a configmap.
Click Create. You can view your cluster details after the create and import process is complete.
Note: You do not have to run the kubectl command that is provided with the cluster details to import the cluster. When you create the cluster, it is automatically configured under the management of Red Hat Advanced Cluster Management for Kubernetes.
7.4.3. Accessing your cluster
To access a cluster that is managed by Red Hat Advanced Cluster Management for Kubernetes, complete the following steps:
- From the Red Hat Advanced Cluster Management for Kubernetes navigation menu, navigate to Automate infrastructure > Clusters.
- Select the name of the cluster that you created or want to access. The cluster details are displayed.
- Select Reveal credentials to view the user name and password for the cluster. Note these values to use when you log in to the cluster.
- Select Console URL to link to the cluster.
- Log in to the cluster by using the user ID and password that you found in step 3.
- Select the Actions menu for the cluster that you want to access.
Select Launch to cluster.
Tip: If you already know the login credentials, you can access the cluster by selecting the Actions menu for the cluster and selecting Launch to cluster.
7.4.4. Removing a cluster from management
When you remove a Red Hat OpenShift Container Platform cluster that was created with Red Hat Advanced Cluster Management for Kubernetes from management, you can either detach the cluster or destroy it.
Detaching a cluster removes it from management, but does not completely delete it. You can import it again, if you decide that you want to bring it back under management. This is only an option when the cluster is in a Ready state.
Destroying a cluster removes it from management and deletes the components of the cluster. This is permanent, and it cannot be brought back under management after deletion.
- From the navigation menu, navigate to Automate infrastructure > Clusters.
- Select the option menu beside the cluster that you want to delete.
Select Destroy cluster or Detach cluster.
Tip: You can detach or destroy multiple clusters by selecting the check boxes of the clusters that you want to detach or destroy. Then select Detach or Destroy.
Chapter 8. Importing a target managed cluster to the hub cluster

You can import clusters from different Kubernetes cloud providers. After you import, the targeted cluster becomes a managed cluster for the Red Hat Advanced Cluster Management for Kubernetes hub cluster. Unless otherwise specified, complete the import tasks anywhere that you can access the hub cluster and the targeted managed cluster.
A hub cluster cannot manage any other hub cluster; you must import an existing cluster.
Choose from the following instructions to set up your managed cluster, either from the console or from the CLI:
Required user type or access level: Cluster administrator
8.1. Importing an existing cluster with the console
After you install Red Hat Advanced Cluster Management for Kubernetes, you are ready to import a cluster to manage. You can import from both the console and the CLI. Follow this procedure to import from the console. You need your terminal for authentication during this procedure.
8.1.1. Prerequisites
- You need a Red Hat Advanced Cluster Management for Kubernetes hub cluster that is deployed. If you are importing bare metal clusters, you must have the hub cluster installed on Red Hat OpenShift Container Platform version 4.4, or later.
- You need a cluster that you want to manage and Internet connectivity.
- Install kubectl. To install kubectl, see Install and Set Up kubectl in the Kubernetes documentation.
- You need the base64 command line tool.
Required user type or access level: Cluster administrator
8.1.2. Importing a cluster
You can import existing clusters from the Red Hat Advanced Cluster Management for Kubernetes console for each of the available cloud providers.
A hub cluster cannot manage any other hub cluster; you must import an existing cluster.
- From the navigation menu, hover over Automate infrastructure and click Clusters.
- Click Add cluster.
- Click Import an existing cluster.
- Provide a cluster name. By default, the namespace is set to the same value as your cluster name. Best practice: Leave the namespace value as it is; do not edit it.
Optional: Click to expand Edit cluster import YAML file and modify the endpoint configuration.
See Table 1. YAML file parameters and descriptions for details about each parameter.
- Optional: After you import, you can add labels by clicking Configure advanced parameters and use these labels to search.
- Optional: Configure the MANAGED CLUSTER URLS. When you configure the MANAGED CLUSTER URLS, the URLs display in the table when you run the oc get managedcluster command.
  - If it is not already on, turn on the YAML content by using the switch in the web console so you can view the content.
  - Add the managedClusterClientConfigs section to the ManagedCluster spec in the import.yaml file, as shown in the sketch that follows this step. Replace the URL value with the external access URL address of the managed cluster.
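A minimal sketch of that section follows; the URL is a placeholder that you replace with the external access URL of your managed cluster:

apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: <cluster_name>
spec:
  hubAcceptsClient: true
  managedClusterClientConfigs:
    - url: https://api.<cluster_name>.example.com:6443   # placeholder external access URL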
- Click Generate Command to retrieve the command to deploy the open-cluster-management-agent-addon.
- From the Import an existing cluster window, hover and click the Copy command icon to copy the import command and the token that you are provided. You must click the Copy icon to receive the accurate copy. Important: The command contains pull secret information that is copied to each of the imported clusters. Anyone who can access the imported clusters can also view the pull secret information. Consider creating a secondary pull secret at https://cloud.redhat.com/ or by creating a service account so your personal credentials are not compromised. See Using image pull secrets or Understanding and creating service accounts for more information.
- From your terminal, authenticate to your managed cluster. Configure your kubectl for your targeted managed cluster. See Supported clouds to learn how to configure your kubectl.
- To deploy the open-cluster-management-agent-addon to the managed cluster, run the command that you generated and copied from step 8.
- Click View cluster to view the Overview page and a summary of your cluster.
Note: You can continue to import more clusters. Click Import another to repeat the process.
8.1.2.1. YAML parameters and descriptions
Table 1: The following table lists the parameters and descriptions that are available in the YAML file:
| Parameter | Description | Default value |
|---|---|---|
| clusterLabels | Provide cluster labels; you can add labels to your file | none |
| clusterLabels.cloud | The provider label for your cluster | auto-detect |
| clusterLabels.vendor | The Kubernetes vendor label for your cluster | auto-detect |
| clusterLabels.environment | The environment label for your cluster | none |
| clusterLabels.region | The region where your cluster is set up | none |
| applicationManager.enabled | Enables multicluster manager application deployment, deploys subscription controller and deployable controller | true |
| searchCollector.enabled | Enables search collection and indexing | true |
| policyController.enabled | Enable the Governance and risk dashboard policy feature | true, updateInterval: 15 |
| certPolicyController.enabled | Monitors certificate expiration based on distributed policies | true |
| iamPolicyController | Monitors identity controls based on distributed policies | true |
| serviceRegistry.enabled | Service registry that is used to discover services that are deployed by Application Deployable among managed clusters. | false |
| serviceRegistry.dnsSuffix | The suffix of the registry DNS name, which is added to the end of the target cluster's DNS domain name. | mcm.svc |
| serviceRegistry.plugins | Comma-separated list of enabled plugins. Supported plugins: | kube-service |
| version | Version of | 2.0 |
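As a rough illustration of how these parameters appear in the cluster import YAML file, the following sketch shows the labels and add-on toggles only; the label values are examples, not part of the generated file:

clusterLabels:
  cloud: auto-detect
  vendor: auto-detect
  environment: dev        # example label
  region: us-east-1       # example label
applicationManager:
  enabled: true
searchCollector:
  enabled: true
policyController:
  enabled: true
certPolicyController:
  enabled: true
iamPolicyController:
  enabled: true
version: "2.0"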
8.1.3. Removing an imported cluster
Complete the following procedure to remove an imported cluster and the open-cluster-management-agent-addon that was created on the managed cluster.
- From the Clusters page, find your imported cluster in the table.
- Click Options > Detach cluster to remove your cluster from management.
8.2. Importing a managed cluster with the CLI
After you install Red Hat Advanced Cluster Management for Kubernetes, you are ready to import a cluster to manage. You can import from both the console and the CLI. Follow this procedure to import from the CLI.
A hub cluster cannot manage another hub cluster.
8.2.1. Prerequisites
- You need a Red Hat Advanced Cluster Management for Kubernetes hub cluster that is deployed. If you are importing bare metal clusters, you must have the hub cluster installed on Red Hat OpenShift Container Platform version 4.4, or later. Important: The bare metal function is a technology preview, and should not be used in production environments.
- You need a separate cluster that you want to manage and Internet connectivity.
- You need the Red Hat OpenShift Container Platform CLI version 4.3, or later, to run oc commands. See Getting started with the CLI for information about installing and configuring the Red Hat OpenShift CLI, oc.
- You need to install the Kubernetes CLI, kubectl. To install kubectl, see Install and Set Up kubectl in the Kubernetes documentation.
Note: Download the installation file for CLI tools from the console.
8.2.2. Supported architecture
- Linux
- macOS
8.2.3. Prepare for import
- Log in to your hub cluster. Run the following command:

oc login

- Run the following command on the hub cluster to create the namespace. Note: The cluster name that is defined in <cluster_name> is also used as the cluster namespace in the .yaml file and commands:

oc new-project ${CLUSTER_NAME}
oc label namespace ${CLUSTER_NAME} cluster.open-cluster-management.io/managedCluster=${CLUSTER_NAME}

- Edit the example ManagedCluster with the following sample of YAML:
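A minimal ManagedCluster definition resembles the following sketch; the labels shown are common defaults and can be adjusted for your environment:

apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: ${CLUSTER_NAME}
  labels:
    cloud: auto-detect
    vendor: auto-detect
spec:
  hubAcceptsClient: true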
- Save the file as managed-cluster.yaml.
- Apply the YAML file with the following command:
oc apply -f managed-cluster.yaml

- Create the klusterlet addon configuration file. Enter the following example YAML:
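A KlusterletAddonConfig that enables the add-ons described in the YAML parameters table might resemble the following sketch; the version value is an assumption for this release:

apiVersion: agent.open-cluster-management.io/v1
kind: KlusterletAddonConfig
metadata:
  name: ${CLUSTER_NAME}
  namespace: ${CLUSTER_NAME}
spec:
  clusterName: ${CLUSTER_NAME}
  clusterNamespace: ${CLUSTER_NAME}
  clusterLabels:
    cloud: auto-detect
    vendor: auto-detect
  applicationManager:
    enabled: true
  policyController:
    enabled: true
  searchCollector:
    enabled: true
  certPolicyController:
    enabled: true
  iamPolicyController:
    enabled: true
  version: "2.0.0"   # assumed version value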
- Save the file as klusterlet-addon-config.yaml.
- Apply the YAML. Run the following command:
oc apply -f klusterlet-addon-config.yaml
The ManagedCluster-Import-Controller generates a secret named ${CLUSTER_NAME}-import. The ${CLUSTER_NAME}-import secret contains the import.yaml that you apply to the managed cluster to install the klusterlet.
8.2.4. Importing the klusterlet
The import command contains pull secret information that is copied to each of the imported clusters. Anyone who can access the imported clusters can also view the pull secret information.
- Obtain the klusterlet-crd.yaml that was generated by the managed cluster import controller. Run the following command:

oc get secret ${CLUSTER_NAME}-import -n ${CLUSTER_NAME} -o jsonpath={.data.crds\\.yaml} | base64 --decode > klusterlet-crd.yaml

- Obtain the import.yaml that was generated by the managed cluster import controller. Run the following command:

oc get secret ${CLUSTER_NAME}-import -n ${CLUSTER_NAME} -o jsonpath={.data.import\\.yaml} | base64 --decode > import.yaml

- Log in to your target managed cluster.
- Apply the klusterlet-crd.yaml that was generated in step 1. Run the following command:

kubectl apply -f klusterlet-crd.yaml

- Apply the import.yaml file that was generated in step 2. Run the following command:

kubectl apply -f import.yaml

- Validate the pod status on the target managed cluster. Run the following command:
kubectl get pod -n open-cluster-management-agent
- Validate JOINED and AVAILABLE status for your imported cluster. Run the following command from the hub cluster:

kubectl get managedcluster -n ${CLUSTER_NAME}

Addons are installed after the managed cluster is AVAILABLE.
- Validate the pod status of addons on the target managed cluster. Run the following command:

kubectl get pod -n open-cluster-management-agent-addon
8.3. Modifying the klusterlet addons settings of your cluster

You can modify the settings of the klusterlet addons to change your configuration by using the hub cluster.
The klusterlet addon controller manages the functions that are enabled and disabled according to the settings in the klusterletaddonconfigs.agent.open-cluster-management.io Kubernetes resource.
The following settings can be updated in the klusterletaddonconfigs.agent.open-cluster-management.io Kubernetes resource:
| Setting name | Value |
|---|---|
| applicationmanager | true or false |
| policyController | true or false |
| searchCollector | true or false |
| certPolicyController | true or false |
| iamPolicyController | true or false |
8.3.1. Modify using the console on the hub cluster
You can modify the settings of the klusterletaddonconfigs.agent.open-cluster-management.io resource by using the hub cluster. Complete the following steps to change the settings:
- Authenticate into the Red Hat Advanced Cluster Management for Kubernetes console of the hub cluster.
- From the main menu of the hub cluster console, select Search.
- In the search parameters, enter the following value: kind:klusterletaddonconfigs
- Select the endpoint resource that you want to update.
- Find the spec section and select Edit to edit the content.
- Modify your settings.
- Select Save to apply your changes.
8.3.2. Modify using the command line on the hub cluster
You must have access to the <cluster-name> namespace to modify your settings by using the hub cluster. Complete the following steps:
- Authenticate into the hub cluster.
Enter the following command to edit the resource:
kubectl edit klusterletaddonconfigs.agent.open-cluster-management.io <cluster-name> -n <cluster-name>
- Find the spec section.
- Modify your settings, as necessary.
Chapter 9. Upgrading your cluster
After you create clusters that you want to manage with Red Hat Advanced Cluster Management for Kubernetes, you can use the Red Hat Advanced Cluster Management console to upgrade those clusters to the latest minor version that is available in the version channel that the managed cluster uses.
To upgrade to a major version, you must verify that you meet all of the prerequisites for upgrading to that version. You must update the version channel on the managed cluster before you can upgrade the cluster with the console. After you update the version channel on the managed cluster, the Red Hat Advanced Cluster Management console displays the latest versions that are available for the upgrade.
Note: You cannot upgrade Red Hat OpenShift Kubernetes Service clusters with the Red Hat Advanced Cluster Management for Kubernetes console.
This method of upgrading only works for Red Hat OpenShift Container Platform clusters that are in a Ready state.
To upgrade your cluster, complete the following steps:
- From the navigation menu, navigate to Automate infrastructure > Clusters. If an upgrade is available, it is shown in the Distribution version column.
- Select the clusters that you want to upgrade. Note: A cluster must be in Ready state, and must be an OpenShift Container Platform cluster to be upgraded with the console.
- Select Upgrade.
- Select the new version of each cluster.
- Select Upgrade.
9.1. Upgrading disconnected clusters
You can use Red Hat OpenShift Update Service with Red Hat Advanced Cluster Management for Kubernetes to upgrade your clusters in a disconnected environment.
Important: Red Hat OpenShift Update Service is a Red Hat OpenShift Container Platform Operator that is provided as a technical preview with OpenShift Container Platform 4.4. It is not intended for use in a production environment.
In some cases, security concerns prevent clusters from being connected directly to the Internet. This makes it difficult to know when upgrades are available, and how to process those upgrades. Configuring OpenShift Update Service can help.
OpenShift Update Service is a separate operator and operand that monitors the available versions of your managed clusters in a disconnected environment, and makes them available for upgrading your clusters in a disconnected environment. After OpenShift Update Service is configured, it can perform the following actions:
- Monitor when upgrades are available for your disconnected clusters.
- Identify which updates are mirrored to your local site for upgrading by using the graph data file.
- Notify you that an upgrade is available for your cluster by using the Red Hat Advanced Cluster Management console.
9.1.1. Prerequisites
You must have the following prerequisites before you can use OpenShift Update Service to upgrade your disconnected clusters:
A deployed Red Hat Advanced Cluster Management hub cluster that is running on Red Hat OpenShift Container Platform version 4.5, or later with restricted OLM configured. See Using Operator Lifecycle Manager on restricted networks for details about how to configure restricted OLM.
Tip: Make a note of the catalog source image when you configure restricted OLM.
- An OpenShift Container Platform cluster that is managed by the Red Hat Advanced Cluster Management hub cluster
Access credentials to a local repository where you can mirror the cluster images. See Creating a mirror registry for installation in a restricted network for more information about how to create this repository.
Note: The image for the current version of the cluster that you upgrade must always be available as one of the mirrored images. If an upgrade fails, the cluster reverts back to the version of the cluster at the time that the upgrade was attempted.
9.1.2. Prepare your disconnected mirror registry
You must mirror both the image that you want to upgrade to and the current image that you are upgrading from to your local mirror registry. Complete the following steps to mirror the images:
Create a script file that contains content that resembles the following example:
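A sketch of such a script follows, assuming an x86_64 release of 4.5.4 and a local registry host named mirror.registry.example.com:5000; adjust all of the values for your environment:

#!/bin/bash
# Mirror the OpenShift Container Platform release images into the local registry.
UPSTREAM_REGISTRY=quay.io
PRODUCT_REPO=openshift-release-dev
RELEASE_NAME=ocp-release
OCP_RELEASE=4.5.4-x86_64                          # example release; mirror your current and target versions
LOCAL_REGISTRY=mirror.registry.example.com:5000   # placeholder local registry
LOCAL_SECRET_JSON=/path/to/pull/secret

oc adm -a ${LOCAL_SECRET_JSON} release mirror \
  --from=${UPSTREAM_REGISTRY}/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE} \
  --to=${LOCAL_REGISTRY}/ocp4 \
  --to-release-image=${LOCAL_REGISTRY}/ocp4/release:${OCP_RELEASE}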
Replace /path/to/pull/secret with the path to your OpenShift Container Platform pull secret.
- Run the script to mirror the images, configure settings, and separate the release images from the release content.
Tip: You can use the output of the last line of this script when you create your ImageContentSourcePolicy.
9.1.3. Deploy the operator for OpenShift Update Service
To deploy the operator for OpenShift Update Service in your OpenShift Container Platform environment, complete the following steps:
- On the hub cluster, access the OpenShift Container Platform operator hub.
- Deploy the operator by selecting Red Hat OpenShift Update Service Operator. Update the default values, if necessary. The deployment of the operator creates a new project named openshift-cincinnati.
- Wait for the installation of the operator to finish.
Tip: You can check the status of the installation by entering the oc get pods command on your OpenShift Container Platform command line. Verify that the operator is in the running state.
9.1.4. Build the graph data init container
OpenShift Update Service uses graph data information to determine the available upgrades. In a connected environment, OpenShift Update Service pulls the graph data information for available upgrades directly from the Cincinnati graph data GitHub repository. Because you are configuring a disconnected environment, you must make the graph data available in a local repository by using an init container. Complete the following steps to create a graph data init container:
Clone the graph data Git repository by entering the following command:
git clone https://github.com/openshift/cincinnati-graph-data
- Create a file that contains the information for your graph data init. You can find this sample Dockerfile in the cincinnati-operator GitHub repository; a sketch of its contents follows this procedure. In this example:
  - The FROM value is the external registry where OpenShift Update Service finds the images.
  - The RUN commands create the directory and package the upgrade files.
  - The CMD command copies the package file to the local repository and extracts the files for an upgrade.
- Run the following commands to build the graph data init container:

podman build -f <path_to_Dockerfile> -t ${DISCONNECTED_REGISTRY}/cincinnati/cincinnati-graph-data-container:latest
podman push ${DISCONNECTED_REGISTRY}/cincinnati/cincinnati-graph-data-container:latest --authfile=/path/to/pull_secret.json

Replace path_to_Dockerfile with the path to the file that you created in the previous step.
Replace ${DISCONNECTED_REGISTRY}/cincinnati/cincinnati-graph-data-container with the path to your local graph data init container.
Replace /path/to/pull_secret with the path to your pull secret file.
Note: You can also replace podman in the commands with docker, if you don’t have podman installed.
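For reference, a graph data init container Dockerfile along the lines described above might resemble the following sketch; the base image and paths are assumptions rather than the exact file from the cincinnati-operator repository:

FROM registry.access.redhat.com/ubi8/ubi:latest

# Download and package the upgrade graph data
RUN curl -L -o cincinnati-graph-data.tar.gz https://github.com/openshift/cincinnati-graph-data/archive/master.tar.gz

RUN mkdir -p /var/lib/cincinnati/graph-data/

# Copy the package into the local repository path and extract it for an upgrade
CMD exec /bin/bash -c "tar xvzf cincinnati-graph-data.tar.gz -C /var/lib/cincinnati/graph-data/ --strip-components=1"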
9.1.5. Configure certificate for the mirrored registry
If you are using a secure external container registry to store your mirrored OpenShift Container Platform release images, OpenShift Update Service requires access to this registry to build an upgrade graph. Complete the following steps to configure your CA certificate to work with the OpenShift Update Service pod:
- Find the OpenShift Container Platform external registry API, which is located in image.config.openshift.io. This is where the external registry CA certificate is stored. See Image Registry Operator in OpenShift Container Platform in the OpenShift Container Platform documentation for more information.
- Create a ConfigMap in the openshift-config namespace. Add your CA certificate under the key cincinnati-registry. OpenShift Update Service uses this setting to locate your certificate. A sketch of such a ConfigMap follows this step.
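For example, a ConfigMap named trusted-ca in the openshift-config namespace might resemble the following sketch, with your registry CA certificate in place of the placeholder:

apiVersion: v1
kind: ConfigMap
metadata:
  name: trusted-ca
  namespace: openshift-config
data:
  cincinnati-registry: |
    -----BEGIN CERTIFICATE-----
    ...placeholder for your external registry CA certificate...
    -----END CERTIFICATE-----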
- Edit the cluster resource in the image.config.openshift.io API to set the additionalTrustedCA field to the name of the ConfigMap that you created:

oc patch image.config.openshift.io cluster -p '{"spec":{"additionalTrustedCA":{"name":"trusted-ca"}}}' --type merge

Replace trusted-ca with the name of your new ConfigMap.
The OpenShift Update Service Operator watches the image.config.openshift.io API and the ConfigMap that you created in the openshift-config namespace for changes, and restarts the deployment if the CA certificate changes.
9.1.6. Deploy the OpenShift Update Service instance
When you finish deploying the OpenShift Update Service instance on your hub cluster, this instance is located where the images for the cluster upgrades are mirrored and made available to the disconnected managed cluster. Complete the following steps to deploy the instance:
- If you do not want to use the default namespace of the operator, which is openshift-cincinnati, create a namespace for your OpenShift Update Service instance:
- In the OpenShift Container Platform hub cluster console navigation menu, select Administration > Namespaces.
- Select Create Namespace.
- Add the name of your namespace, and any other information for your namespace.
- Select Create to create the namespace.
- In the Installed Operators section of the OpenShift Container Platform console, select Red Hat OpenShift Update Service Operator.
- Select Create Instance in the menu.
Paste the contents from your OpenShift Update Service instance. Your YAML instance might resemble the following manifest:
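The exact apiVersion and kind depend on the version of the operator that you installed; the following sketch assumes the Cincinnati v1beta1 API that early versions of the Red Hat OpenShift Update Service Operator provide, with placeholder registry and image paths:

apiVersion: cincinnati.openshift.io/v1beta1   # assumed API for this operator version
kind: Cincinnati
metadata:
  name: openshift-update-service-instance
  namespace: openshift-cincinnati
spec:
  replicas: 1
  registry: mirror.registry.example.com:5000            # local disconnected registry (placeholder)
  repository: ocp4/release                               # repository of the mirrored release images (assumption)
  graphDataImage: mirror.registry.example.com:5000/cincinnati/cincinnati-graph-data-container:latest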
Replace the spec.registry value with the path to your local disconnected registry for your images.
Replace the spec.graphDataImage value with the path to your graph data init container. Tip: This is the same value that you used when you ran the podman push command to push your graph data init container.
- Select Create to create the instance.
- From the hub cluster CLI, enter the oc get pods command to view the status of the instance creation. It might take a while, but the process is complete when the result of the command shows that the instance and the operator are running.
Note: The steps in this section only apply if you have mirrored your releases into your mirrored registry.
OpenShift Container Platform has a default image registry value that specifies where it finds the upgrade packages. In a disconnected environment, you can create a policy to replace that value with the path to your local image registry where you mirrored your release images.
For these steps, the policy is named ImageContentSourcePolicy. Complete the following steps to create the policy:
- Log in to the OpenShift Container Platform environment of your hub cluster.
- In the OpenShift Container Platform navigation, select Administration > Custom Resource Definitions.
- Select the Instances tab.
- Select the name of the ImageContentSourcePolicy that you created when you set up your disconnected OLM to view the contents.
- Select the YAML tab to view the content in YAML format.
- Copy the entire contents of the ImageContentSourcePolicy.
- From the Red Hat Advanced Cluster Management console, select Govern risk > Create policy.
- Set the YAML switch to On to view the YAML version of the policy.
- Delete all of the content in the YAML code.
- Paste the YAML content for a custom policy into the window.
- Replace the content inside the objectDefinition section of the template with content that adds the settings for your ImageContentSourcePolicy. A sketch of the combined policy follows this procedure.
- Replace path-to-local-mirror with the path to your local mirror repository. Tip: You can find the path to your local mirror by entering the oc adm release mirror command.
- Select the box for Enforce if supported.
- Select Create to create the policy.
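The policy content is environment-specific; the following is a minimal sketch of a policy whose objectDefinition section carries an ImageContentSourcePolicy. The names, namespace, and mirror path are placeholders, and the PlacementRule and PlacementBinding that target the policy at clusters are omitted:

apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-mirror          # placeholder name
  namespace: default           # placeholder namespace
spec:
  disabled: false
  remediationAction: enforce
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: policy-mirror-config
        spec:
          remediationAction: enforce
          severity: low
          object-templates:
            - complianceType: musthave
              objectDefinition:
                apiVersion: operator.openshift.io/v1alpha1
                kind: ImageContentSourcePolicy
                metadata:
                  name: mirror-ocp
                spec:
                  repositoryDigestMirrors:
                    - mirrors:
                        - path-to-local-mirror
                      source: quay.io/openshift-release-dev/ocp-release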
Push the Catalogsource policy to the managed cluster to change the default location from a connected location to your disconnected local registry.
- In the Red Hat Advanced Cluster Management console, select Automate infrastructure > Clusters.
- Find the managed cluster to receive the policy in the list of clusters.
- Note the value of the name label for the managed cluster. The label format is name=managed-cluster-name. This value is used when pushing the policy.
- In the Red Hat Advanced Cluster Management console menu, select Govern risk > Create policy.
- Set the YAML switch to On to view the YAML version of the policy.
- Delete all of the content in the YAML code.
- Paste the YAML content for a custom policy into the window, then add the CatalogSource content to the policy. A sketch of the combined policy follows this procedure.
- Replace the value of spec.image with the path to your local restricted catalog source image.
- In the Red Hat Advanced Cluster Management console navigation, select Automate infrastructure > Clusters to check the status of the managed cluster. When the policy is applied, the cluster status is ready.
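A sketch of the combined policy follows, with a CatalogSource whose spec.image points to the local restricted catalog source image. The policy and catalog source names and the policy namespace are placeholders, and the PlacementRule and PlacementBinding that use the name=managed-cluster-name label are omitted:

apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-catalogsource          # placeholder name
  namespace: default                  # placeholder namespace
spec:
  disabled: false
  remediationAction: enforce
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: policy-catalogsource-config
        spec:
          remediationAction: enforce
          severity: low
          object-templates:
            - complianceType: musthave
              objectDefinition:
                apiVersion: operators.coreos.com/v1alpha1
                kind: CatalogSource
                metadata:
                  name: my-operator-catalog          # placeholder catalog source name
                  namespace: openshift-marketplace
                spec:
                  sourceType: grpc
                  image: path-to-local-restricted-catalog-source-image
                  displayName: Disconnected Operator Catalog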
Push the ClusterVersion policy to the managed cluster to change the default location where it retrieves its upgrades.
- From the managed cluster, confirm that the ClusterVersion upstream parameter is currently the default public OpenShift Update Service operand by entering the following command:

oc get clusterversion -o yaml

In the returned content, the spec.upstream value identifies the update server that the cluster currently uses.
- From the hub cluster, identify the route URL to the OpenShift Update Service operand by entering the following command: oc get routes. Tip: Note this value for later steps.
- In the hub cluster Red Hat Advanced Cluster Management console menu, select Govern risk > Create a policy.
- Set the YAML switch to On to view the YAML version of the policy.
- Delete all of the content in the YAML code.
- Paste the YAML content for a custom policy into the window, then add the ClusterVersion content to policy.spec in the policy section. A sketch of the combined policy follows this procedure.
- Replace the value of spec.upstream with the path to your hub cluster OpenShift Update Service operand. Tip: You can complete the following steps to determine the path to the operand:
  - Run the oc get routes -A command on the hub cluster.
  - Find the route to cincinnati. The path to the operand is the value in the HOST/PORT field.
- In the managed cluster CLI, confirm that the upstream parameter in the ClusterVersion is updated with the local hub cluster OpenShift Update Service URL by entering:

oc get clusterversion -o yaml

Verify that the spec.upstream value in the results matches your hub cluster OpenShift Update Service route URL.
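A sketch of the policy with the ClusterVersion settings added to policy.spec follows; spec.upstream points to your hub cluster OpenShift Update Service route, and the channel and route values are placeholders:

apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-clusterversion         # placeholder name
  namespace: default                  # placeholder namespace
spec:
  disabled: false
  remediationAction: enforce
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: policy-clusterversion-config
        spec:
          remediationAction: enforce
          severity: low
          object-templates:
            - complianceType: musthave
              objectDefinition:
                apiVersion: config.openshift.io/v1
                kind: ClusterVersion
                metadata:
                  name: version
                spec:
                  channel: stable-4.5       # example channel
                  upstream: https://cincinnati-openshift-cincinnati.apps.example.com/api/upgrades_info/v1/graph   # placeholder hub cluster route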
9.1.10. Viewing available upgrades
You can view a list of available upgrades for your managed cluster by completing the following steps:
- Log in to your Red Hat Advanced Cluster Management console.
- In the navigation menu, select Automate Infrastructure > Clusters.
- Select a cluster that is in the Ready state.
- From the Actions menu, select Upgrade cluster.
Verify that the optional upgrade paths are available.
Note: No available upgrade versions are shown if the current version is not mirrored into the local image repository.
9.1.11. Upgrading the cluster
After configuring the disconnected registry, Red Hat Advanced Cluster Management and OpenShift Update Service use the disconnected registry to determine if upgrades are available. If no available upgrades are displayed, make sure that you have the release image of the current level of the cluster and at least one later level mirrored in the local repository. If the release image for the current version of the cluster is not available, no upgrades are available.
Complete the following steps to upgrade:
- In the Red Hat Advanced Cluster Management console, select Automate infrastructure > Clusters.
- Find the cluster that you want to upgrade. If an upgrade is available, the Distribution version column for the cluster indicates that an upgrade is available.
- Select the Options menu for the cluster, and select Upgrade cluster.
- Select the target version for the upgrade, and select Upgrade.
The managed cluster is updated to the selected version.