Chapter 6. Deploying hosted control planes in a disconnected environment
6.1. Introduction to hosted control planes in a disconnected environment
In the context of hosted control planes, a disconnected environment is an OpenShift Container Platform deployment that is not connected to the internet and that uses hosted control planes as a base. You can deploy hosted control planes in a disconnected environment on bare metal or OpenShift Virtualization.
Hosted control planes in disconnected environments function differently than in standalone OpenShift Container Platform:
- The control plane is in the management cluster. The control plane is where the pods of the hosted control plane are run and managed by the Control Plane Operator.
- The data plane is in the workers of the hosted cluster. The data plane is where the workloads and other pods run, all managed by the HostedClusterConfig Operator.
Depending on where the pods are running, they are affected by the ImageDigestMirrorSet (IDMS) or ImageContentSourcePolicy (ICSP) that is created in the management cluster, or by the ImageContentSource that is set in the spec field of the manifest for the hosted cluster. The spec field is translated into an IDMS object on the hosted cluster.
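For example, mirror information in a HostedCluster manifest might resemble the following sketch. The imageContentSources field and its layout reflect the HyperShift API, but the registry addresses are placeholders for your environment:

# Mirror entries in a HostedCluster spec; source and mirror registries are illustrative.
spec:
  imageContentSources:
  - source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
    mirrors:
    - registry.dns.base.domain.name:5000/openshift/release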
You can deploy hosted control planes in a disconnected environment on IPv4, IPv6, and dual-stack networks. IPv4 is one of the simplest network configurations to deploy hosted control planes in a disconnected environment. IPv4 ranges require fewer external components than IPv6 or dual-stack setups. For hosted control planes on OpenShift Virtualization in a disconnected environment, use either an IPv4 or a dual-stack network.
Hosted control planes in a disconnected environment on a dual-stack network is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
6.2. Deploying hosted control planes on OpenShift Virtualization in a disconnected environment
When you deploy hosted control planes in a disconnected environment, some of the steps differ depending on the platform you use. The following procedures are specific to deployments on OpenShift Virtualization.
6.2.1. Prerequisites
- You have a disconnected OpenShift Container Platform environment serving as your management cluster.
- You have an internal registry to mirror images on. For more information, see About disconnected installation mirroring.
6.2.2. Configuring image mirroring for hosted control planes in a disconnected environment
Image mirroring is the process of fetching images from external registries, such as registry.redhat.io or quay.io, and storing them in your private registry.
In the following procedures, the oc-mirror tool is used, which is a binary that uses the ImageSetConfiguration object. In the file, you can specify the following information:
- The OpenShift Container Platform versions to mirror. The versions are in quay.io.
- The additional Operators to mirror. Select packages individually.
- The extra images that you want to add to the repository.
Prerequisites
Ensure that the registry server is running before you start the mirroring process.
Procedure
To configure image mirroring, complete the following steps:
- Ensure that your ${HOME}/.docker/config.json file is updated with the registries that you are going to mirror from and with the private registry that you plan to push the images to. For a reference entry, see the example that follows the callouts in this procedure.
- By using the following example, create an ImageSetConfiguration object to use for mirroring. Replace values as needed to match your environment:

  apiVersion: mirror.openshift.io/v1alpha2
  kind: ImageSetConfiguration
  storageConfig:
    registry:
      imageURL: registry.<dns.base.domain.name>:5000/openshift/release/metadata:latest 1
  mirror:
    platform:
      channels:
      - name: candidate-4.17
        minVersion: 4.x.y-build 2
        maxVersion: 4.x.y-build 3
        type: ocp
      kubeVirtContainer: true 4
      graph: true
    additionalImages:
    - name: quay.io/karmab/origin-keepalived-ipfailover:latest
    - name: quay.io/karmab/kubectl:latest
    - name: quay.io/karmab/haproxy:latest
    - name: quay.io/karmab/mdns-publisher:latest
    - name: quay.io/karmab/origin-coredns:latest
    - name: quay.io/karmab/curl:latest
    - name: quay.io/karmab/kcli:latest
    - name: quay.io/user-name/trbsht:latest
    - name: quay.io/user-name/hypershift:BMSelfManage-v4.17
    - name: registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.10
    operators:
    - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.17
      packages:
      - name: lvms-operator
      - name: local-storage-operator
      - name: odf-csi-addons-operator
      - name: odf-operator
      - name: mcg-operator
      - name: ocs-operator
      - name: metallb-operator
      - name: kubevirt-hyperconverged 5
1 Replace <dns.base.domain.name> with the DNS base domain name.
2 3 Replace 4.x.y-build with the supported OpenShift Container Platform version that you want to use.
4 Set this optional flag to true if you want to also mirror the container disk image for the Red Hat Enterprise Linux CoreOS (RHCOS) boot image for the KubeVirt provider. This flag is available with oc-mirror v2 only.
5 For deployments that use the KubeVirt provider, include this line.
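As a reference for the first step, a private registry entry in ${HOME}/.docker/config.json looks similar to the following sketch. The registry address and the base64-encoded user:password value are placeholders:

{
  "auths": {
    "registry.dns.base.domain.name:5000": {
      "auth": "ZHVtbXk6ZHVtbXk=",
      "email": "sample-email@domain.ltd"
    }
  }
}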
Start the mirroring process by entering the following command:
$ oc-mirror --v2 --config imagesetconfig.yaml docker://${REGISTRY}
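In this command, the REGISTRY variable is assumed to hold the address of your private registry, for example:

$ export REGISTRY=registry.<dns.base.domain.name>:5000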
After the mirroring process is finished, you have a new folder named oc-mirror-workspace/results-XXXXXX/, which contains the IDMS and the catalog sources to apply on the hosted cluster.
Mirror the nightly or CI versions of OpenShift Container Platform by configuring the imagesetconfig.yaml file as follows:

apiVersion: mirror.openshift.io/v2alpha1
kind: ImageSetConfiguration
mirror:
  platform:
    graph: true
    release: registry.ci.openshift.org/ocp/release:4.x.y-build 1
    kubeVirtContainer: true 2
# ...
1 Replace 4.x.y-build with the supported OpenShift Container Platform version that you want to use.
2 Set this optional flag to true if you want to also mirror the container disk image for the Red Hat Enterprise Linux CoreOS (RHCOS) boot image for the KubeVirt provider. This flag is available with oc-mirror v2 only.
Apply the changes by entering the following command:
$ oc-mirror --v2 --config imagesetconfig.yaml docker://${REGISTRY}
- Mirror the latest multicluster engine Operator images by following the steps in Install on disconnected networks.
6.2.3. Applying objects in the management cluster
After the mirroring process is complete, you need to apply two objects in the management cluster:
- ImageContentSourcePolicy (ICSP) or ImageDigestMirrorSet (IDMS)
- Catalog sources
When you use the oc-mirror tool, the output artifacts are in a folder named oc-mirror-workspace/results-XXXXXX/.
The ICSP or IDMS initiates a MachineConfig change that does not restart your nodes but restarts the kubelet on each of them. After the nodes are marked as READY, you need to apply the newly generated catalog sources.
The catalog sources initiate actions in the openshift-marketplace Operator, such as downloading the catalog image and processing it to retrieve all the PackageManifests that are included in that image.
Procedure
To check the new sources, run the following command by using the new CatalogSource as a source:
$ oc get packagemanifest
To apply the artifacts, complete the following steps:
Create the ICSP or IDMS artifacts by entering the following command:
$ oc apply -f oc-mirror-workspace/results-XXXXXX/imageContentSourcePolicy.yaml
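One way to watch the rollout that the ICSP or IDMS triggers is to monitor the machine config pools, which report UPDATED as True when the kubelet restarts are finished. This is a convenience check, not a required step:

$ oc get mcp -w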
Wait for the nodes to become ready, and then enter the following command:
$ oc apply -f catalogSource-XXXXXXXX-index.yaml
Mirror the OLM catalogs and configure the hosted cluster to point to the mirror.
When you use the management (default) OLMCatalogPlacement mode, the image stream that is used for OLM catalogs is not automatically amended with override information from the ICSP on the management cluster.
- If the OLM catalogs are properly mirrored to an internal registry by using the original name and tag, add the hypershift.openshift.io/olm-catalogs-is-registry-overrides annotation to the HostedCluster resource. The format is "sr1=dr1,sr2=dr2", where the source registry string is a key and the destination registry is a value.
- To bypass the OLM catalog image stream mechanism, use the following four annotations on the HostedCluster resource to directly specify the addresses of the four images to use for OLM Operator catalogs, as shown in the sketch after this list:
  - hypershift.openshift.io/certified-operators-catalog-image
  - hypershift.openshift.io/community-operators-catalog-image
  - hypershift.openshift.io/redhat-marketplace-catalog-image
  - hypershift.openshift.io/redhat-operators-catalog-image
In this case, the image stream is not created, and you must update the value of the annotations when the internal mirror is refreshed to pull in Operator updates.
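A sketch of a HostedCluster resource that uses these annotations follows. The catalog index addresses are placeholders for images that you mirrored to your internal registry:

apiVersion: hypershift.openshift.io/v1beta1
kind: HostedCluster
metadata:
  name: example
  namespace: clusters
  annotations:
    hypershift.openshift.io/certified-operators-catalog-image: registry.dns.base.domain.name:5000/redhat/certified-operator-index:v4.17
    hypershift.openshift.io/community-operators-catalog-image: registry.dns.base.domain.name:5000/redhat/community-operator-index:v4.17
    hypershift.openshift.io/redhat-marketplace-catalog-image: registry.dns.base.domain.name:5000/redhat/redhat-marketplace-index:v4.17
    hypershift.openshift.io/redhat-operators-catalog-image: registry.dns.base.domain.name:5000/redhat/redhat-operator-index:v4.17
# ...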
Next steps
Deploy the multicluster engine Operator by completing the steps in Deploying multicluster engine Operator for a disconnected installation of hosted control planes.
6.2.4. Deploying multicluster engine Operator for a disconnected installation of hosted control planes
The multicluster engine for Kubernetes Operator plays a crucial role in deploying clusters across providers. If you do not have multicluster engine Operator installed, review the multicluster engine Operator documentation to understand the prerequisites and steps to install it.
6.2.5. Configuring TLS certificates for a disconnected installation of hosted control planes
To ensure proper function in a disconnected deployment, you need to configure the registry CA certificates in the management cluster and the worker nodes for the hosted cluster.
6.2.5.1. Adding the registry CA to the management cluster
To add the registry CA to the management cluster, complete the following steps.
Procedure
Create a config map that resembles the following example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: <config_map_name> 1
  namespace: <config_map_namespace> 2
data: 3
  <registry_name>..<port>: | 4
    -----BEGIN CERTIFICATE-----
    -----END CERTIFICATE-----
  <registry_name>..<port>: |
    -----BEGIN CERTIFICATE-----
    -----END CERTIFICATE-----
  <registry_name>..<port>: |
    -----BEGIN CERTIFICATE-----
    -----END CERTIFICATE-----
1 Specify the name of the config map.
2 Specify the namespace for the config map.
3 In the data field, specify the registry names and the registry certificate content. Replace <port> with the port where the registry server is running; for example, 5000.
4 Ensure that the data in the config map is defined by using | only instead of other methods, such as | -. If you use other methods, issues can occur when the pod reads the certificates.
Patch the cluster-wide object, image.config.openshift.io, to include the following specification:

spec:
  additionalTrustedCA:
    name: registry-config
As a result of this patch, the control plane nodes can retrieve images from the private registry and the HyperShift Operator can extract the OpenShift Container Platform payload for hosted cluster deployments.
The process to patch the object might take several minutes to be completed.
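A minimal way to apply this specification is a merge patch; this sketch assumes that the config map is named registry-config, as in the preceding example:

$ oc patch image.config.openshift.io cluster --type merge -p '{"spec":{"additionalTrustedCA":{"name":"registry-config"}}}'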
6.2.5.2. Adding the registry CA to the worker nodes for the hosted cluster
For the data plane workers in the hosted cluster to retrieve images from the private registry, you need to add the registry CA to the worker nodes.
Procedure
In the hc.spec.additionalTrustBundle field of the HostedCluster resource, add the following specification:

spec:
  additionalTrustBundle:
    name: user-ca-bundle 1
1 The user-ca-bundle entry is a config map that you create in the next step.
In the same namespace where the HostedCluster object is created, create the user-ca-bundle config map. The config map resembles the following example:

apiVersion: v1
data:
  ca-bundle.crt: |
    // Registry1 CA
    -----BEGIN CERTIFICATE-----
    -----END CERTIFICATE-----
    // Registry2 CA
    -----BEGIN CERTIFICATE-----
    -----END CERTIFICATE-----
    // Registry3 CA
    -----BEGIN CERTIFICATE-----
    -----END CERTIFICATE-----
kind: ConfigMap
metadata:
  name: user-ca-bundle
  namespace: <hosted_cluster_namespace> 1
1 Specify the namespace where the HostedCluster object is created.
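One way to create this config map is from a PEM bundle file; the file path here is a placeholder for your environment:

$ oc create configmap user-ca-bundle -n <hosted_cluster_namespace> --from-file=ca-bundle.crt=/path/to/ca-bundle.crt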
6.2.6. Creating a hosted cluster on OpenShift Virtualization
A hosted cluster is an OpenShift Container Platform cluster with its control plane and API endpoint hosted on a management cluster. The hosted cluster includes the control plane and its corresponding data plane.
6.2.6.1. Requirements to deploy hosted control planes on OpenShift Virtualization
As you prepare to deploy hosted control planes on OpenShift Virtualization, consider the following information:
- Run the hub cluster and workers on the same platform for hosted control planes.
- Each hosted cluster must have a cluster-wide unique name. A hosted cluster name cannot be the same as any existing managed cluster in order for multicluster engine Operator to manage it.
- Do not use clusters as a hosted cluster name.
- A hosted cluster cannot be created in the namespace of a multicluster engine Operator managed cluster.
- When you configure storage for hosted control planes, consider the recommended etcd practices. To ensure that you meet the latency requirements, dedicate a fast storage device to all hosted control plane etcd instances that run on each control-plane node. You can use LVM storage to configure a local storage class for hosted etcd pods. For more information, see "Recommended etcd practices" and "Persistent storage using Logical Volume Manager storage".
6.2.6.2. Creating a hosted cluster with the KubeVirt platform by using the CLI
To create a hosted cluster, you can use the hosted control plane command-line interface, hcp.
Procedure
Enter the following command:
$ hcp create cluster kubevirt \
    --name <hosted-cluster-name> \ 1
    --node-pool-replicas <worker-count> \ 2
    --pull-secret <path-to-pull-secret> \ 3
    --memory <value-for-memory> \ 4
    --cores <value-for-cpu> \ 5
    --etcd-storage-class=<etcd-storage-class> 6
1 Specify the name of your hosted cluster, for instance, example.
2 Specify the worker count, for example, 2.
3 Specify the path to your pull secret, for example, /user/name/pullsecret.
4 Specify a value for memory, for example, 6Gi.
5 Specify a value for CPU, for example, 2.
6 Specify the etcd storage class name, for example, lvm-storageclass.
Note: You can use the --release-image flag to set up the hosted cluster with a specific OpenShift Container Platform release.
A default node pool is created for the cluster with two virtual machine worker replicas according to the --node-pool-replicas flag.
After a few moments, verify that the hosted control plane pods are running by entering the following command:
$ oc -n clusters-<hosted-cluster-name> get pods
Example output
NAME                                           READY   STATUS    RESTARTS   AGE
capi-provider-5cc7b74f47-n5gkr                 1/1     Running   0          3m
catalog-operator-5f799567b7-fd6jw              2/2     Running   0          69s
certified-operators-catalog-784b9899f9-mrp6p   1/1     Running   0          66s
cluster-api-6bbc867966-l4dwl                   1/1     Running   0          66s
.
.
.
redhat-operators-catalog-9d5fd4d44-z8qqk       1/1     Running   0          66s
A hosted cluster that has worker nodes that are backed by KubeVirt virtual machines typically takes 10-15 minutes to be fully provisioned.
To check the status of the hosted cluster, see the corresponding HostedCluster resource by entering the following command:
$ oc get --namespace clusters hostedclusters
See the following example output, which illustrates a fully provisioned HostedCluster object:

NAMESPACE   NAME      VERSION   KUBECONFIG                 PROGRESS    AVAILABLE   PROGRESSING   MESSAGE
clusters    example   4.x.0     example-admin-kubeconfig   Completed   True        False         The hosted control plane is available
Replace 4.x.0 with the supported OpenShift Container Platform version that you want to use.
6.2.6.3. Configuring the default ingress and DNS for hosted control planes on OpenShift Virtualization
Every OpenShift Container Platform cluster includes a default application Ingress Controller, which must have a wildcard DNS record associated with it. By default, hosted clusters that are created by using the HyperShift KubeVirt provider automatically become a subdomain of the OpenShift Container Platform cluster that the KubeVirt virtual machines run on.
For example, your OpenShift Container Platform cluster might have the following default ingress DNS entry:
*.apps.mgmt-cluster.example.com
As a result, a KubeVirt hosted cluster that is named guest and that runs on that underlying OpenShift Container Platform cluster has the following default ingress:
*.apps.guest.apps.mgmt-cluster.example.com
Procedure
For the default ingress DNS to work properly, the cluster that hosts the KubeVirt virtual machines must allow wildcard DNS routes.
You can configure this behavior by entering the following command:
$ oc patch ingresscontroller -n openshift-ingress-operator default --type=json -p '[{ "op": "add", "path": "/spec/routeAdmission", "value": {"wildcardPolicy": "WildcardsAllowed"}}]'
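To confirm that the policy was applied, you can read the field back; this verification step is optional:

$ oc get ingresscontroller -n openshift-ingress-operator default -o jsonpath='{.spec.routeAdmission.wildcardPolicy}'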
When you use the default hosted cluster ingress, connectivity is limited to HTTPS traffic over port 443. Plain HTTP traffic over port 80 is rejected. This limitation applies to only the default ingress behavior.
6.2.6.4. Customizing ingress and DNS behavior
If you do not want to use the default ingress and DNS behavior, you can configure a KubeVirt hosted cluster with a unique base domain at creation time. This option requires manual configuration steps during creation and involves three main steps: cluster creation, load balancer creation, and wildcard DNS configuration.
6.2.6.4.1. Deploying a hosted cluster that specifies the base domain
To create a hosted cluster that specifies a base domain, complete the following steps.
Procedure
Enter the following command:
$ hcp create cluster kubevirt \
    --name <hosted_cluster_name> \ 1
    --node-pool-replicas <worker_count> \ 2
    --pull-secret <path_to_pull_secret> \ 3
    --memory <value_for_memory> \ 4
    --cores <value_for_cpu> \ 5
    --base-domain <basedomain> 6
1 Specify the name of your hosted cluster.
2 Specify the worker count, for example, 2.
3 Specify the path to your pull secret, for example, /user/name/pullsecret.
4 Specify a value for memory, for example, 6Gi.
5 Specify a value for CPU, for example, 2.
6 Specify the base domain, for example, hypershift.lab.
As a result, the hosted cluster has an ingress wildcard that is configured for the cluster name and the base domain, for example, *.apps.example.hypershift.lab. The hosted cluster remains in Partial status because after you create a hosted cluster with a unique base domain, you must configure the required DNS records and load balancer.
View the status of your hosted cluster by entering the following command:
$ oc get --namespace clusters hostedclusters
Example output
NAME      VERSION   KUBECONFIG                 PROGRESS   AVAILABLE   PROGRESSING   MESSAGE
example             example-admin-kubeconfig   Partial    True        False        The hosted control plane is available
Access the cluster by entering the following commands:
$ hcp create kubeconfig --name <hosted_cluster_name> > <hosted_cluster_name>-kubeconfig
$ oc --kubeconfig <hosted_cluster_name>-kubeconfig get co
Example output
NAME      VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
console   4.x.0     False       False         False      30m     RouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.example.hypershift.lab): Get "https://console-openshift-console.apps.example.hypershift.lab": dial tcp: lookup console-openshift-console.apps.example.hypershift.lab on 172.31.0.10:53: no such host
ingress   4.x.0     True        False         True       28m     The "default" ingress controller reports Degraded=True: DegradedConditions: One or more other status conditions indicate a degraded state: CanaryChecksSucceeding=False (CanaryChecksRepetitiveFailures: Canary route checks for the default ingress controller are failing)
Replace 4.x.0 with the supported OpenShift Container Platform version that you want to use.
Next steps
To fix the errors in the output, complete the steps in "Setting up the load balancer" and "Setting up a wildcard DNS".
If your hosted cluster is on bare metal, you might need MetalLB to set up load balancer services. For more information, see "Configuring MetalLB".
6.2.6.4.2. Setting up the load balancer
Set up the load balancer service that routes ingress traffic to the KubeVirt VMs and assigns a wildcard DNS entry to the load balancer IP address.
Procedure
A NodePort service that exposes the hosted cluster ingress already exists. You can export the node ports and create the load balancer service that targets those ports.
Get the HTTP node port by entering the following command:
$ oc --kubeconfig <hosted_cluster_name>-kubeconfig get services -n openshift-ingress router-nodeport-default -o jsonpath='{.spec.ports[?(@.name=="http")].nodePort}'
Note the HTTP node port value to use in the next step.
Get the HTTPS node port by entering the following command:
$ oc --kubeconfig <hosted_cluster_name>-kubeconfig get services -n openshift-ingress router-nodeport-default -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}'
Note the HTTPS node port value to use in the next step.
Create the load balancer service by entering the following command:
$ oc apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  labels:
    app: <hosted_cluster_name>
  name: <hosted_cluster_name>-apps
  namespace: clusters-<hosted_cluster_name>
spec:
  ports:
  - name: https-443
    port: 443
    protocol: TCP
    targetPort: <https_node_port> 1
  - name: http-80
    port: 80
    protocol: TCP
    targetPort: <http_node_port> 2
  selector:
    kubevirt.io: virt-launcher
  type: LoadBalancer
EOF

1 Specify the HTTPS node port value that you noted in the previous step.
2 Specify the HTTP node port value that you noted in the previous step.
6.2.6.4.3. Setting up a wildcard DNS
Set up a wildcard DNS record or CNAME that references the external IP of the load balancer service.
Procedure
Get the external IP address by entering the following command:
$ oc -n clusters-<hosted_cluster_name> get service <hosted_cluster_name>-apps -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
Example output
192.168.20.30
Configure a wildcard DNS entry that references the external IP address. View the following example DNS entry:
*.apps.<hosted_cluster_name>.<base_domain>.
The DNS entry must be able to route inside and outside of the cluster.
Example DNS resolution
$ dig +short test.apps.example.hypershift.lab
192.168.20.30
Check that the hosted cluster status has moved from Partial to Completed by entering the following command:
$ oc get --namespace clusters hostedclusters
Example output
NAME      VERSION   KUBECONFIG                 PROGRESS    AVAILABLE   PROGRESSING   MESSAGE
example   4.x.0     example-admin-kubeconfig   Completed   True        False         The hosted control plane is available
Replace 4.x.0 with the supported OpenShift Container Platform version that you want to use.
6.2.7. Finishing the deployment
You can monitor the deployment of a hosted cluster from two perspectives: the control plane and the data plane.
6.2.7.1. Monitoring the control plane
While the deployment proceeds, you can monitor the control plane by gathering information about the following artifacts:
- The HyperShift Operator
- The HostedControlPlane pod
- The bare metal hosts
- The agents
- The InfraEnv resource
- The HostedCluster and NodePool resources
Procedure
Enter the following commands to monitor the control plane:
$ export KUBECONFIG=/root/.kcli/clusters/hub-ipv4/auth/kubeconfig
$ watch "oc get pod -n hypershift;echo;echo;oc get pod -n clusters-hosted-ipv4;echo;echo;oc get bmh -A;echo;echo;oc get agent -A;echo;echo;oc get infraenv -A;echo;echo;oc get hostedcluster -A;echo;echo;oc get nodepool -A;echo;echo;"
6.2.7.2. Monitoring the data plane
While the deployment proceeds, you can monitor the data plane by gathering information about the following artifacts:
- The cluster version
- The nodes, specifically, whether the nodes joined the cluster
- The cluster Operators
Procedure
Enter the following commands:
$ oc get secret -n clusters-hosted-ipv4 admin-kubeconfig -o jsonpath='{.data.kubeconfig}' |base64 -d > /root/hc_admin_kubeconfig.yaml
$ export KUBECONFIG=/root/hc_admin_kubeconfig.yaml
$ watch "oc get clusterversion,nodes,co"
6.3. Deploying hosted control planes on bare metal in a disconnected environment
When you provision hosted control planes on bare metal, you use the Agent platform. The Agent platform and multicluster engine for Kubernetes Operator work together to enable disconnected deployments. The Agent platform uses the central infrastructure management service to add worker nodes to a hosted cluster. For an introduction to the central infrastructure management service, see Enabling the central infrastructure management service.
6.3.1. Disconnected environment architecture for bare metal
The following example describes the architecture of a disconnected environment:
- Configure infrastructure services, including the registry certificate deployment with TLS support, web server, and DNS, to ensure that the disconnected deployment works.
- Create a config map in the openshift-config namespace. In this example, the config map is named registry-config. The content of the config map is the Registry CA certificate. The data field of the config map must contain the following key/value:
  - Key: <registry_dns_domain_name>..<port>, for example, registry.hypershiftdomain.lab..5000:. Ensure that you place .. after the registry DNS domain name when you specify a port.
  - Value: The certificate content
  For more information about creating a config map, see Configuring TLS certificates for a disconnected installation of hosted control planes.
- Modify the images.config.openshift.io custom resource (CR) specification and add a new field named additionalTrustedCA with a value of name: registry-config.
- Create a config map that contains two data fields. One field contains the registries.conf file in RAW format, and the other field contains the Registry CA and is named ca-bundle.crt. The config map belongs to the multicluster-engine namespace, and the config map name is referenced in other objects. For an example of a config map, see the following sample configuration:

apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-registries
  namespace: multicluster-engine
  labels:
    app: assisted-service
data:
  ca-bundle.crt: |
    -----BEGIN CERTIFICATE-----
    # ...
    -----END CERTIFICATE-----
  registries.conf: |
    unqualified-search-registries = ["registry.access.redhat.com", "docker.io"]

    [[registry]]
    prefix = ""
    location = "registry.redhat.io/openshift4"
    mirror-by-digest-only = true

    [[registry.mirror]]
    location = "registry.ocp-edge-cluster-0.qe.lab.redhat.com:5000/openshift4"

    [[registry]]
    prefix = ""
    location = "registry.redhat.io/rhacm2"
    mirror-by-digest-only = true
    # ...
# ...
- In the multicluster engine Operator namespace, you create the multiclusterengine CR, which enables both the Agent and hypershift-addon add-ons. The multicluster engine Operator namespace must contain the config maps to modify behavior in a disconnected deployment. The namespace also contains the multicluster-engine, assisted-service, and hypershift-addon-manager pods.
- Create the following objects that are necessary to deploy the hosted cluster:
- Secrets: Secrets contain the pull secret, SSH key, and etcd encryption key.
- Config map: The config map contains the CA certificate of the private registry.
- HostedCluster: The HostedCluster resource defines the configuration of the cluster that the user intends to create.
- NodePool: The NodePool resource identifies the node pool that references the machines to use for the data plane.
- After you create the hosted cluster objects, the HyperShift Operator establishes the HostedControlPlane namespace to accommodate control plane pods. The namespace also hosts components such as Agents, bare metal hosts (BMHs), and the InfraEnv resource. Later, you create the InfraEnv resource, and after ISO creation, you create the BMHs and their secrets that contain baseboard management controller (BMC) credentials.
- The Metal3 Operator in the openshift-machine-api namespace inspects the new BMHs. Then, the Metal3 Operator tries to connect to the BMCs to start them by using the configured LiveISO and RootFS values that are specified through the AgentServiceConfig CR in the multicluster engine Operator namespace.
- After the worker nodes of the HostedCluster resource are started, an Agent container is started. This agent establishes contact with the Assisted Service, which orchestrates the actions to complete the deployment. Initially, you need to scale the NodePool resource to the number of worker nodes for the HostedCluster resource. The Assisted Service manages the remaining tasks.
- At this point, you wait for the deployment process to be completed.
6.3.2. Requirements to deploy hosted control planes on bare metal in a disconnected environment
To configure hosted control planes in a disconnected environment, you must meet the following prerequisites:
- CPU: The number of CPUs provided determines how many hosted clusters can run concurrently. In general, use 16 CPUs for each node for 3 nodes. For minimal development, you can use 12 CPUs for each node for 3 nodes.
- Memory: The amount of RAM affects how many hosted clusters can be hosted. Use 48 GB of RAM for each node. For minimal development, 18 GB of RAM might be sufficient.
- Storage: Use SSD storage for multicluster engine Operator.
- Management cluster: 250 GB.
- Registry: The storage needed depends on the number of releases, operators, and images that are hosted. An acceptable number might be 500 GB, preferably separated from the disk that hosts the hosted cluster.
- Web server: The storage needed depends on the number of ISOs and images that are hosted. An acceptable number might be 500 GB.
- Production: For a production environment, separate the management cluster, the registry, and the web server on different disks. This example illustrates a possible configuration for production:
- Registry: 2 TB
- Management cluster: 500 GB
- Web server: 2 TB
6.3.3. Extracting the release image digest
You can extract the OpenShift Container Platform release image digest by using the tagged image.
Procedure
Obtain the image digest by running the following command:
$ oc adm release info <tagged_openshift_release_image> | grep "Pull From"
Replace <tagged_openshift_release_image> with the tagged image for the supported OpenShift Container Platform version, for example, quay.io/openshift-release-dev/ocp-release:4.14.0-x86_64.
.Example output
Pull From: quay.io/openshift-release-dev/ocp-release@sha256:69d1292f64a2b67227c5592c1a7d499c7d00376e498634ff8e1946bc9ccdddfe
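You can then use the digest form wherever a release image is expected, such as with the --release-image flag that is mentioned earlier in this chapter. The following value reuses the digest from the preceding example output:

--release-image quay.io/openshift-release-dev/ocp-release@sha256:69d1292f64a2b67227c5592c1a7d499c7d00376e498634ff8e1946bc9ccdddfe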
6.3.4. Configuring the hypervisor for a disconnected installation of hosted control planes
The following information applies to virtual machine environments only.
Procedure
To deploy a virtual management cluster, access the required packages by entering the following command:
$ sudo dnf install dnsmasq radvd vim golang podman bind-utils net-tools httpd-tools tree htop strace tmux -y
Enable and start the Podman service by entering the following command:
$ systemctl enable --now podman
To use kcli to deploy the management cluster and other virtual components, install and configure the hypervisor by entering the following commands:
$ sudo yum -y install libvirt libvirt-daemon-driver-qemu qemu-kvm
$ sudo usermod -aG qemu,libvirt $(id -un)
$ sudo newgrp libvirt
$ sudo systemctl enable --now libvirtd
$ sudo dnf -y copr enable karmab/kcli
$ sudo dnf -y install kcli
$ sudo kcli create pool -p /var/lib/libvirt/images default
$ kcli create host kvm -H 127.0.0.1 local
$ sudo setfacl -m u:$(id -un):rwx /var/lib/libvirt/images
$ kcli create network -c 192.168.122.0/24 default
Enable the network manager dispatcher to ensure that virtual machines can resolve the required domains, routes, and registries. To enable the network manager dispatcher, in the /etc/NetworkManager/dispatcher.d/ directory, create a script named forcedns that contains the following content:

#!/bin/bash
export IP="192.168.126.1" 1
export BASE_RESOLV_CONF="/run/NetworkManager/resolv.conf"
if ! [[ `grep -q "$IP" /etc/resolv.conf` ]]; then
  export TMP_FILE=$(mktemp /etc/forcedns_resolv.conf.XXXXXX)
  cp $BASE_RESOLV_CONF $TMP_FILE
  chmod --reference=$BASE_RESOLV_CONF $TMP_FILE
  sed -i -e "s/dns.base.domain.name//" -e "s/search /& dns.base.domain.name /" -e "0,/nameserver/s/nameserver/& $IP\n&/" $TMP_FILE 2
  mv $TMP_FILE /etc/resolv.conf
fi
echo "ok"

1 Set the IP variable to the IP address of the hypervisor interface that hosts the management cluster.
2 Replace dns.base.domain.name with the DNS base domain name.
After you create the file, add permissions by entering the following command:
$ chmod 755 /etc/NetworkManager/dispatcher.d/forcedns
- Run the script and verify that the output returns ok, as shown in the following example.
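A direct invocation is enough for this check because the script ignores the dispatcher arguments; the ok line is the expected output:

$ /etc/NetworkManager/dispatcher.d/forcedns
ok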
- Configure ksushy to simulate baseboard management controllers (BMCs) for the virtual machines. Enter the following commands:
$ sudo dnf install python3-pyOpenSSL.noarch python3-cherrypy -y
$ kcli create sushy-service --ssl --ipv6 --port 9000
$ sudo systemctl daemon-reload
$ systemctl enable --now ksushy
Test whether the service is correctly functioning by entering the following command:
$ systemctl status ksushy
If you are working in a development environment, configure the hypervisor system to allow various types of connections through different virtual networks within the environment.
Note: If you are working in a production environment, you must establish proper rules for the firewalld service and configure SELinux policies to maintain a secure environment.
For SELinux, enter the following command:
$ sed -i s/^SELINUX=.*$/SELINUX=permissive/ /etc/selinux/config; setenforce 0
For firewalld, enter the following command:
$ systemctl disable --now firewalld
For libvirtd, enter the following commands:
$ systemctl restart libvirtd
$ systemctl enable --now libvirtd
6.3.5. DNS configurations on bare metal
The API Server for the hosted cluster is exposed as a NodePort service. A DNS entry must exist for api.<hosted_cluster_name>.<base_domain> that points to the destination where the API Server can be reached.
The DNS entry can be as simple as a record that points to one of the nodes in the managed cluster that is running the hosted control plane. The entry can also point to a load balancer that is deployed to redirect incoming traffic to the ingress pods.
Example DNS configuration
api.example.krnl.es.     IN A 192.168.122.20
api.example.krnl.es.     IN A 192.168.122.21
api.example.krnl.es.     IN A 192.168.122.22
api-int.example.krnl.es. IN A 192.168.122.20
api-int.example.krnl.es. IN A 192.168.122.21
api-int.example.krnl.es. IN A 192.168.122.22
*.apps.example.krnl.es.  IN A 192.168.122.23
If you are configuring DNS for a disconnected environment on an IPv6 network, the configuration looks like the following example.
Example DNS configuration for an IPv6 network
api.example.krnl.es.     IN AAAA 2620:52:0:1306::5
api.example.krnl.es.     IN AAAA 2620:52:0:1306::6
api.example.krnl.es.     IN AAAA 2620:52:0:1306::7
api-int.example.krnl.es. IN AAAA 2620:52:0:1306::5
api-int.example.krnl.es. IN AAAA 2620:52:0:1306::6
api-int.example.krnl.es. IN AAAA 2620:52:0:1306::7
*.apps.example.krnl.es.  IN AAAA 2620:52:0:1306::10
If you are configuring DNS for a disconnected environment on a dual stack network, be sure to include DNS entries for both IPv4 and IPv6.
Example DNS configuration for a dual stack network
host-record=api-int.hub-dual.dns.base.domain.name,192.168.126.10
host-record=api.hub-dual.dns.base.domain.name,192.168.126.10
address=/apps.hub-dual.dns.base.domain.name/192.168.126.11
dhcp-host=aa:aa:aa:aa:10:01,ocp-master-0,192.168.126.20
dhcp-host=aa:aa:aa:aa:10:02,ocp-master-1,192.168.126.21
dhcp-host=aa:aa:aa:aa:10:03,ocp-master-2,192.168.126.22
dhcp-host=aa:aa:aa:aa:10:06,ocp-installer,192.168.126.25
dhcp-host=aa:aa:aa:aa:10:07,ocp-bootstrap,192.168.126.26
host-record=api-int.hub-dual.dns.base.domain.name,2620:52:0:1306::2
host-record=api.hub-dual.dns.base.domain.name,2620:52:0:1306::2
address=/apps.hub-dual.dns.base.domain.name/2620:52:0:1306::3
dhcp-host=aa:aa:aa:aa:10:01,ocp-master-0,[2620:52:0:1306::5]
dhcp-host=aa:aa:aa:aa:10:02,ocp-master-1,[2620:52:0:1306::6]
dhcp-host=aa:aa:aa:aa:10:03,ocp-master-2,[2620:52:0:1306::7]
dhcp-host=aa:aa:aa:aa:10:06,ocp-installer,[2620:52:0:1306::8]
dhcp-host=aa:aa:aa:aa:10:07,ocp-bootstrap,[2620:52:0:1306::9]
6.3.6. Deploying a registry for hosted control planes in a disconnected environment
For development environments, deploy a small, self-hosted registry by using a Podman container. For production environments, deploy an enterprise-hosted registry, such as Red Hat Quay, Nexus, or Artifactory.
Procedure
To deploy a small registry by using Podman, complete the following steps:
As a privileged user, access the ${HOME} directory and create the following script:

#!/usr/bin/env bash

set -euo pipefail

PRIMARY_NIC=$(ls -1 /sys/class/net | grep -v podman | head -1)
export PATH=/root/bin:$PATH
export PULL_SECRET="/root/baremetal/hub/openshift_pull.json" 1

if [[ ! -f $PULL_SECRET ]];then
  echo "Pull Secret not found, exiting..."
  exit 1
fi

dnf -y install podman httpd httpd-tools jq skopeo libseccomp-devel
export IP=$(ip -o addr show $PRIMARY_NIC | head -1 | awk '{print $4}' | cut -d'/' -f1)
REGISTRY_NAME=registry.$(hostname --long)
REGISTRY_USER=dummy
REGISTRY_PASSWORD=dummy
KEY=$(echo -n $REGISTRY_USER:$REGISTRY_PASSWORD | base64)
echo "{\"auths\": {\"$REGISTRY_NAME:5000\": {\"auth\": \"$KEY\", \"email\": \"sample-email@domain.ltd\"}}}" > /root/disconnected_pull.json
mv ${PULL_SECRET} /root/openshift_pull.json.old
jq ".auths += {\"$REGISTRY_NAME:5000\": {\"auth\": \"$KEY\",\"email\": \"sample-email@domain.ltd\"}}" < /root/openshift_pull.json.old > $PULL_SECRET
mkdir -p /opt/registry/{auth,certs,data,conf}
cat <<EOF > /opt/registry/conf/config.yml
version: 0.1
log:
  fields:
    service: registry
storage:
  cache:
    blobdescriptor: inmemory
  filesystem:
    rootdirectory: /var/lib/registry
  delete:
    enabled: true
http:
  addr: :5000
  headers:
    X-Content-Type-Options: [nosniff]
health:
  storagedriver:
    enabled: true
    interval: 10s
    threshold: 3
compatibility:
  schema1:
    enabled: true
EOF
openssl req -newkey rsa:4096 -nodes -sha256 -keyout /opt/registry/certs/domain.key -x509 -days 3650 -out /opt/registry/certs/domain.crt -subj "/C=US/ST=Madrid/L=San Bernardo/O=Karmalabs/OU=Guitar/CN=$REGISTRY_NAME" -addext "subjectAltName=DNS:$REGISTRY_NAME"
cp /opt/registry/certs/domain.crt /etc/pki/ca-trust/source/anchors/
update-ca-trust extract
htpasswd -bBc /opt/registry/auth/htpasswd $REGISTRY_USER $REGISTRY_PASSWORD
podman create --name registry --net host --security-opt label=disable --replace -v /opt/registry/data:/var/lib/registry:z -v /opt/registry/auth:/auth:z -v /opt/registry/conf/config.yml:/etc/docker/registry/config.yml -e "REGISTRY_AUTH=htpasswd" -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry" -e "REGISTRY_HTTP_SECRET=ALongRandomSecretForRegistry" -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd -v /opt/registry/certs:/certs:z -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key docker.io/library/registry:latest
[ "$?" == "0" ] || !!
systemctl enable --now registry
1 Replace the location of the PULL_SECRET with the appropriate location for your setup.
Name the script file registry.sh and save it. When you run the script, it pulls in the following information:
- The registry name, based on the hypervisor hostname
- The necessary credentials and user access details
Adjust permissions by adding the execution flag as follows:
$ chmod u+x ${HOME}/registry.sh
To run the script without any parameters, enter the following command:
$ ${HOME}/registry.sh
The script starts the server. The script uses a systemd service named registry for management purposes. If you need to manage the service, you can use the following commands:
$ systemctl status registry
$ systemctl start registry
$ systemctl stop registry
The root folder for the registry is in the /opt/registry directory and contains the following subdirectories:
- certs contains the TLS certificates.
- auth contains the credentials.
- data contains the registry images.
- conf contains the registry configuration.
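To verify that the registry serves requests, you can query the standard registry v2 catalog endpoint with the credentials that the script configured; the registry name follows the registry.$(hostname --long) convention from the script:

$ curl -u dummy:dummy https://registry.$(hostname --long):5000/v2/_catalog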
6.3.7. Setting up a management cluster for hosted control planes in a disconnected environment
To set up an OpenShift Container Platform management cluster, you can use dev-scripts, or if you are based on virtual machines, you can use the kcli tool. The following instructions are specific to the kcli tool.
Procedure
Ensure that the right networks are prepared for use in the hypervisor. The networks will host both the management and hosted clusters. Enter the following kcli command:
$ kcli create network -c 192.168.126.0/24 -P dhcp=false -P dns=false -d 2620:52:0:1306::0/64 --domain dns.base.domain.name --nodhcp dual
where:
- -c specifies the CIDR for the network.
- -P dhcp=false configures the network to disable DHCP, which is handled by the dnsmasq that you configured.
- -P dns=false configures the network to disable DNS, which is also handled by the dnsmasq that you configured.
- --domain sets the domain to search.
- dns.base.domain.name is the DNS base domain name.
- dual is the name of the network that you are creating.
After the network is created, review the following output:

[root@hypershiftbm ~]# kcli list network
Listing Networks...
+---------+--------+---------------------+-------+----------------------+------+
| Network | Type   | Cidr                | Dhcp  | Domain               | Mode |
+---------+--------+---------------------+-------+----------------------+------+
| default | routed | 192.168.122.0/24    | True  | default              | nat  |
| ipv4    | routed | 2620:52:0:1306::/64 | False | dns.base.domain.name | nat  |
| ipv4    | routed | 192.168.125.0/24    | False | dns.base.domain.name | nat  |
| ipv6    | routed | 2620:52:0:1305::/64 | False | dns.base.domain.name | nat  |
+---------+--------+---------------------+-------+----------------------+------+

[root@hypershiftbm ~]# kcli info network ipv6
Providing information about network ipv6...
cidr: 2620:52:0:1306::/64
dhcp: false
domain: dns.base.domain.name
mode: nat
plan: kvirt
type: routed
Ensure that the pull secret and kcli plan files are in place so that you can deploy the OpenShift Container Platform management cluster:
- Confirm that the pull secret is in the same folder as the kcli plan, and that the pull secret file is named openshift_pull.json.
- Add the kcli plan, which contains the OpenShift Container Platform definition, in the mgmt-compact-hub-dual.yaml file. Ensure that you update the file contents to match your environment:

plan: hub-dual
force: true
version: stable
tag: "4.x.y-x86_64" 1
cluster: "hub-dual"
dualstack: true
domain: dns.base.domain.name
api_ip: 192.168.126.10
ingress_ip: 192.168.126.11
service_networks:
- 172.30.0.0/16
- fd02::/112
cluster_networks:
- 10.132.0.0/14
- fd01::/48
disconnected_url: registry.dns.base.domain.name:5000
disconnected_update: true
disconnected_user: dummy
disconnected_password: dummy
disconnected_operators_version: v4.14
disconnected_operators:
- name: metallb-operator
- name: lvms-operator
  channels:
  - name: stable-4.14
disconnected_extra_images:
- quay.io/user-name/trbsht:latest
- quay.io/user-name/hypershift:BMSelfManage-v4.14-rc-v3
- registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.10
disk_size: 200
extra_disks: [200]
memory: 48000
numcpus: 16
ctlplanes: 3
workers: 0
manifests: extra-manifests
metal3: true
network: dual
users_dev: developer
users_devpassword: developer
users_admin: admin
users_adminpassword: admin
metallb_pool: dual-virtual-network
metallb_ranges:
- 192.168.126.150-192.168.126.190
metallb_autoassign: true
apps:
- users
- lvms-operator
- metallb-operator
vmrules:
- hub-bootstrap:
    nets:
    - name: ipv6
      mac: aa:aa:aa:aa:10:07
- hub-ctlplane-0:
    nets:
    - name: ipv6
      mac: aa:aa:aa:aa:10:01
- hub-ctlplane-1:
    nets:
    - name: ipv6
      mac: aa:aa:aa:aa:10:02
- hub-ctlplane-2:
    nets:
    - name: ipv6
      mac: aa:aa:aa:aa:10:03
1 Replace 4.x.y with the supported OpenShift Container Platform version that you want to use.
To provision the management cluster, enter the following command:
$ kcli create cluster openshift --pf mgmt-compact-hub-dual.yaml
Next steps
Next, configure the web server.
6.3.8. Configuring the web server for hosted control planes in a disconnected environment
You need to configure an additional web server to host the Red Hat Enterprise Linux CoreOS (RHCOS) images that are associated with the OpenShift Container Platform release that you are deploying as a hosted cluster.
Procedure
To configure the web server, complete the following steps:
Extract the openshift-install binary from the OpenShift Container Platform release that you want to use by entering the following command:
$ oc adm -a ${LOCAL_SECRET_JSON} release extract --command=openshift-install "${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}-${ARCHITECTURE}"
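The command assumes that the environment variables describe your mirror registry and target release; the following values are illustrative placeholders:

$ export LOCAL_SECRET_JSON=/root/baremetal/hub/openshift_pull.json
$ export LOCAL_REGISTRY=registry.dns.base.domain.name:5000
$ export LOCAL_REPOSITORY=openshift/release
$ export OCP_RELEASE=4.14.0
$ export ARCHITECTURE=x86_64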
Run the following script. The script creates a folder in the /opt/srv directory. The folder contains the RHCOS images to provision the worker nodes.

#!/bin/bash

WEBSRV_FOLDER=/opt/srv
ROOTFS_IMG_URL="$(./openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.metal.formats.pxe.rootfs.location')" 1
LIVE_ISO_URL="$(./openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.metal.formats.iso.disk.location')" 2

mkdir -p ${WEBSRV_FOLDER}/images
curl -Lk ${ROOTFS_IMG_URL} -o ${WEBSRV_FOLDER}/images/${ROOTFS_IMG_URL##*/}
curl -Lk ${LIVE_ISO_URL} -o ${WEBSRV_FOLDER}/images/${LIVE_ISO_URL##*/}
chmod -R 755 ${WEBSRV_FOLDER}/*

## Run Webserver
podman ps --noheading | grep -q websrv-ai
if [[ $? == 0 ]];then
    echo "Launching Registry pod..."
    /usr/bin/podman run --name websrv-ai --net host -v /opt/srv:/usr/local/apache2/htdocs:z quay.io/alosadag/httpd:p8080
fi

1 The ROOTFS_IMG_URL value is the location of the RHCOS rootfs image in the release stream.
2 The LIVE_ISO_URL value is the location of the RHCOS live ISO in the release stream.
After the download is completed, a container runs to host the images on a web server. The container uses a variation of the official HTTPd image, which also enables it to work with IPv6 networks.
6.3.9. Configuring image mirroring for hosted control planes in a disconnected environment
Image mirroring is the process of fetching images from external registries, such as registry.redhat.io or quay.io, and storing them in your private registry.
In the following procedures, the oc-mirror tool is used, which is a binary that uses the ImageSetConfiguration object. In the file, you can specify the following information:
- The OpenShift Container Platform versions to mirror. The versions are in quay.io.
- The additional Operators to mirror. Select packages individually.
- The extra images that you want to add to the repository.
Prerequisites
Ensure that the registry server is running before you start the mirroring process.
Procedure
To configure image mirroring, complete the following steps:
- Ensure that your ${HOME}/.docker/config.json file is updated with the registries that you are going to mirror from and with the private registry that you plan to push the images to.
- By using the following example, create an ImageSetConfiguration object to use for mirroring. Replace values as needed to match your environment:

  apiVersion: mirror.openshift.io/v1alpha2
  kind: ImageSetConfiguration
  storageConfig:
    registry:
      imageURL: registry.<dns.base.domain.name>:5000/openshift/release/metadata:latest 1
  mirror:
    platform:
      channels:
      - name: candidate-4.17
        minVersion: 4.x.y-build 2
        maxVersion: 4.x.y-build 3
        type: ocp
      kubeVirtContainer: true 4
      graph: true
    additionalImages:
    - name: quay.io/karmab/origin-keepalived-ipfailover:latest
    - name: quay.io/karmab/kubectl:latest
    - name: quay.io/karmab/haproxy:latest
    - name: quay.io/karmab/mdns-publisher:latest
    - name: quay.io/karmab/origin-coredns:latest
    - name: quay.io/karmab/curl:latest
    - name: quay.io/karmab/kcli:latest
    - name: quay.io/user-name/trbsht:latest
    - name: quay.io/user-name/hypershift:BMSelfManage-v4.17
    - name: registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.10
    operators:
    - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.17
      packages:
      - name: lvms-operator
      - name: local-storage-operator
      - name: odf-csi-addons-operator
      - name: odf-operator
      - name: mcg-operator
      - name: ocs-operator
      - name: metallb-operator
      - name: kubevirt-hyperconverged 5
1 Replace <dns.base.domain.name> with the DNS base domain name.
2 3 Replace 4.x.y-build with the supported OpenShift Container Platform version that you want to use.
4 Set this optional flag to true if you want to also mirror the container disk image for the Red Hat Enterprise Linux CoreOS (RHCOS) boot image for the KubeVirt provider. This flag is available with oc-mirror v2 only.
5 For deployments that use the KubeVirt provider, include this line.
Start the mirroring process by entering the following command:
$ oc-mirror --v2 --config imagesetconfig.yaml docker://${REGISTRY}
After the mirroring process is finished, you have a new folder named oc-mirror-workspace/results-XXXXXX/, which contains the IDMS and the catalog sources to apply on the hosted cluster.
Mirror the nightly or CI versions of OpenShift Container Platform by configuring the imagesetconfig.yaml file as follows:

apiVersion: mirror.openshift.io/v2alpha1
kind: ImageSetConfiguration
mirror:
  platform:
    graph: true
    release: registry.ci.openshift.org/ocp/release:4.x.y-build 1
    kubeVirtContainer: true 2
# ...
1 Replace 4.x.y-build with the supported OpenShift Container Platform version that you want to use.
2 Set this optional flag to true if you want to also mirror the container disk image for the Red Hat Enterprise Linux CoreOS (RHCOS) boot image for the KubeVirt provider. This flag is available with oc-mirror v2 only.
Apply the changes by entering the following command:
$ oc-mirror --v2 --config imagesetconfig.yaml docker://${REGISTRY}
- Mirror the latest multicluster engine Operator images by following the steps in Install on disconnected networks.
6.3.10. Applying objects in the management cluster
After the mirroring process is complete, you need to apply two objects in the management cluster:
- ImageContentSourcePolicy (ICSP) or ImageDigestMirrorSet (IDMS)
- Catalog sources
When you use the oc-mirror tool, the output artifacts are in a folder named oc-mirror-workspace/results-XXXXXX/.
The ICSP or IDMS initiates a MachineConfig change that does not restart your nodes but restarts the kubelet on each of them. After the nodes are marked as READY, you need to apply the newly generated catalog sources.
The catalog sources initiate actions in the openshift-marketplace Operator, such as downloading the catalog image and processing it to retrieve all the PackageManifests that are included in that image.
Procedure
To check the new sources, run the following command by using the new CatalogSource as a source:
$ oc get packagemanifest
To apply the artifacts, complete the following steps:
Create the ICSP or IDMS artifacts by entering the following command:
$ oc apply -f oc-mirror-workspace/results-XXXXXX/imageContentSourcePolicy.yaml
Wait for the nodes to become ready, and then enter the following command:
$ oc apply -f catalogSource-XXXXXXXX-index.yaml
Mirror the OLM catalogs and configure the hosted cluster to point to the mirror.
When you use the management (default) OLMCatalogPlacement mode, the image stream that is used for OLM catalogs is not automatically amended with override information from the ICSP on the management cluster.
- If the OLM catalogs are properly mirrored to an internal registry by using the original name and tag, add the hypershift.openshift.io/olm-catalogs-is-registry-overrides annotation to the HostedCluster resource. The format is "sr1=dr1,sr2=dr2", where the source registry string is a key and the destination registry is a value.
- To bypass the OLM catalog image stream mechanism, use the following four annotations on the HostedCluster resource to directly specify the addresses of the four images to use for OLM Operator catalogs:
  - hypershift.openshift.io/certified-operators-catalog-image
  - hypershift.openshift.io/community-operators-catalog-image
  - hypershift.openshift.io/redhat-marketplace-catalog-image
  - hypershift.openshift.io/redhat-operators-catalog-image
In this case, the image stream is not created, and you must update the value of the annotations when the internal mirror is refreshed to pull in Operator updates.
Next steps
Deploy the multicluster engine Operator by completing the steps in Deploying multicluster engine Operator for a disconnected installation of hosted control planes.
6.3.11. Deploying multicluster engine Operator for a disconnected installation of hosted control planes
The multicluster engine for Kubernetes Operator plays a crucial role in deploying clusters across providers. If you do not have multicluster engine Operator installed, review the multicluster engine Operator documentation to understand the prerequisites and steps to install it.
6.3.11.1. Deploying AgentServiceConfig resources
The AgentServiceConfig custom resource is an essential component of the Assisted Service add-on that is part of multicluster engine Operator. It is responsible for bare metal cluster deployment. When the add-on is enabled, you deploy the AgentServiceConfig resource to configure the add-on.
In addition to configuring the AgentServiceConfig resource, you need to include additional config maps to ensure that multicluster engine Operator functions properly in a disconnected environment.
Procedure
Configure the custom registries by adding the following config map, which contains the disconnected details to customize the deployment:
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-registries
  namespace: multicluster-engine
  labels:
    app: assisted-service
data:
  ca-bundle.crt: |
    -----BEGIN CERTIFICATE-----
    -----END CERTIFICATE-----
  registries.conf: |
    unqualified-search-registries = ["registry.access.redhat.com", "docker.io"]

    [[registry]]
    prefix = ""
    location = "registry.redhat.io/openshift4"
    mirror-by-digest-only = true

    [[registry.mirror]]
    location = "registry.dns.base.domain.name:5000/openshift4" 1

    [[registry]]
    prefix = ""
    location = "registry.redhat.io/rhacm2"
    mirror-by-digest-only = true
    # ...
# ...
1 Replace dns.base.domain.name with the DNS base domain name.
The object contains two fields:
- Custom CAs: This field contains the Certificate Authorities (CAs) that are loaded into the various processes of the deployment.
- Registries: The registries.conf field contains information about images and namespaces that need to be consumed from a mirror registry rather than the original source registry.
Configure the Assisted Service by adding the AgentServiceConfig object, as shown in the following example:

apiVersion: agent-install.openshift.io/v1beta1
kind: AgentServiceConfig
metadata:
  annotations:
    unsupported.agent-install.openshift.io/assisted-service-configmap: assisted-service-config 1
  name: agent
  namespace: multicluster-engine
spec:
  mirrorRegistryRef:
    name: custom-registries 2
  databaseStorage:
    storageClassName: lvms-vg1
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
  filesystemStorage:
    storageClassName: lvms-vg1
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 20Gi
  osImages: 3
  - cpuArchitecture: x86_64 4
    openshiftVersion: "4.14"
    rootFSUrl: http://registry.dns.base.domain.name:8080/images/rhcos-414.92.202308281054-0-live-rootfs.x86_64.img 5
    url: http://registry.dns.base.domain.name:8080/images/rhcos-414.92.202308281054-0-live.x86_64.iso
    version: 414.92.202308281054-0
  - cpuArchitecture: x86_64
    openshiftVersion: "4.15"
    rootFSUrl: http://registry.dns.base.domain.name:8080/images/rhcos-415.92.202403270524-0-live-rootfs.x86_64.img
    url: http://registry.dns.base.domain.name:8080/images/rhcos-415.92.202403270524-0-live.x86_64.iso
    version: 415.92.202403270524-0
1 The metadata.annotations["unsupported.agent-install.openshift.io/assisted-service-configmap"] annotation references the config map name that the Operator consumes to customize behavior.
2 The spec.mirrorRegistryRef.name annotation points to the config map that contains disconnected registry information that the Assisted Service Operator consumes. This config map adds those resources during the deployment process.
3 The spec.osImages field contains different versions available for deployment by this Operator. This field is mandatory. This example assumes that you already downloaded the RootFS and LiveISO files.
4 Add a cpuArchitecture subsection for every OpenShift Container Platform release that you want to deploy. In this example, cpuArchitecture subsections are included for 4.14 and 4.15.
5 In the rootFSUrl and url fields, replace dns.base.domain.name with the DNS base domain name.
Deploy all of the objects by concatenating them into a single file and applying them to the management cluster. To do so, enter the following command:
$ oc apply -f agentServiceConfig.yaml
The command triggers two pods.
Example output
assisted-image-service-0                               1/1     Running   2          11d
assisted-service-668b49548-9m7xw                       2/2     Running   5          11d
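If you want to confirm that both pods reached the Running state, a simple filter over the multicluster-engine namespace is usually enough; the grep pattern here is illustrative:

$ oc get pods -n multicluster-engine | grep assisted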
Next steps
Configure TLS certificates by completing the steps in Configuring TLS certificates for a disconnected installation of hosted control planes.
6.3.12. Configuring TLS certificates for a disconnected installation of hosted control planes
To ensure proper function in a disconnected deployment, you need to configure the registry CA certificates in the management cluster and the worker nodes for the hosted cluster.
6.3.12.1. Adding the registry CA to the management cluster
To add the registry CA to the management cluster, complete the following steps.
Procedure
Create a config map that resembles the following example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: <config_map_name> 1
  namespace: <config_map_namespace> 2
data: 3
  <registry_name>..<port>: | 4
    -----BEGIN CERTIFICATE-----
    -----END CERTIFICATE-----
  <registry_name>..<port>: |
    -----BEGIN CERTIFICATE-----
    -----END CERTIFICATE-----
  <registry_name>..<port>: |
    -----BEGIN CERTIFICATE-----
    -----END CERTIFICATE-----
- 1
- Specify the name of the config map.
- 2
- Specify the namespace for the config map.
- 3
- In the data field, specify the registry names and the registry certificate content. Replace <port> with the port where the registry server is running; for example, 5000.
- 4
- Ensure that the data in the config map is defined by using | only instead of other methods, such as |-. If you use other methods, issues can occur when the pod reads the certificates.
Patch the cluster-wide image.config.openshift.io object to include the following specification:

spec:
  additionalTrustedCA:
    name: registry-config
As a result of this patch, the control plane nodes can retrieve images from the private registry and the HyperShift Operator can extract the OpenShift Container Platform payload for hosted cluster deployments.
The process to patch the object might take several minutes to complete.
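A merge patch is one possible way to apply that specification; adjust the approach if you manage the object through GitOps or another mechanism:

$ oc patch image.config.openshift.io/cluster --type=merge \
  -p '{"spec":{"additionalTrustedCA":{"name":"registry-config"}}}'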
6.3.12.2. Adding the registry CA to the worker nodes for the hosted cluster
For the data plane workers in the hosted cluster to retrieve images from the private registry, you must add the registry CA to the worker nodes.
Procedure
In the HostedCluster manifest, add the spec.additionalTrustBundle specification, as shown in the following example:

spec:
  additionalTrustBundle:
    name: user-ca-bundle 1
- 1
- The
user-ca-bundle
entry is a config map that you create in the next step.
In the same namespace where the HostedCluster object is created, create the user-ca-bundle config map. The config map resembles the following example:

apiVersion: v1
data:
  ca-bundle.crt: |
    // Registry1 CA
    -----BEGIN CERTIFICATE-----
    -----END CERTIFICATE-----
    // Registry2 CA
    -----BEGIN CERTIFICATE-----
    -----END CERTIFICATE-----
    // Registry3 CA
    -----BEGIN CERTIFICATE-----
    -----END CERTIFICATE-----
kind: ConfigMap
metadata:
  name: user-ca-bundle
  namespace: <hosted_cluster_namespace> 1
- 1
- Specify the namespace where the
HostedCluster
object is created.
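If you keep the registry CA in a local PEM file, you can generate this config map instead of writing the YAML by hand. The file path in this sketch is an assumption:

$ oc create configmap user-ca-bundle \
  --from-file=ca-bundle.crt=/tmp/registry-ca.pem \
  -n <hosted_cluster_namespace>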
6.3.13. Creating a hosted cluster on bare metal
A hosted cluster is an OpenShift Container Platform cluster with its control plane and API endpoint hosted on a management cluster. The hosted cluster includes the control plane and its corresponding data plane.
6.3.13.1. Deploying hosted cluster objects
Typically, the HyperShift Operator creates the HostedControlPlane
namespace. However, in this case, you want to include all the objects before the HyperShift Operator begins to reconcile the HostedCluster
object. Then, when the Operator starts the reconciliation process, it can find all of the objects in place.
Procedure
Create a YAML file with the following information about the namespaces:
---
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: null
  name: <hosted_cluster_namespace>-<hosted_cluster_name> 1
spec: {}
status: {}
---
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: null
  name: <hosted_cluster_namespace> 2
spec: {}
status: {}
- 1
- This namespace hosts the hosted control plane. Replace <hosted_cluster_namespace> with the name of your hosted cluster namespace and <hosted_cluster_name> with the name of your hosted cluster.
- 2
- This namespace holds the HostedCluster object. Replace <hosted_cluster_namespace> with the name of your hosted cluster namespace.
Create a YAML file with the following information about the config maps and secrets to include in the
HostedCluster
deployment:

---
apiVersion: v1
data:
  ca-bundle.crt: |
    -----BEGIN CERTIFICATE-----
    -----END CERTIFICATE-----
kind: ConfigMap
metadata:
  name: user-ca-bundle
  namespace: <hosted_cluster_namespace> 1
---
apiVersion: v1
data:
  .dockerconfigjson: xxxxxxxxx
kind: Secret
metadata:
  creationTimestamp: null
  name: <hosted_cluster_name>-pull-secret 2
  namespace: <hosted_cluster_namespace> 3
---
apiVersion: v1
kind: Secret
metadata:
  name: sshkey-cluster-<hosted_cluster_name> 4
  namespace: <hosted_cluster_namespace> 5
stringData:
  id_rsa.pub: ssh-rsa xxxxxxxxx
---
apiVersion: v1
data:
  key: nTPtVBEt03owkrKhIdmSW8jrWRxU57KO/fnZa8oaG0Y=
kind: Secret
metadata:
  creationTimestamp: null
  name: <hosted_cluster_name>-etcd-encryption-key 6
  namespace: <hosted_cluster_namespace> 7
type: Opaque
- 1 3 5 7
- Replace <hosted_cluster_namespace> with the name of your hosted cluster namespace.
- 2 4 6
- Replace <hosted_cluster_name> with the name of your hosted cluster.
Create a YAML file that contains the RBAC roles so that Assisted Service agents can be in the same
HostedControlPlane
namespace as the hosted control plane and still be managed by the cluster API:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: null
  name: capi-provider-role
  namespace: <hosted_cluster_namespace>-<hosted_cluster_name> 1 2
rules:
- apiGroups:
  - agent-install.openshift.io
  resources:
  - agents
  verbs:
  - '*'
- 1 2
- Replace <hosted_cluster_namespace> with the name of your hosted cluster namespace and <hosted_cluster_name> with the name of your hosted cluster.
Create a YAML file with information about the
HostedCluster
object, replacing values as necessary:

apiVersion: hypershift.openshift.io/v1beta1
kind: HostedCluster
metadata:
  name: <hosted_cluster_name> 1
  namespace: <hosted_cluster_namespace> 2
spec:
  additionalTrustBundle:
    name: "user-ca-bundle"
  olmCatalogPlacement: guest
  imageContentSources: 3
  - source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
    mirrors:
    - registry.<dns.base.domain.name>:5000/openshift/release 4
  - source: quay.io/openshift-release-dev/ocp-release
    mirrors:
    - registry.<dns.base.domain.name>:5000/openshift/release-images 5
  - mirrors:
  ...
  ...
  autoscaling: {}
  controllerAvailabilityPolicy: SingleReplica
  dns:
    baseDomain: <dns.base.domain.name> 6
  etcd:
    managed:
      storage:
        persistentVolume:
          size: 8Gi
        restoreSnapshotURL: null
        type: PersistentVolume
    managementType: Managed
  fips: false
  networking:
    clusterNetwork:
    - cidr: 10.132.0.0/14
    - cidr: fd01::/48
    networkType: OVNKubernetes
    serviceNetwork:
    - cidr: 172.31.0.0/16
    - cidr: fd02::/112
  platform:
    agent:
      agentNamespace: <hosted_cluster_namespace>-<hosted_cluster_name> 7 8
    type: Agent
  pullSecret:
    name: <hosted_cluster_name>-pull-secret 9
  release:
    image: registry.<dns.base.domain.name>:5000/openshift/release-images:4.x.y-x86_64 10 11
  secretEncryption:
    aescbc:
      activeKey:
        name: <hosted_cluster_name>-etcd-encryption-key 12
    type: aescbc
  services:
  - service: APIServer
    servicePublishingStrategy:
      type: LoadBalancer
  - service: OAuthServer
    servicePublishingStrategy:
      type: Route
  - service: OIDC
    servicePublishingStrategy:
      type: Route
  - service: Konnectivity
    servicePublishingStrategy:
      type: Route
  - service: Ignition
    servicePublishingStrategy:
      type: Route
  sshKey:
    name: sshkey-cluster-<hosted_cluster_name> 13
status:
  controlPlaneEndpoint:
    host: ""
    port: 0
- 1 7 9 12 13
- Replace
<hosted_cluster_name>
with your hosted cluster. - 2 8
- Replace
<hosted_cluster_namespace>
with the name of your hosted cluster namespace. - 3
- The
imageContentSources
section contains mirror references for user workloads within the hosted cluster. - 4 5 6 10
- Replace
<dns.base.domain.name>
with the DNS base domain name. - 11
- Replace
4.x.y
with the supported OpenShift Container Platform version you want to use.
Add an annotation in the HostedCluster object that points to the HyperShift Operator release in the OpenShift Container Platform release:

Obtain the image payload by entering the following command:
$ oc adm release info registry.<dns.base.domain.name>:5000/openshift-release-dev/ocp-release:4.x.y-x86_64 | grep hypershift
where <dns.base.domain.name> is the DNS base domain name and 4.x.y is the supported OpenShift Container Platform version you want to use.

Example output
hypershift sha256:31149e3e5f8c5e5b5b100ff2d89975cf5f7a73801b2c06c639bf6648766117f8
Check the digest by pulling the image from the OpenShift Container Platform images namespace by entering the following command:

$ podman pull registry.<dns.base.domain.name>:5000/openshift-release-dev/ocp-v4.0-art-dev@sha256:31149e3e5f8c5e5b5b100ff2d89975cf5f7a73801b2c06c639bf6648766117f8
where
<dns.base.domain.name>
is the DNS base domain name.Example output
podman pull registry.dns.base.domain.name:5000/openshift/release@sha256:31149e3e5f8c5e5b5b100ff2d89975cf5f7a73801b2c06c639bf6648766117f8
Trying to pull registry.dns.base.domain.name:5000/openshift/release@sha256:31149e3e5f8c5e5b5b100ff2d89975cf5f7a73801b2c06c639bf6648766117f8...
Getting image source signatures
Copying blob d8190195889e skipped: already exists
Copying blob c71d2589fba7 skipped: already exists
Copying blob d4dc6e74b6ce skipped: already exists
Copying blob 97da74cc6d8f skipped: already exists
Copying blob b70007a560c9 done
Copying config 3a62961e6e done
Writing manifest to image destination
Storing signatures
3a62961e6ed6edab46d5ec8429ff1f41d6bb68de51271f037c6cb8941a007fde
The release image that is set in the
HostedCluster
object must use the digest rather than the tag; for example,quay.io/openshift-release-dev/ocp-release@sha256:e3ba11bd1e5e8ea5a0b36a75791c90f29afb0fdbe4125be4e48f69c76a5c47a0
.
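One way to resolve a tag to its digest, assuming that your oc version supports JSON output for release info and that jq is installed, is:

$ oc adm release info registry.<dns.base.domain.name>:5000/openshift/release-images:4.x.y-x86_64 -o json | jq -r '.digest'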
Create all of the objects that you defined in the YAML files by concatenating them into a file and applying them against the management cluster. To do so, enter the following command:
$ oc apply -f 01-4.14-hosted_cluster-nodeport.yaml
Example output
NAME                                                  READY   STATUS    RESTARTS   AGE
capi-provider-5b57dbd6d5-pxlqc                        1/1     Running   0          3m57s
catalog-operator-9694884dd-m7zzv                      2/2     Running   0          93s
cluster-api-f98b9467c-9hfrq                           1/1     Running   0          3m57s
cluster-autoscaler-d7f95dd5-d8m5d                     1/1     Running   0          93s
cluster-image-registry-operator-5ff5944b4b-648ht      1/2     Running   0          93s
cluster-network-operator-77b896ddc-wpkq8              1/1     Running   0          94s
cluster-node-tuning-operator-84956cd484-4hfgf         1/1     Running   0          94s
cluster-policy-controller-5fd8595d97-rhbwf            1/1     Running   0          95s
cluster-storage-operator-54dcf584b5-xrnts             1/1     Running   0          93s
cluster-version-operator-9c554b999-l22s7              1/1     Running   0          95s
control-plane-operator-6fdc9c569-t7hr4                1/1     Running   0          3m57s
csi-snapshot-controller-785c6dc77c-8ljmr              1/1     Running   0          77s
csi-snapshot-controller-operator-7c6674bc5b-d9dtp     1/1     Running   0          93s
csi-snapshot-webhook-5b8584875f-2492j                 1/1     Running   0          77s
dns-operator-6874b577f-9tc6b                          1/1     Running   0          94s
etcd-0                                                3/3     Running   0          3m39s
hosted-cluster-config-operator-f5cf5c464-4nmbh        1/1     Running   0          93s
ignition-server-6b689748fc-zdqzk                      1/1     Running   0          95s
ignition-server-proxy-54d4bb9b9b-6zkg7                1/1     Running   0          95s
ingress-operator-6548dc758b-f9gtg                     1/2     Running   0          94s
konnectivity-agent-7767cdc6f5-tw782                   1/1     Running   0          95s
kube-apiserver-7b5799b6c8-9f5bp                       4/4     Running   0          3m7s
kube-controller-manager-5465bc4dd6-zpdlk              1/1     Running   0          44s
kube-scheduler-5dd5f78b94-bbbck                       1/1     Running   0          2m36s
machine-approver-846c69f56-jxvfr                      1/1     Running   0          92s
oauth-openshift-79c7bf44bf-j975g                      2/2     Running   0          62s
olm-operator-767f9584c-4lcl2                          2/2     Running   0          93s
openshift-apiserver-5d469778c6-pl8tj                  3/3     Running   0          2m36s
openshift-controller-manager-6475fdff58-hl4f7         1/1     Running   0          95s
openshift-oauth-apiserver-dbbc5cc5f-98574             2/2     Running   0          95s
openshift-route-controller-manager-5f6997b48f-s9vdc   1/1     Running   0          95s
packageserver-67c87d4d4f-kl7qh                        2/2     Running   0          93s
When the hosted cluster is available, the output looks like the following example.
Example output
NAMESPACE   NAME          VERSION   KUBECONFIG                PROGRESS   AVAILABLE   PROGRESSING   MESSAGE
clusters    hosted-dual             hosted-admin-kubeconfig   Partial    True        False         The hosted control plane is available
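To watch the hosted cluster status yourself, a query similar to the following is typically used; the clusters namespace is taken from this example:

$ oc get hostedclusters -n clusters -w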
6.3.13.2. Creating a NodePool object for the hosted cluster
A NodePool
is a scalable set of worker nodes that is associated with a hosted cluster. NodePool
machine architectures remain consistent within a specific pool and are independent of the machine architecture of the control plane.
Procedure
Create a YAML file with the following information about the
NodePool
object, replacing values as necessary:

apiVersion: hypershift.openshift.io/v1beta1
kind: NodePool
metadata:
  creationTimestamp: null
  name: <hosted_cluster_name> 1
  namespace: <hosted_cluster_namespace> 2
spec:
  arch: amd64
  clusterName: <hosted_cluster_name>
  management:
    autoRepair: false 3
    upgradeType: InPlace 4
  nodeDrainTimeout: 0s
  platform:
    type: Agent
  release:
    image: registry.<dns.base.domain.name>:5000/openshift/release-images:4.x.y-x86_64 5
  replicas: 2 6
status:
  replicas: 2
- 1
- Replace <hosted_cluster_name> with the name of your hosted cluster.
- 2
- Replace <hosted_cluster_namespace> with the name of your hosted cluster namespace.
- 3
- The autoRepair field is set to false so that the node is not re-created if it is removed.
- 4
- The upgradeType field is set to InPlace, which indicates that the same bare metal node is reused during an upgrade.
- 5
- All of the nodes included in this NodePool are based on the following OpenShift Container Platform version: 4.x.y-x86_64. Replace the <dns.base.domain.name> value with your DNS base domain name and the 4.x.y value with the supported OpenShift Container Platform version you want to use.
- 6
- You can set the replicas value to 2 to create two node pool replicas in your hosted cluster.
Create the
NodePool
object by entering the following command:

$ oc apply -f 02-nodepool.yaml
Example output
NAMESPACE   NAME          CLUSTER   DESIRED NODES   CURRENT NODES   AUTOSCALING   AUTOREPAIR   VERSION        UPDATINGVERSION   UPDATINGCONFIG   MESSAGE
clusters    hosted-dual   hosted    0                               False         False        4.x.y-x86_64
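You can reproduce this output at any time with a standard query; the clusters namespace is taken from this example:

$ oc get nodepools -n clusters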
6.3.13.3. Creating an InfraEnv resource for the hosted cluster
The InfraEnv
resource is an Assisted Service object that includes essential details, such as the pullSecretRef
and the sshAuthorizedKey
. Those details are used to create the Red Hat Enterprise Linux CoreOS (RHCOS) boot image that is customized for the hosted cluster.
You can host more than one InfraEnv resource, and each one can adopt certain types of hosts. For example, you might divide your server farm between hosts that have greater RAM capacity and hosts that have less, creating one InfraEnv resource for each type.
Procedure
Create a YAML file with the following information about the
InfraEnv
resource, replacing values as necessary:

apiVersion: agent-install.openshift.io/v1beta1
kind: InfraEnv
metadata:
  name: <hosted_cluster_name>
  namespace: <hosted_cluster_namespace>-<hosted_cluster_name> 1 2
spec:
  pullSecretRef: 3
    name: pull-secret
  sshAuthorizedKey: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDk7ICaUE+/k4zTpxLk4+xFdHi4ZuDi5qjeF52afsNkw0w/glILHhwpL5gnp5WkRuL8GwJuZ1VqLC9EKrdmegn4MrmUlq7WTsP0VFOZFBfq2XRUxo1wrRdor2z0Bbh93ytR+ZsDbbLlGngXaMa0Vbt+z74FqlcajbHTZ6zBmTpBVq5RHtDPgKITdpE1fongp7+ZXQNBlkaavaqv8bnyrP4BWahLP4iO9/xJF9lQYboYwEEDzmnKLMW1VtCE6nJzEgWCufACTbxpNS7GvKtoHT/OVzw8ArEXhZXQUS1UY8zKsX2iXwmyhw5Sj6YboA8WICs4z+TrFP89LmxXY0j6536TQFyRz1iB4WWvCbH5n6W+ABV2e8ssJB1AmEy8QYNwpJQJNpSxzoKBjI73XxvPYYC/IjPFMySwZqrSZCkJYqQ023ySkaQxWZT7in4KeMu7eS2tC+Kn4deJ7KwwUycx8n6RHMeD8Qg9flTHCv3gmab8JKZJqN3hW1D378JuvmIX4V0= 4
- 1
- Replace <hosted_cluster_name> with the name of your hosted cluster.
- 2
- Replace <hosted_cluster_namespace> with the name of your hosted cluster namespace.
- 3
- The pullSecretRef field refers to the secret in the same namespace as the InfraEnv resource that contains the pull secret.
- 4
- The sshAuthorizedKey field represents the SSH public key that is placed in the boot image. The SSH key allows access to the worker nodes as the core user.
Create the
InfraEnv
resource by entering the following command:

$ oc apply -f 03-infraenv.yaml
Example output
NAMESPACE              NAME     ISO CREATED AT
clusters-hosted-dual   hosted   2023-09-11T15:14:10Z
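If you need the URL of the generated discovery ISO, for example to confirm that the image service rendered it, the InfraEnv status exposes it. This jsonpath query is a sketch:

$ oc get infraenv <hosted_cluster_name> -n <hosted_cluster_namespace>-<hosted_cluster_name> -o jsonpath='{.status.isoDownloadURL}'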
6.3.13.4. Creating worker nodes for the hosted cluster
If you are working on a bare metal platform, creating worker nodes is crucial to ensure that the details in the BareMetalHost
are correctly configured.
If you are working with virtual machines, you can complete the following steps to create empty worker nodes for the Metal3 Operator to consume. To do so, you use the kcli
tool.
Procedure
If this is not your first attempt to create worker nodes, you must first delete your previous setup. To do so, delete the plan by entering the following command:
$ kcli delete plan <hosted_cluster_name> 1
- 1
- Replace <hosted_cluster_name> with the name of your hosted cluster.
- When you are prompted to confirm whether you want to delete the plan, type y.
- Confirm that you see a message stating that the plan was deleted.
Enter the following command to create the first virtual machine:
$ kcli create vm \
  -P start=False \ 1
  -P uefi_legacy=true \ 2
  -P plan=<hosted_cluster_name> \ 3
  -P memory=8192 -P numcpus=16 \ 4
  -P disks=[200,200] \ 5
  -P nets=["{\"name\": \"<network>\", \"mac\": \"aa:aa:aa:aa:11:01\"}"] \ 6
  -P uuid=aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa1101 \
  -P name=<hosted_cluster_name>-worker0 7
- 1
- Include start=False if you do not want the virtual machine (VM) to automatically start upon creation.
- 2
- Include uefi_legacy=true to indicate that you will use UEFI legacy boot to ensure compatibility with previous UEFI implementations.
- 3
- Replace <hosted_cluster_name> with the name of your hosted cluster. The plan=<hosted_cluster_name> statement indicates the plan name, which identifies a group of machines as a cluster.
- 4
- Include the memory=8192 and numcpus=16 parameters to specify the resources for the VM, including the RAM and CPU.
- 5
- Include disks=[200,200] to indicate that you are creating two thin-provisioned disks in the VM.
- 6
- Include nets=[{"name": "<network>", "mac": "aa:aa:aa:aa:11:01"}] to provide network details, including the network name to connect to, the type of network (ipv4, ipv6, or dual), and the MAC address of the primary interface.
- 7
- Replace <hosted_cluster_name> with the name of your hosted cluster.
Enter the following command to create the second virtual machine:
$ kcli create vm \
  -P start=False \ 1
  -P uefi_legacy=true \ 2
  -P plan=<hosted_cluster_name> \ 3
  -P memory=8192 -P numcpus=16 \ 4
  -P disks=[200,200] \ 5
  -P nets=["{\"name\": \"<network>\", \"mac\": \"aa:aa:aa:aa:11:02\"}"] \ 6
  -P uuid=aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa1102 \
  -P name=<hosted_cluster_name>-worker1 7
- 1
- Include start=False if you do not want the virtual machine (VM) to automatically start upon creation.
- 2
- Include uefi_legacy=true to indicate that you will use UEFI legacy boot to ensure compatibility with previous UEFI implementations.
- 3
- Replace <hosted_cluster_name> with the name of your hosted cluster. The plan=<hosted_cluster_name> statement indicates the plan name, which identifies a group of machines as a cluster.
- 4
- Include the memory=8192 and numcpus=16 parameters to specify the resources for the VM, including the RAM and CPU.
- 5
- Include disks=[200,200] to indicate that you are creating two thin-provisioned disks in the VM.
- 6
- Include nets=[{"name": "<network>", "mac": "aa:aa:aa:aa:11:02"}] to provide network details, including the network name to connect to, the type of network (ipv4, ipv6, or dual), and the MAC address of the primary interface.
- 7
- Replace <hosted_cluster_name> with the name of your hosted cluster.
Enter the following command to create the third virtual machine:
$ kcli create vm \
  -P start=False \ 1
  -P uefi_legacy=true \ 2
  -P plan=<hosted_cluster_name> \ 3
  -P memory=8192 -P numcpus=16 \ 4
  -P disks=[200,200] \ 5
  -P nets=["{\"name\": \"<network>\", \"mac\": \"aa:aa:aa:aa:11:03\"}"] \ 6
  -P uuid=aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa1103 \
  -P name=<hosted_cluster_name>-worker2 7
- 1
- Include start=False if you do not want the virtual machine (VM) to automatically start upon creation.
- 2
- Include uefi_legacy=true to indicate that you will use UEFI legacy boot to ensure compatibility with previous UEFI implementations.
- 3
- Replace <hosted_cluster_name> with the name of your hosted cluster. The plan=<hosted_cluster_name> statement indicates the plan name, which identifies a group of machines as a cluster.
- 4
- Include the memory=8192 and numcpus=16 parameters to specify the resources for the VM, including the RAM and CPU.
- 5
- Include disks=[200,200] to indicate that you are creating two thin-provisioned disks in the VM.
- 6
- Include nets=[{"name": "<network>", "mac": "aa:aa:aa:aa:11:03"}] to provide network details, including the network name to connect to, the type of network (ipv4, ipv6, or dual), and the MAC address of the primary interface.
- 7
- Replace <hosted_cluster_name> with the name of your hosted cluster.
Restart the ksushy tool to ensure that it detects the VMs that you added:

$ systemctl restart ksushy
Example output
+---------------------+--------+-------------------+----------------------------------------------------+-------------+---------+
|         Name        | Status |         Ip        |                       Source                       |     Plan    | Profile |
+---------------------+--------+-------------------+----------------------------------------------------+-------------+---------+
|    hosted-worker0   |  down  |                   |                                                    | hosted-dual |  kvirt  |
|    hosted-worker1   |  down  |                   |                                                    | hosted-dual |  kvirt  |
|    hosted-worker2   |  down  |                   |                                                    | hosted-dual |  kvirt  |
+---------------------+--------+-------------------+----------------------------------------------------+-------------+---------+
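The previous table is the kind of output that kcli prints when you list VMs; you can reproduce it at any time by entering the following command:

$ kcli list vm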
6.3.13.5. Creating bare metal hosts for the hosted cluster
A bare metal host is an openshift-machine-api
object that encompasses physical and logical details so that it can be identified by a Metal3 Operator. Those details are associated with other Assisted Service objects, known as agents.
Prerequisites
Before you create the bare metal host and destination nodes, you must have the destination machines ready.
Procedure
To create a bare metal host, complete the following steps:
Create a YAML file with the following information:
Because you have at least one secret that holds the bare metal host credentials, you need to create at least two objects for each worker node.
apiVersion: v1
kind: Secret
metadata:
  name: <hosted_cluster_name>-worker0-bmc-secret 1
  namespace: <hosted_cluster_namespace>-<hosted_cluster_name> 2
data:
  password: YWRtaW4= 3
  username: YWRtaW4= 4
type: Opaque
---
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: <hosted_cluster_name>-worker0
  namespace: <hosted_cluster_namespace>-<hosted_cluster_name> 5
  labels:
    infraenvs.agent-install.openshift.io: <hosted_cluster_name> 6
  annotations:
    inspect.metal3.io: disabled
    bmac.agent-install.openshift.io/hostname: <hosted_cluster_name>-worker0 7
spec:
  automatedCleaningMode: disabled 8
  bmc:
    disableCertificateVerification: true 9
    address: redfish-virtualmedia://[192.168.126.1]:9000/redfish/v1/Systems/local/<hosted_cluster_name>-worker0 10
    credentialsName: <hosted_cluster_name>-worker0-bmc-secret 11
  bootMACAddress: aa:aa:aa:aa:02:11 12
  online: true 13
- 1
- Replace
<hosted_cluster_name>
with your hosted cluster. - 2 5
- Replace
<hosted_cluster_name>
with your hosted cluster. Replace<hosted_cluster_namespace>
with the name of your hosted cluster namespace. - 3
- Specify the password of the baseboard management controller (BMC) in Base64 format.
- 4
- Specify the user name of the BMC in Base64 format.
- 6
- Replace
<hosted_cluster_name>
with your hosted cluster. Theinfraenvs.agent-install.openshift.io
field serves as the link between the Assisted Installer and theBareMetalHost
objects. - 7
- Replace
<hosted_cluster_name>
with your hosted cluster. Thebmac.agent-install.openshift.io/hostname
field represents the node name that is adopted during deployment. - 8
- The
automatedCleaningMode
field prevents the node from being erased by the Metal3 Operator. - 9
- The
disableCertificateVerification
field is set totrue
to bypass certificate validation from the client. - 10
- Replace
<hosted_cluster_name>
with your hosted cluster. Theaddress
field denotes the BMC address of the worker node. - 11
- Replace
<hosted_cluster_name>
with your hosted cluster. ThecredentialsName
field points to the secret where the user and password credentials are stored. - 12
- The
bootMACAddress
field indicates the interface MAC address that the node starts from. - 13
- The
online
field defines the state of the node after theBareMetalHost
object is created.
Deploy the
BareMetalHost
object by entering the following command:

$ oc apply -f 04-bmh.yaml
During the process, you can view the following output:
This output indicates that the process is trying to reach the nodes:
Example output
NAMESPACE         NAME             STATE         CONSUMER   ONLINE   ERROR   AGE
clusters-hosted   hosted-worker0   registering              true             2s
clusters-hosted   hosted-worker1   registering              true             2s
clusters-hosted   hosted-worker2   registering              true             2s
This output indicates that the nodes are starting:
Example output
NAMESPACE         NAME             STATE          CONSUMER   ONLINE   ERROR   AGE
clusters-hosted   hosted-worker0   provisioning              true             16s
clusters-hosted   hosted-worker1   provisioning              true             16s
clusters-hosted   hosted-worker2   provisioning              true             16s
This output indicates that the nodes started successfully:
Example output
NAMESPACE         NAME             STATE         CONSUMER   ONLINE   ERROR   AGE
clusters-hosted   hosted-worker0   provisioned              true             67s
clusters-hosted   hosted-worker1   provisioned              true             67s
clusters-hosted   hosted-worker2   provisioned              true             67s
After the nodes start, notice the agents in the namespace, as shown in this example:
Example output
NAMESPACE         NAME                                   CLUSTER   APPROVED   ROLE          STAGE
clusters-hosted   aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0411             true       auto-assign
clusters-hosted   aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0412             true       auto-assign
clusters-hosted   aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0413             true       auto-assign
The agents represent nodes that are available for installation. To assign the nodes to a hosted cluster, scale up the node pool.
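The agent listing shown in these examples comes from querying the Agent resources in the hosted control plane namespace, for example:

$ oc get agents -n <hosted_cluster_namespace>-<hosted_cluster_name>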
6.3.13.6. Scaling up the node pool
After you create the bare metal hosts, their statuses change from Registering
to Provisioning
to Provisioned
. The nodes start with the LiveISO
of the agent and a default pod that is named agent
. That agent is responsible for receiving instructions from the Assisted Service Operator to install the OpenShift Container Platform payload.
Procedure
To scale up the node pool, enter the following command:
$ oc -n <hosted_cluster_namespace> scale nodepool <hosted_cluster_name> --replicas 3
where:
- <hosted_cluster_namespace> is the name of the hosted cluster namespace.
- <hosted_cluster_name> is the name of the hosted cluster.
After the scaling process is complete, notice that the agents are assigned to a hosted cluster:
Example output
NAMESPACE         NAME                                   CLUSTER   APPROVED   ROLE          STAGE
clusters-hosted   aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0411   hosted    true       auto-assign
clusters-hosted   aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0412   hosted    true       auto-assign
clusters-hosted   aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0413   hosted    true       auto-assign
Also notice that the node pool replicas are set:
Example output
NAMESPACE   NAME     CLUSTER   DESIRED NODES   CURRENT NODES   AUTOSCALING   AUTOREPAIR   VERSION        UPDATINGVERSION   UPDATINGCONFIG   MESSAGE
clusters    hosted   hosted    3                               False         False        4.x.y-x86_64                                      Minimum availability requires 3 replicas, current 0 available
Replace 4.x.y with the supported OpenShift Container Platform version that you want to use.
- Wait until the nodes join the cluster. During the process, the agents provide updates on their stage and status.
6.4. Deploying hosted control planes on IBM Z in a disconnected environment
Hosted control plane deployments in disconnected environments function differently than in standalone OpenShift Container Platform.
Hosted control planes involve two distinct environments:
- Control plane: Located in the management cluster, where the hosted control planes pods are run and managed by the Control Plane Operator.
- Data plane: Located in the workers of the hosted cluster, where the workload and a few other pods run, managed by the Hosted Cluster Config Operator.
The ImageContentSourcePolicy
(ICSP) custom resource for the data plane is managed through the ImageContentSources
API in the hosted cluster manifest.
For the control plane, ICSP objects are managed in the management cluster. These objects are parsed by the HyperShift Operator and are shared as registry-overrides
entries with the Control Plane Operator. These entries are injected into any one of the available deployments in the hosted control planes namespace as an argument.
To work with disconnected registries in the hosted control planes, you must first create the appropriate ICSP in the management cluster. Then, to deploy disconnected workloads in the data plane, you need to add the entries that you want into the ImageContentSources
field in the hosted cluster manifest.
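A minimal ICSP for the management cluster might resemble the following sketch; the mirror locations are assumptions that you must replace with your registry details:

apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: mirror-ocp-release
spec:
  repositoryDigestMirrors:
  - mirrors:
    - <mirror_registry>/openshift/release
    source: quay.io/openshift-release-dev/ocp-v4.0-art-dev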
6.4.1. Prerequisites to deploy hosted control planes on IBM Z in a disconnected environment
- A mirror registry. For more information, see "Creating a mirror registry with mirror registry for Red Hat OpenShift".
- A mirrored image for a disconnected installation. For more information, see "Mirroring images for a disconnected installation using the oc-mirror plugin".
6.4.2. Adding credentials and the registry certificate authority to the management cluster
To pull images from the mirror registry on the management cluster, you must first add the credentials and the certificate authority of the mirror registry to the management cluster. Use the following procedure:
Procedure
Create a ConfigMap with the certificate of the mirror registry by running the following command:

$ oc apply -f registry-config.yaml
Example registry-config.yaml file
apiVersion: v1
kind: ConfigMap
metadata:
  name: registry-config
  namespace: openshift-config
data:
  <mirror_registry>: |
    -----BEGIN CERTIFICATE-----
    -----END CERTIFICATE-----
Patch the image.config.openshift.io cluster-wide object to include the following entries:

spec:
  additionalTrustedCA:
    name: registry-config
Update the management cluster pull secret to add the credentials of the mirror registry.
Fetch the pull secret from the cluster in a JSON format by running the following command:
$ oc get secret/pull-secret -n openshift-config -o json | jq -r '.data.".dockerconfigjson"' | base64 -d > authfile
Edit the fetched secret JSON file to include a section with the credentials of the mirror registry:

"auths": {
  "<mirror_registry>": { 1
    "auth": "<credentials>", 2
    "email": "you@example.com"
  }
},

- 1
- Replace <mirror_registry> with the name of the mirror registry.
- 2
- Replace <credentials> with the base64-encoded credentials of the mirror registry.
Update the pull secret on the cluster by running the following command:
$ oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=authfile
6.4.3. Updating the registry certificate authority in the AgentServiceConfig resource with the mirror registry
When you use a mirror registry for images, agents need to trust the registry’s certificate to securely pull images. You can add the certificate authority of the mirror registry to the AgentServiceConfig
custom resource by creating a ConfigMap
.
Prerequisites
- You must have installed multicluster engine for Kubernetes Operator.
Procedure
In the same namespace where you installed multicluster engine Operator, create a
ConfigMap
resource with the mirror registry details. This ConfigMap resource ensures that the hosted cluster workers can retrieve images from the mirror registry.

Example ConfigMap file
apiVersion: v1
kind: ConfigMap
metadata:
  name: mirror-config
  namespace: multicluster-engine
  labels:
    app: assisted-service
data:
  ca-bundle.crt: |
    -----BEGIN CERTIFICATE-----
    -----END CERTIFICATE-----
  registries.conf: |
    [[registry]]
    location = "registry.stage.redhat.io"
    insecure = false
    blocked = false
    mirror-by-digest-only = true
    prefix = ""

    [[registry.mirror]]
    location = "<mirror_registry>"
    insecure = false

    [[registry]]
    location = "registry.redhat.io/multicluster-engine"
    insecure = false
    blocked = false
    mirror-by-digest-only = true
    prefix = ""

    [[registry.mirror]]
    location = "<mirror_registry>/multicluster-engine" 1
    insecure = false
- 1
- Replace <mirror_registry> with the name of the mirror registry.
Patch the AgentServiceConfig resource to include the ConfigMap resource that you created. If the AgentServiceConfig resource is not present, create the AgentServiceConfig resource with the following content embedded into it:

spec:
  mirrorRegistryRef:
    name: mirror-config
6.4.4. Adding the registry certificate authority to the hosted cluster
When you are deploying hosted control planes on IBM Z in a disconnected environment, include the additional-trust-bundle
and image-content-sources
resources. Those resources allow the hosted cluster to inject the certificate authority into the data plane workers so that the images are pulled from the registry.
Create the icsp.yaml file with the image-content-sources information.

The image-content-sources information is available in the ImageContentSourcePolicy YAML file that is generated after you mirror the images by using oc-mirror.

Example ImageContentSourcePolicy file
# cat icsp.yaml
- mirrors:
  - <mirror_registry>/openshift/release
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
- mirrors:
  - <mirror_registry>/openshift/release-images
  source: quay.io/openshift-release-dev/ocp-release
Create a hosted cluster and provide the
additional-trust-bundle
certificate to update the compute nodes with the certificates as in the following example:

$ hcp create cluster agent \
  --name=<hosted_cluster_name> \ 1
  --pull-secret=<path_to_pull_secret> \ 2
  --agent-namespace=<hosted_control_plane_namespace> \ 3
  --base-domain=<basedomain> \ 4
  --api-server-address=api.<hosted_cluster_name>.<basedomain> \
  --etcd-storage-class=<etcd_storage_class> \ 5
  --ssh-key <path_to_ssh_public_key> \ 6
  --namespace <hosted_cluster_namespace> \ 7
  --control-plane-availability-policy SingleReplica \
  --release-image=quay.io/openshift-release-dev/ocp-release:<ocp_release_image> \ 8
  --additional-trust-bundle <path for cert> \ 9
  --image-content-sources icsp.yaml
- 1
- Replace
<hosted_cluster_name>
with the name of your hosted cluster. - 2
- Replace the path to your pull secret, for example,
/user/name/pullsecret
. - 3
- Replace
<hosted_control_plane_namespace>
with the name of the hosted control plane namespace, for example,clusters-hosted
. - 4
- Replace the name with your base domain, for example,
example.com
. - 5
- Replace the etcd storage class name, for example,
lvm-storageclass
. - 6
- Replace the path to your SSH public key. The default file path is
~/.ssh/id_rsa.pub
.
- 7
- Replace <hosted_cluster_namespace> with the name of your hosted cluster namespace.
- 8
- Replace with the supported OpenShift Container Platform version that you want to use, for example, 4.17.0-multi.
- 9
- Replace with the path to the certificate authority of the mirror registry.
6.5. Monitoring user workload in a disconnected environment
The hypershift-addon
managed cluster add-on enables the --enable-uwm-telemetry-remote-write
option in the HyperShift Operator. By enabling that option, you ensure that user workload monitoring is enabled and that it can remotely write telemetry metrics from control planes.
6.5.1. Resolving user workload monitoring issues
If you installed multicluster engine Operator on OpenShift Container Platform clusters that are not connected to the internet, the user workload monitoring feature of the HyperShift Operator fails with an error, which you can see by entering the following command:
$ oc get events -n hypershift
Example error
LAST SEEN   TYPE      REASON           OBJECT                MESSAGE
4m46s       Warning   ReconcileError   deployment/operator   Failed to ensure UWM telemetry remote write: cannot get telemeter client secret: Secret "telemeter-client" not found
To resolve the error, you must disable the user workload monitoring option by creating a config map in the local-cluster
namespace. You can create the config map either before or after you enable the add-on. The add-on agent reconfigures the HyperShift Operator.
Procedure
Create the following config map:
kind: ConfigMap
apiVersion: v1
metadata:
  name: hypershift-operator-install-flags
  namespace: local-cluster
data:
  installFlagsToAdd: ""
  installFlagsToRemove: "--enable-uwm-telemetry-remote-write"
Apply the config map by running the following command:
$ oc apply -f <filename>.yaml
6.5.2. Verifying the status of the hosted control plane feature
The hosted control plane feature is enabled by default.
Procedure
If the feature is disabled and you want to enable it, enter the following command. Replace
<multiclusterengine>
with the name of your multicluster engine Operator instance:

$ oc patch mce <multiclusterengine> --type=merge -p '{"spec":{"overrides":{"components":[{"name":"hypershift","enabled": true}]}}}'
When you enable the feature, the hypershift-addon managed cluster add-on is installed in the local-cluster managed cluster, and the add-on agent installs the HyperShift Operator on the multicluster engine Operator hub cluster.

Confirm that the hypershift-addon managed cluster add-on is installed by entering the following command:

$ oc get managedclusteraddons -n local-cluster hypershift-addon
Example output
NAME               AVAILABLE   DEGRADED   PROGRESSING
hypershift-addon   True        False
To avoid a timeout during this process, enter the following commands:
$ oc wait --for=condition=Degraded=True managedclusteraddons/hypershift-addon -n local-cluster --timeout=5m
$ oc wait --for=condition=Available=True managedclusteraddons/hypershift-addon -n local-cluster --timeout=5m
When the process is complete, the
hypershift-addon
managed cluster add-on and the HyperShift Operator are installed, and thelocal-cluster
managed cluster is available to host and manage hosted clusters.
6.5.3. Configuring the hypershift-addon managed cluster add-on to run on an infrastructure node
By default, no node placement preference is specified for the hypershift-addon
managed cluster add-on. Consider running the add-on on infrastructure nodes, because doing so prevents incurring billing costs against subscription counts and separates maintenance and management tasks.
Procedure
- Log in to the hub cluster.
Open the
hypershift-addon-deploy-config
add-on deployment configuration specification for editing by entering the following command:

$ oc edit addondeploymentconfig hypershift-addon-deploy-config -n multicluster-engine
Add the
nodePlacement
field to the specification, as shown in the following example:

apiVersion: addon.open-cluster-management.io/v1alpha1
kind: AddOnDeploymentConfig
metadata:
  name: hypershift-addon-deploy-config
  namespace: multicluster-engine
spec:
  nodePlacement:
    nodeSelector:
      node-role.kubernetes.io/infra: ""
    tolerations:
    - effect: NoSchedule
      key: node-role.kubernetes.io/infra
      operator: Exists
-
Save the changes. The
hypershift-addon
managed cluster add-on is deployed on an infrastructure node for new and existing managed clusters.
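To verify the placement after the add-on redeploys, you can check which node the agent pod is scheduled on. The namespace in this sketch is the typical add-on agent namespace and might differ in your environment:

$ oc get pods -n open-cluster-management-agent-addon -o wide | grep hypershift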