Chapter 4. Deploying hosted control planes
4.1. Deploying hosted control planes on AWS
A hosted cluster is an OpenShift Container Platform cluster with its API endpoint and control plane hosted on the management cluster. The hosted cluster includes the control plane and its corresponding data plane. To configure hosted control planes on premises, you must install multicluster engine for Kubernetes Operator in a management cluster. By deploying the HyperShift Operator on an existing managed cluster by using the hypershift-addon managed cluster add-on, you can enable that cluster as a management cluster and start creating hosted clusters. The hypershift-addon managed cluster add-on is enabled by default for the local-cluster managed cluster.
You can use the multicluster engine Operator console or the hosted control plane command-line interface (CLI), hcp, to create a hosted cluster. The hosted cluster is automatically imported as a managed cluster. However, you can disable the automatic import of hosted clusters into multicluster engine Operator.
4.1.1. Preparing to deploy hosted control planes on AWS
As you prepare to deploy hosted control planes on Amazon Web Services (AWS), consider the following information:
- Each hosted cluster must have a cluster-wide unique name. A hosted cluster name cannot be the same as any existing managed cluster; otherwise, multicluster engine Operator cannot manage it.
- Do not use clusters as a hosted cluster name.
- Run the management cluster and workers on the same platform for hosted control planes.
- A hosted cluster cannot be created in the namespace of a multicluster engine Operator managed cluster.
4.1.1.1. Prerequisites to configure a management cluster
You must have the following prerequisites to configure the management cluster:
- You have installed multicluster engine for Kubernetes Operator 2.5 or later on an OpenShift Container Platform cluster. The multicluster engine Operator is automatically installed when you install Red Hat Advanced Cluster Management (RHACM). The multicluster engine Operator can also be installed without RHACM as an Operator from the OpenShift Container Platform OperatorHub.
- You have at least one managed OpenShift Container Platform cluster for the multicluster engine Operator. The local-cluster is automatically imported in multicluster engine Operator version 2.5 and later. You can check the status of your hub cluster by running the following command:

  $ oc get managedclusters local-cluster

- You have installed the aws command-line interface (CLI).
- You have installed the hosted control plane CLI, hcp.
4.1.2. Creating the Amazon Web Services S3 bucket and S3 OIDC secret
Before you can create and manage hosted clusters on Amazon Web Services (AWS), you must create the S3 bucket and S3 OIDC secret.
Procedure
Create an S3 bucket that has public access to host OIDC discovery documents for your clusters by running the following commands:
$ aws s3api create-bucket --bucket <bucket_name> \
    --create-bucket-configuration LocationConstraint=<region> \
    --region <region>

$ aws s3api delete-public-access-block --bucket <bucket_name>

Replace <bucket_name> with the name of the S3 bucket you are creating, and replace <region> with the AWS region of the bucket.
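The put-bucket-policy command in the next step references a policy.json file. As a hedged sketch only, a bucket policy that grants public read access to the OIDC discovery documents typically looks like the following; confirm the exact policy that your hosted control planes version requires:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::<bucket_name>/*"
    }
  ]
}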
Apply the policy to the bucket by running the following command:

$ aws s3api put-bucket-policy --bucket <bucket_name> \
    --policy file://policy.json

Replace <bucket_name> with the name of the S3 bucket you are creating.

Note: If you are using a Mac computer, you must export the bucket name in order for the policy to work.
- Create an OIDC S3 secret named hypershift-operator-oidc-provider-s3-credentials for the HyperShift Operator.
- Save the secret in the local-cluster namespace. See the following table to verify that the secret contains the required fields:
Table 4.1. Required fields for the AWS secret

Field name | Description
---|---
bucket | Contains an S3 bucket with public access to host OIDC discovery documents for your hosted clusters.
credentials | A reference to a file that contains the credentials of the default profile that can access the bucket. By default, HyperShift only uses the default profile to operate the bucket.
region | Specifies the region of the S3 bucket.
To create an AWS secret, run the following command:
$ oc create secret generic <secret_name> \
    --from-file=credentials=<path>/.aws/credentials \
    --from-literal=bucket=<s3_bucket> \
    --from-literal=region=<region> \
    -n local-cluster

Note: Disaster recovery backup for the secret is not automatically enabled. To add the label that enables the hypershift-operator-oidc-provider-s3-credentials secret to be backed up for disaster recovery, run the following command:

$ oc label secret hypershift-operator-oidc-provider-s3-credentials \
    -n local-cluster cluster.open-cluster-management.io/backup=true
4.1.3. Creating a routable public zone for hosted clusters
To access applications in your hosted clusters, you must configure the routable public zone. If the public zone already exists, skip this step. Otherwise, the new public zone affects the existing DNS functions.
Procedure
To create a routable public zone for DNS records, enter the following command:
$ aws route53 create-hosted-zone \
    --name <basedomain> \
    --caller-reference $(whoami)-$(date --rfc-3339=date)

Replace <basedomain> with your base domain, for example, www.example.com.
4.1.4. Creating an AWS IAM role and STS credentials
Before creating a hosted cluster on Amazon Web Services (AWS), you must create an AWS IAM role and STS credentials.
Procedure
Get the Amazon Resource Name (ARN) of your user by running the following command:
$ aws sts get-caller-identity --query "Arn" --output text

Example output

arn:aws:iam::1234567890:user/<aws_username>

Use this output as the value for <arn> in the next step.

Create a JSON file that contains the trust relationship configuration for your role. See the following example:
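A minimal sketch of such a trust relationship file, assuming that the role is assumed directly by your user ARN through sts:AssumeRole:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "<arn>"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}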
Replace <arn> with the ARN of your user that you noted in the previous step.
Create the Identity and Access Management (IAM) role by running the following command:
$ aws iam create-role \
    --role-name <name> \
    --assume-role-policy-document file://<file_name>.json \
    --query "Role.Arn"

Example output

arn:aws:iam::820196288204:role/myrole
Create a JSON file named policy.json that contains the permission policies for your role:
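As a structural sketch only: the action list shown here is illustrative and not exhaustive, and the exact permissions that your hosted control planes version requires are defined in the AWS IAM requirements for hosted control planes:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeVpcEndpoints",
        "ec2:CreateVpcEndpoint",
        "elasticloadbalancing:DescribeLoadBalancers",
        "route53:ListHostedZones"
      ],
      "Resource": "*"
    }
  ]
}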
Attach the policy.json file to your role by running the following command:

$ aws iam put-role-policy \
    --role-name <role_name> \
    --policy-name <policy_name> \
    --policy-document file://policy.json
Retrieve STS credentials in a JSON file named sts-creds.json by running the following command:

$ aws sts get-session-token --output json > sts-creds.json

Example sts-creds.json file
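A sketch of the file shape that aws sts get-session-token produces, with placeholder values:

{
    "Credentials": {
        "AccessKeyId": "<access_key_id>",
        "SecretAccessKey": "<secret_access_key>",
        "SessionToken": "<session_token>",
        "Expiration": "2025-01-01T00:00:00+00:00"
    }
}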
4.1.5. Enabling AWS PrivateLink for hosted control planes
To provision hosted control planes on Amazon Web Services (AWS) with PrivateLink, enable AWS PrivateLink for hosted control planes.
Procedure
- Create an AWS credential secret for the HyperShift Operator and name it hypershift-operator-private-link-credentials. The secret must reside in the managed cluster namespace, which is the namespace of the managed cluster that is being used as the management cluster. If you used local-cluster, create the secret in the local-cluster namespace.
- See the following table to confirm that the secret contains the required fields:
Table 4.2. Required fields for the AWS secret

Field name | Description | Optional or required
---|---|---
region | Region for use with PrivateLink | Required
aws-access-key-id | The credential access key ID. | Required
aws-secret-access-key | The credential access key secret. | Required
To create an AWS secret, run the following command:
$ oc create secret generic <secret_name> \
--from-literal=aws-access-key-id=<aws_access_key_id> \
--from-literal=aws-secret-access-key=<aws_secret_access_key> \
--from-literal=region=<region> -n local-cluster
Disaster recovery backup for the secret is not automatically enabled. Run the following command to add the label that enables the hypershift-operator-private-link-credentials
secret to be backed up for disaster recovery:
$ oc label secret hypershift-operator-private-link-credentials \
-n local-cluster \
cluster.open-cluster-management.io/backup=""
4.1.6. Enabling external DNS for hosted control planes on AWS
The control plane and the data plane are separate in hosted control planes. You can configure DNS in two independent areas:
- Ingress for workloads within the hosted cluster, such as the following domain: *.apps.service-consumer-domain.com.
- Ingress for service endpoints within the management cluster, such as API or OAuth endpoints through the service provider domain: *.service-provider-domain.com.
The input for hostedCluster.spec.dns
manages the ingress for workloads within the hosted cluster. The input for hostedCluster.spec.services.servicePublishingStrategy.route.hostname
manages the ingress for service endpoints within the management cluster.
External DNS creates name records for hosted cluster Services
that specify a publishing type of LoadBalancer
or Route
and provide a hostname for that publishing type. For hosted clusters with Private
or PublicAndPrivate
endpoint access types, only the APIServer
and OAuth
services support hostnames. For Private
hosted clusters, the DNS record resolves to a private IP address of a Virtual Private Cloud (VPC) endpoint in your VPC.
A hosted control plane exposes the following services:
- APIServer
- OIDC
You can expose these services by using the servicePublishingStrategy
field in the HostedCluster
specification. By default, for the LoadBalancer
and Route
types of servicePublishingStrategy
, you can publish the service in one of the following ways:
- By using the hostname of the load balancer that is in the status of the Service with the LoadBalancer type.
- By using the status.host field of the Route resource.
However, when you deploy hosted control planes in a managed service context, those methods can expose the ingress subdomain of the underlying management cluster and limit options for the management cluster lifecycle and disaster recovery.
When a DNS indirection is layered on the LoadBalancer and Route publishing types, a managed service operator can publish all public hosted cluster services by using a service-level domain. This architecture allows remapping of the DNS name to a new LoadBalancer or Route and does not expose the ingress domain of the management cluster. Hosted control planes use external DNS to achieve that indirection layer.
You can deploy external-dns
alongside the HyperShift Operator in the hypershift
namespace of the management cluster. External DNS watches for Services
or Routes
that have the external-dns.alpha.kubernetes.io/hostname
annotation. That annotation is used to create a DNS record that points to the Service
, such as an A record, or the Route
, such as a CNAME record.
You can use external DNS in cloud environments only. For other environments, you must manually configure DNS and services.
For more information about external DNS, see external DNS.
4.1.6.1. Prerequisites
Before you can set up external DNS for hosted control planes on Amazon Web Services (AWS), you must meet the following prerequisites:
- You created an external public domain.
- You have access to the AWS Route53 Management console.
- You enabled AWS PrivateLink for hosted control planes.
4.1.6.2. Setting up external DNS for hosted control planes
You can provision hosted control planes with external DNS or service-level DNS.
- Create an Amazon Web Services (AWS) credential secret for the HyperShift Operator and name it hypershift-operator-external-dns-credentials in the local-cluster namespace. See the following table to verify that the secret has the required fields:
Table 4.3. Required fields for the AWS secret

Field name | Description | Optional or required
---|---|---
provider | The DNS provider that manages the service-level DNS zone. | Required
domain-filter | The service-level domain. | Required
credentials | The credential file that supports all external DNS types. | Optional when you use AWS keys
aws-access-key-id | The credential access key ID. | Optional when you use the AWS DNS service
aws-secret-access-key | The credential access key secret. | Optional when you use the AWS DNS service
To create an AWS secret, run the following command:
$ oc create secret generic <secret_name> \
    --from-literal=provider=aws \
    --from-literal=domain-filter=<domain_name> \
    --from-file=credentials=<path_to_aws_credentials_file> \
    -n local-cluster

Note: Disaster recovery backup for the secret is not automatically enabled. To back up the secret for disaster recovery, add the label to the hypershift-operator-external-dns-credentials secret by entering the following command:

$ oc label secret hypershift-operator-external-dns-credentials \
    -n local-cluster \
    cluster.open-cluster-management.io/backup=""
4.1.6.3. Creating the public DNS hosted zone
The External DNS Operator uses the public DNS hosted zone to create your public hosted cluster.
You can create the public DNS hosted zone to use as the external DNS domain-filter. Complete the following steps in the AWS Route 53 management console.
Procedure
- In the Route 53 management console, click Create hosted zone.
- On the Hosted zone configuration page, type a domain name, verify that Public hosted zone is selected as the type, and click Create hosted zone.
- After the zone is created, on the Records tab, note the values in the Value/Route traffic to column.
- In the main domain, create an NS record to redirect the DNS requests to the delegated zone. In the Value field, enter the values that you noted in the previous step.
- Click Create records.
Verify that the DNS hosted zone is working by creating a test entry in the new subzone and testing it with a dig command, such as in the following example:

$ dig +short test.user-dest-public.aws.kerberos.com

Example output

192.168.1.1
To create a hosted cluster that sets the hostname for the LoadBalancer and Route services, enter the following command:

$ hcp create cluster aws --name=<hosted_cluster_name> \
    --endpoint-access=PublicAndPrivate \
    --external-dns-domain=<public_hosted_zone> ...

Replace <public_hosted_zone> with the public hosted zone that you created.
Example services block for the hosted cluster
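A sketch of what the services block can look like, assuming Route publishing with explicit hostnames for the APIServer and OAuthServer services; the exact list of services and hostnames depends on your configuration:

spec:
  services:
  - service: APIServer
    servicePublishingStrategy:
      type: Route
      route:
        hostname: api-example.service-provider-domain.com
  - service: OAuthServer
    servicePublishingStrategy:
      type: Route
      route:
        hostname: oauth-example.service-provider-domain.com
  - service: Konnectivity
    servicePublishingStrategy:
      type: Route
  - service: Ignition
    servicePublishingStrategy:
      type: Route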
The Control Plane Operator creates the Services
and Routes
resources and annotates them with the external-dns.alpha.kubernetes.io/hostname
annotation. For Services
and Routes
, the Control Plane Operator uses a value of the hostname
parameter in the servicePublishingStrategy
field for the service endpoints. To create the DNS records, you can use a mechanism, such as the external-dns
deployment.
You can configure service-level DNS indirection for public services only. You cannot set hostname
for private services because they use the hypershift.local
private zone.
The following table shows when it is valid to set hostname
for a service and endpoint combinations:
Service | Public | PublicAndPrivate | Private
---|---|---|---
APIServer | Y | Y | N
OAuth | Y | Y | N
Konnectivity | Y | N | N
Ignition | Y | N | N
4.1.6.4. Creating a hosted cluster by using the external DNS on AWS
To create a hosted cluster by using the PublicAndPrivate
or Public
publishing strategy on Amazon Web Services (AWS), you must have the following artifacts configured in your management cluster:
- The public DNS hosted zone
- The External DNS Operator
- The HyperShift Operator
You can deploy a hosted cluster by using the hcp command-line interface (CLI).
Procedure
To access your management cluster, enter the following command:
$ export KUBECONFIG=<path_to_management_cluster_kubeconfig>
Verify that the External DNS Operator is running by entering the following command:
$ oc get pod -n hypershift -lapp=external-dns
Example output

NAME                            READY   STATUS    RESTARTS   AGE
external-dns-7c89788c69-rn8gp   1/1     Running   0          40s

To create a hosted cluster by using external DNS, enter the following command:
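A sketch of the command, assuming the standard hcp create cluster aws flags; the numbered notes that follow map to the values in the order shown:

$ hcp create cluster aws \
    --role-arn <arn_role> \
    --instance-type <instance_type> \
    --region <region> \
    --name <cluster_name> \
    --base-domain <service_consumer_domain> \
    --node-pool-replicas <node_replica_count> \
    --pull-secret <path_to_pull_secret> \
    --release-image quay.io/openshift-release-dev/ocp-release:<ocp_release_image> \
    --external-dns-domain <service_provider_domain> \
    --endpoint-access PublicAndPrivate \
    --sts-creds <path_to_sts_credential_file>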
1. Specify the Amazon Resource Name (ARN), for example, arn:aws:iam::820196288204:role/myrole.
2. Specify the instance type, for example, m6i.xlarge.
3. Specify the AWS region, for example, us-east-1.
4. Specify your hosted cluster name, for example, my-external-aws.
5. Specify the public hosted zone that the service consumer owns, for example, service-consumer-domain.com.
6. Specify the node replica count, for example, 2.
7. Specify the path to your pull secret file.
8. Specify the supported OpenShift Container Platform version that you want to use, for example, 4.17.0-multi.
9. Specify the public hosted zone that the service provider owns, for example, service-provider-domain.com.
10. Set as PublicAndPrivate. You can use external DNS with Public or PublicAndPrivate configurations only.
11. Specify the path to your AWS STS credentials file, for example, /home/user/sts-creds/sts-creds.json.
4.1.7. Creating a hosted cluster on AWS
You can create a hosted cluster on Amazon Web Services (AWS) by using the hcp
command-line interface (CLI).
By default for hosted control planes on Amazon Web Services (AWS), you use an AMD64 hosted cluster. However, you can enable hosted control planes to run on an ARM64 hosted cluster. For more information, see "Running hosted clusters on an ARM64 architecture".
For compatible combinations of node pools and hosted clusters, see the following table:
Hosted cluster | Node pools |
---|---|
AMD64 | AMD64 or ARM64 |
ARM64 | ARM64 or AMD64 |
Prerequisites
- You have set up the hosted control plane CLI, hcp.
- You have enabled the local-cluster managed cluster as the management cluster.
- You created an AWS Identity and Access Management (IAM) role and AWS Security Token Service (STS) credentials.
Procedure
To create a hosted cluster on AWS, run the following command:
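A sketch of the command, assuming the standard hcp create cluster aws flags; the numbered notes below map to the values in the order shown:

$ hcp create cluster aws \
    --name <hosted_cluster_name> \
    --infra-id <infra_id> \
    --base-domain <basedomain> \
    --sts-creds <path_to_sts_credential_file> \
    --pull-secret <path_to_pull_secret> \
    --region <region> \
    --node-pool-replicas <node_pool_replica_count> \
    --namespace <namespace> \
    --role-arn <role_arn> \
    --render-into <file_name>.yaml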
1. Specify the name of your hosted cluster, for instance, example.
2. Specify your infrastructure name. You must provide the same value for <hosted_cluster_name> and <infra_id>. Otherwise, the cluster might not appear correctly in the multicluster engine for Kubernetes Operator console.
3. Specify your base domain, for example, example.com.
4. Specify the path to your AWS STS credentials file, for example, /home/user/sts-creds/sts-creds.json.
5. Specify the path to your pull secret, for example, /user/name/pullsecret.
6. Specify the AWS region name, for example, us-east-1.
7. Specify the node pool replica count, for example, 3.
8. By default, all HostedCluster and NodePool custom resources are created in the clusters namespace. You can use the --namespace <namespace> parameter to create the HostedCluster and NodePool custom resources in a specific namespace.
9. Specify the Amazon Resource Name (ARN), for example, arn:aws:iam::820196288204:role/myrole.
10. If you want to indicate whether the EC2 instance runs on shared or single-tenant hardware, include this field. The --render-into flag renders Kubernetes resources into the YAML file that you specify in this field. Then, continue to the next step to edit the YAML file.
If you included the --render-into flag in the previous command, edit the specified YAML file. Edit the NodePool specification in the YAML file to indicate whether the EC2 instance should run on shared or single-tenant hardware, similar to the following example:

Example YAML file
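A sketch of such a file, assuming that the tenancy setting is expressed under spec.platform.aws.placement in the NodePool specification; verify the field path against your NodePool API version:

apiVersion: hypershift.openshift.io/v1beta1
kind: NodePool
metadata:
  name: <nodepool_name> # 1
spec:
  platform:
    aws:
      placement:
        tenancy: "default" # 2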
1. Specify the name of the NodePool resource.
2. Specify a valid value for tenancy: "default", "dedicated", or "host". Use "default" when node pool instances run on shared hardware. Use "dedicated" when each node pool instance runs on single-tenant hardware. Use "host" when node pool instances run on your pre-allocated dedicated hosts.
Verification
Verify the status of your hosted cluster to check that the value of AVAILABLE is True. Run the following command:

$ oc get hostedclusters -n <hosted_cluster_namespace>

Get a list of your node pools by running the following command:

$ oc get nodepools --namespace <hosted_cluster_namespace>
4.1.7.1. Accessing a hosted cluster on AWS
You can access the hosted cluster by getting the kubeconfig
file and the kubeadmin
credentials directly from resources.
You must be familiar with the access secrets for hosted clusters. The hosted cluster namespace contains hosted cluster resources, and the hosted control plane namespace is where the hosted control plane runs. The secret name formats are as follows:
- kubeconfig secret: <hosted-cluster-namespace>-<name>-admin-kubeconfig. For example, clusters-hypershift-demo-admin-kubeconfig.
- kubeadmin password secret: <hosted-cluster-namespace>-<name>-kubeadmin-password. For example, clusters-hypershift-demo-kubeadmin-password.
Procedure
The kubeconfig secret contains a Base64-encoded kubeconfig field, which you can decode and save into a file to use with the following command:

$ oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes

The kubeadmin password secret is also Base64-encoded. You can decode it and use the password to log in to the API server or console of the hosted cluster.
4.1.7.2. Accessing a hosted cluster on AWS by using the kubeadmin credentials
After creating a hosted cluster on Amazon Web Services (AWS), you can access a hosted cluster by getting the kubeconfig
file, access secrets, and the kubeadmin
credentials.
The hosted cluster namespace contains hosted cluster resources and the access secrets. The hosted control plane runs in the hosted control plane namespace.
The secret name formats are as follows:
- The kubeconfig secret: <hosted_cluster_namespace>-<name>-admin-kubeconfig. For example, clusters-hypershift-demo-admin-kubeconfig.
- The kubeadmin password secret: <hosted_cluster_namespace>-<name>-kubeadmin-password. For example, clusters-hypershift-demo-kubeadmin-password.
The kubeadmin
password secret is Base64-encoded and the kubeconfig
secret contains a Base64-encoded kubeconfig
configuration. You must decode the Base64-encoded kubeconfig
configuration and save it into a <hosted_cluster_name>.kubeconfig
file.
Procedure
Use your <hosted_cluster_name>.kubeconfig file that contains the decoded kubeconfig configuration to access the hosted cluster. Enter the following command:

$ oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes

You must decode the kubeadmin password secret to log in to the API server or the console of the hosted cluster.
4.1.7.3. Accessing a hosted cluster on AWS by using the hcp CLI
You can access the hosted cluster by using the hcp
command-line interface (CLI).
Procedure
Generate the kubeconfig file by entering the following command:

$ hcp create kubeconfig --namespace <hosted_cluster_namespace> \
    --name <hosted_cluster_name> > <hosted_cluster_name>.kubeconfig

After you save the kubeconfig file, access the hosted cluster by entering the following command:

$ oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes
4.1.8. Configuring a custom API server certificate in a hosted cluster
To configure a custom certificate for the API server, specify the certificate details in the spec.configuration.apiServer
section of your HostedCluster
configuration.
You can configure a custom certificate during either day-1 or day-2 operations. However, because the service publishing strategy is immutable after you set it during hosted cluster creation, you must know what the hostname is for the Kubernetes API server that you plan to configure.
Prerequisites
You created a Kubernetes secret that contains your custom certificate in the management cluster. The secret contains the following keys:
- tls.crt: The certificate
- tls.key: The private key
- If your HostedCluster configuration includes a service publishing strategy that uses a load balancer, ensure that the Subject Alternative Names (SANs) of the certificate do not conflict with the internal API endpoint (api-int). The internal API endpoint is automatically created and managed by your platform. If you use the same hostname in both the custom certificate and the internal API endpoint, routing conflicts can occur. The only exception to this rule is when you use AWS as the provider with either Private or PublicAndPrivate configurations. In those cases, the SAN conflict is managed by the platform.
- The certificate must be valid for the external API endpoint.
- The validity period of the certificate aligns with your cluster’s expected life cycle.
Procedure
Create a secret with your custom certificate by entering the following command:
$ oc create secret tls sample-hosted-kas-custom-cert \
    --cert=path/to/cert.crt \
    --key=path/to/key.key \
    -n <hosted_cluster_namespace>

Update your HostedCluster configuration with the custom certificate details, as shown in the following example:
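A sketch of the relevant part of the HostedCluster configuration, assuming that the apiServer section follows the standard OpenShift servingCerts schema; <custom_api_hostname> is a placeholder for your external API hostname:

spec:
  configuration:
    apiServer:
      servingCerts:
        namedCertificates:
        - names:
          - <custom_api_hostname>
          servingCertificate:
            name: sample-hosted-kas-custom-cert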
Apply the changes to your HostedCluster configuration by entering the following command:

$ oc apply -f <hosted_cluster_config>.yaml
Verification
- Check the API server pods to ensure that the new certificate is mounted.
- Test the connection to the API server by using the custom domain name.
- Verify the certificate details in your browser or by using tools such as openssl.
4.1.9. Creating a hosted cluster in multiple zones on AWS
You can create a hosted cluster in multiple zones on Amazon Web Services (AWS) by using the hcp
command-line interface (CLI).
Prerequisites
- You created an AWS Identity and Access Management (IAM) role and AWS Security Token Service (STS) credentials.
Procedure
Create a hosted cluster in multiple zones on AWS by running the following command:
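A sketch of the command, assuming the standard hcp create cluster aws flags, including --zones for the availability zones; the numbered notes below map to the values in the order shown:

$ hcp create cluster aws \
    --name <hosted_cluster_name> \
    --node-pool-replicas <node_pool_replica_count> \
    --base-domain <basedomain> \
    --pull-secret <path_to_pull_secret> \
    --role-arn <arn_role> \
    --region <region> \
    --zones us-east-1a,us-east-1b \
    --sts-creds <path_to_sts_credential_file>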
1. Specify the name of your hosted cluster, for instance, example.
2. Specify the node pool replica count, for example, 2.
3. Specify your base domain, for example, example.com.
4. Specify the path to your pull secret, for example, /user/name/pullsecret.
5. Specify the Amazon Resource Name (ARN), for example, arn:aws:iam::820196288204:role/myrole.
6. Specify the AWS region name, for example, us-east-1.
7. Specify availability zones within your AWS region, for example, us-east-1a and us-east-1b.
8. Specify the path to your AWS STS credentials file, for example, /home/user/sts-creds/sts-creds.json.
For each specified zone, the following infrastructure is created:
- Public subnet
- Private subnet
- NAT gateway
- Private route table
A public route table is shared across public subnets.
One NodePool resource is created for each zone. The node pool name is suffixed by the zone name. The private subnet for each zone is set in spec.platform.aws.subnet.id.
4.1.9.1. Creating a hosted cluster by providing AWS STS credentials
When you create a hosted cluster by using the hcp create cluster aws
command, you must provide Amazon Web Services (AWS) account credentials that have permissions to create infrastructure resources for your hosted cluster.
Infrastructure resources include the following examples:
- Virtual Private Cloud (VPC)
- Subnets
- Network address translation (NAT) gateways
You can provide the AWS credentials by using either of the following ways:
- The AWS Security Token Service (STS) credentials
- The AWS cloud provider secret from multicluster engine Operator
Procedure
To create a hosted cluster on AWS by providing AWS STS credentials, enter the following command:
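A sketch of the command, assuming the standard hcp create cluster aws flags; the numbered notes below map to the values in the order shown:

$ hcp create cluster aws \
    --name <hosted_cluster_name> \
    --node-pool-replicas <node_pool_replica_count> \
    --base-domain <basedomain> \
    --pull-secret <path_to_pull_secret> \
    --sts-creds <path_to_sts_credential_file> \
    --region <region> \
    --role-arn <arn_role>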
1. Specify the name of your hosted cluster, for instance, example.
2. Specify the node pool replica count, for example, 2.
3. Specify your base domain, for example, example.com.
4. Specify the path to your pull secret, for example, /user/name/pullsecret.
5. Specify the path to your AWS STS credentials file, for example, /home/user/sts-creds/sts-creds.json.
6. Specify the AWS region name, for example, us-east-1.
7. Specify the Amazon Resource Name (ARN), for example, arn:aws:iam::820196288204:role/myrole.
4.1.10. Running hosted clusters on an ARM64 architecture
By default for hosted control planes on Amazon Web Services (AWS), you use an AMD64 hosted cluster. However, you can enable hosted control planes to run on an ARM64 hosted cluster.
For compatible combinations of node pools and hosted clusters, see the following table:
Hosted cluster | Node pools |
---|---|
AMD64 | AMD64 or ARM64 |
ARM64 | ARM64 or AMD64 |
4.1.10.1. Creating a hosted cluster on an ARM64 OpenShift Container Platform cluster
You can run a hosted cluster on an ARM64 OpenShift Container Platform cluster for Amazon Web Services (AWS) by overriding the default release image with a multi-architecture release image.
If you do not use a multi-architecture release image, the compute nodes in the node pool are not created and reconciliation of the node pool stops until you either use a multi-architecture release image in the hosted cluster or update the NodePool
custom resource based on the release image.
Prerequisites
- You must have an OpenShift Container Platform cluster with a 64-bit ARM infrastructure that is installed on AWS. For more information, see Create an OpenShift Container Platform Cluster: AWS (ARM).
- You must create an AWS Identity and Access Management (IAM) role and AWS Security Token Service (STS) credentials. For more information, see "Creating an AWS IAM role and STS credentials".
Procedure
Create a hosted cluster on an ARM64 OpenShift Container Platform cluster by entering the following command:
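A sketch of the command, assuming the standard hcp create cluster aws flags and a multi-architecture release image passed through --release-image; the numbered notes below map to the values in the order shown:

$ hcp create cluster aws \
    --name <hosted_cluster_name> \
    --node-pool-replicas <node_pool_replica_count> \
    --base-domain <basedomain> \
    --pull-secret <path_to_pull_secret> \
    --sts-creds <path_to_sts_credential_file> \
    --region <region> \
    --release-image quay.io/openshift-release-dev/ocp-release:<ocp_release_image> \
    --role-arn <arn_role>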
1. Specify the name of your hosted cluster, for instance, example.
2. Specify the node pool replica count, for example, 3.
3. Specify your base domain, for example, example.com.
4. Specify the path to your pull secret, for example, /user/name/pullsecret.
5. Specify the path to your AWS STS credentials file, for example, /home/user/sts-creds/sts-creds.json.
6. Specify the AWS region name, for example, us-east-1.
7. Specify the supported OpenShift Container Platform version that you want to use, for example, 4.17.0-multi. If you are using a disconnected environment, replace <ocp_release_image> with the digest image. To extract the OpenShift Container Platform release image digest, see "Extracting the OpenShift Container Platform release image digest".
8. Specify the Amazon Resource Name (ARN), for example, arn:aws:iam::820196288204:role/myrole.
4.1.10.2. Creating an ARM or AMD NodePool object on AWS hosted clusters
You can schedule application workloads (that is, NodePool objects) on 64-bit ARM and AMD architectures from the same hosted control plane. You can define the arch field in the NodePool specification to set the required processor architecture for the NodePool object. The valid values for the arch field are as follows:
- arm64
- amd64
Prerequisites
- You must have a multi-architecture image for the HostedCluster custom resource to use. You can access multi-architecture nightly images.
Procedure
Add an ARM or AMD NodePool object to the hosted cluster on AWS by running the following command:

$ hcp create nodepool aws \
    --cluster-name <hosted_cluster_name> \
    --name <node_pool_name> \
    --node-count <node_pool_replica_count> \
    --arch <architecture>
4.1.11. Creating a private hosted cluster on AWS
After you enable the local-cluster
as the hosting cluster, you can deploy a hosted cluster or a private hosted cluster on Amazon Web Services (AWS).
By default, hosted clusters are publicly accessible through public DNS and the default router for the management cluster.
For private clusters on AWS, all communication with the hosted cluster occurs over AWS PrivateLink.
Prerequisites
- You enabled AWS PrivateLink. For more information, see "Enabling AWS PrivateLink".
- You created an AWS Identity and Access Management (IAM) role and AWS Security Token Service (STS) credentials. For more information, see "Creating an AWS IAM role and STS credentials" and "Identity and Access Management (IAM) permissions".
- You configured a bastion instance on AWS.
Procedure
Create a private hosted cluster on AWS by entering the following command:
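A sketch of the command, assuming the standard hcp create cluster aws flags with --endpoint-access set to Private; the numbered notes below map to the values in the order shown:

$ hcp create cluster aws \
    --name <hosted_cluster_name> \
    --node-pool-replicas <node_pool_replica_count> \
    --base-domain <basedomain> \
    --pull-secret <path_to_pull_secret> \
    --sts-creds <path_to_sts_credential_file> \
    --region <region> \
    --endpoint-access Private \
    --role-arn <arn_role>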
1. Specify the name of your hosted cluster, for instance, example.
2. Specify the node pool replica count, for example, 3.
3. Specify your base domain, for example, example.com.
4. Specify the path to your pull secret, for example, /user/name/pullsecret.
5. Specify the path to your AWS STS credentials file, for example, /home/user/sts-creds/sts-creds.json.
6. Specify the AWS region name, for example, us-east-1.
7. Defines whether a cluster is public or private.
8. Specify the Amazon Resource Name (ARN), for example, arn:aws:iam::820196288204:role/myrole. For more information about ARN roles, see "Identity and Access Management (IAM) permissions".
The following API endpoints for the hosted cluster are accessible through a private DNS zone:
- api.<hosted_cluster_name>.hypershift.local
- *.apps.<hosted_cluster_name>.hypershift.local
4.2. Deploying hosted control planes on bare metal
You can deploy hosted control planes by configuring a cluster to function as a management cluster. The management cluster is the OpenShift Container Platform cluster where the control planes are hosted. In some contexts, the management cluster is also known as the hosting cluster.
The management cluster is not the same thing as the managed cluster. A managed cluster is a cluster that the hub cluster manages.
The hosted control planes feature is enabled by default.
The multicluster engine Operator supports only the default local-cluster
, which is a hub cluster that is managed, and the hub cluster as the management cluster. If you have Red Hat Advanced Cluster Management installed, you can use the managed hub cluster, also known as the local-cluster
, as the management cluster.
A hosted cluster is an OpenShift Container Platform cluster with its API endpoint and control plane that are hosted on the management cluster. The hosted cluster includes the control plane and its corresponding data plane. You can use the multicluster engine Operator console or the hosted control plane command-line interface (hcp
) to create a hosted cluster.
The hosted cluster is automatically imported as a managed cluster. If you want to disable this automatic import feature, see "Disabling the automatic import of hosted clusters into multicluster engine Operator".
4.2.1. Preparing to deploy hosted control planes on bare metal
As you prepare to deploy hosted control planes on bare metal, consider the following information:
- Run the management cluster and workers on the same platform for hosted control planes.
- All bare metal hosts require a manual start with a Discovery Image ISO that the central infrastructure management provides. You can start the hosts manually or through automation by using Cluster-Baremetal-Operator. After each host starts, it runs an Agent process to discover the host details and complete the installation. An Agent custom resource represents each host.
- When you configure storage for hosted control planes, consider the recommended etcd practices. To ensure that you meet the latency requirements, dedicate a fast storage device to all hosted control plane etcd instances that run on each control-plane node. You can use LVM storage to configure a local storage class for hosted etcd pods. For more information, see "Recommended etcd practices" and "Persistent storage using logical volume manager storage".
4.2.1.1. Prerequisites to configure a management cluster
- You need the multicluster engine for Kubernetes Operator 2.2 and later installed on an OpenShift Container Platform cluster. You can install multicluster engine Operator as an Operator from the OpenShift Container Platform OperatorHub.
- The multicluster engine Operator must have at least one managed OpenShift Container Platform cluster. The local-cluster is automatically imported in multicluster engine Operator 2.2 and later. For more information about the local-cluster, see Advanced configuration in the Red Hat Advanced Cluster Management documentation. You can check the status of your hub cluster by running the following command:

  $ oc get managedclusters local-cluster
- You must add the topology.kubernetes.io/zone label to your bare-metal hosts on your management cluster. Ensure that each host has a unique value for topology.kubernetes.io/zone. Otherwise, all of the hosted control plane pods are scheduled on a single node, causing a single point of failure.
- To provision hosted control planes on bare metal, you can use the Agent platform. The Agent platform uses the central infrastructure management service to add worker nodes to a hosted cluster. For more information, see Enabling the central infrastructure management service.
- You need to install the hosted control plane command-line interface.
4.2.1.2. Bare metal firewall, port, and service requirements
You must meet the firewall, port, and service requirements so that ports can communicate between the management cluster, the control plane, and hosted clusters.
Services run on their default ports. However, if you use the NodePort
publishing strategy, services run on the port that is assigned by the NodePort
service.
Use firewall rules, security groups, or other access controls to restrict access to only required sources. Avoid exposing ports publicly unless necessary. For production deployments, use a load balancer to simplify access through a single IP address.
If your hub cluster has a proxy configuration, ensure that it can reach the hosted cluster API endpoint by adding all hosted cluster API endpoints to the noProxy
field on the Proxy
object. For more information, see "Configuring the cluster-wide proxy".
A hosted control plane exposes the following services on bare metal:
APIServer
- The APIServer service runs on port 6443 by default and requires ingress access for communication between the control plane components.
- If you use MetalLB load balancing, allow ingress access to the IP range that is used for load balancer IP addresses.

OAuthServer
- The OAuthServer service runs on port 443 by default when you use the route and ingress to expose the service.
- If you use the NodePort publishing strategy, use a firewall rule for the OAuthServer service.

Konnectivity
- The Konnectivity service runs on port 443 by default when you use the route and ingress to expose the service.
- The Konnectivity agent establishes a reverse tunnel to allow the control plane to access the network for the hosted cluster. The agent uses egress to connect to the Konnectivity server. The server is exposed by using either a route on port 443 or a manually assigned NodePort.
- If the cluster API server address is an internal IP address, allow access from the workload subnets to the IP address on port 6443.
- If the address is an external IP address, allow egress on port 6443 to that external IP address from the nodes.

Ignition
- The Ignition service runs on port 443 by default when you use the route and ingress to expose the service.
- If you use the NodePort publishing strategy, use a firewall rule for the Ignition service.
You do not need the following services on bare metal:
- OVNSbDb
- OIDC
4.2.1.3. Bare metal infrastructure requirements
The Agent platform does not create any infrastructure, but it does have the following requirements for infrastructure:
- Agents: An Agent represents a host that is booted with a discovery image and is ready to be provisioned as an OpenShift Container Platform node.
- DNS: The API and ingress endpoints must be routable.
4.2.2. DNS configurations on bare metal
The API Server for the hosted cluster is exposed as a NodePort
service. A DNS entry must exist for api.<hosted_cluster_name>.<base_domain>
that points to the destination where the API Server can be reached.
The DNS entry can be as simple as a record that points to one of the nodes in the management cluster that is running the hosted control plane. The entry can also point to a load balancer that is deployed to redirect incoming traffic to the ingress pods.
Example DNS configuration
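A minimal BIND-style sketch; the addresses other than the wildcard record that the next paragraph references are illustrative:

api.example.krnl.es.        IN A    192.168.122.20
api-int.example.krnl.es.    IN A    192.168.122.21
*.apps.example.krnl.es.     IN A    192.168.122.23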
In the previous example, *.apps.example.krnl.es. IN A 192.168.122.23
is either a node in the hosted cluster or a load balancer, if one has been configured.
If you are configuring DNS for a disconnected environment on an IPv6 network, the configuration looks like the following example.
Example DNS configuration for an IPv6 network
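A comparable sketch for an IPv6 network, using AAAA records with illustrative addresses:

api.example.krnl.es.        IN AAAA    2620:52:0:1306::5
api-int.example.krnl.es.    IN AAAA    2620:52:0:1306::6
*.apps.example.krnl.es.     IN AAAA    2620:52:0:1306::10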
If you are configuring DNS for a disconnected environment on a dual stack network, be sure to include DNS entries for both IPv4 and IPv6.
Example DNS configuration for a dual stack network
4.2.3. Creating an InfraEnv resource
Before you can create a hosted cluster on bare metal, you need an InfraEnv
resource.
4.2.3.1. Creating an InfraEnv resource and adding nodes
On hosted control planes, the control-plane components run as pods on the management cluster while the data plane runs on dedicated nodes. You can use the Assisted Service to boot your hardware with a discovery ISO that adds your hardware to a hardware inventory. Later, when you create a hosted cluster, the hardware from the inventory is used to provision the data-plane nodes. The object that is used to get the discovery ISO is an InfraEnv
resource. You need to create a BareMetalHost
object that configures the cluster to boot the bare-metal node from the discovery ISO.
Procedure
Create a namespace to store your hardware inventory by entering the following command:
$ oc --kubeconfig ~/<directory_example>/mgmt-kubeconfig create \
    namespace <namespace_example>

where:
- <directory_example>: The name of the directory where the kubeconfig file for the management cluster is saved.
- <namespace_example>: The name of the namespace that you are creating; for example, hardware-inventory.

Example output

namespace/hardware-inventory created
Copy the pull secret of the management cluster by entering the following command:
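One way to do this, as a sketch that copies the pull secret from the openshift-config namespace of the management cluster into the hardware inventory namespace; the exact command in your environment may differ:

$ oc --kubeconfig ~/<directory_example>/mgmt-kubeconfig get secret pull-secret \
    -n openshift-config -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d > pull-secret.json

$ oc --kubeconfig ~/<directory_example>/mgmt-kubeconfig create secret generic pull-secret \
    -n <namespace_example> \
    --from-file=.dockerconfigjson=pull-secret.json \
    --type=kubernetes.io/dockerconfigjson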
where:
- <directory_example>: The name of the directory where the kubeconfig file for the management cluster is saved.
- <namespace_example>: The name of the namespace that you are creating; for example, hardware-inventory.

Example output

secret/pull-secret created
Create the InfraEnv resource by adding the following content to a YAML file:
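A sketch of a minimal InfraEnv resource, assuming the pull secret created in the previous step and an optional SSH public key:

apiVersion: agent-install.openshift.io/v1beta1
kind: InfraEnv
metadata:
  name: hosted
  namespace: <namespace_example>
spec:
  pullSecretRef:
    name: pull-secret           # the secret created in the previous step
  sshAuthorizedKey: <ssh_public_key>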
Apply the changes to the YAML file by entering the following command:

$ oc apply -f <infraenv_config>.yaml

Replace <infraenv_config> with the name of your file.

Verify that the InfraEnv resource was created by entering the following command:

$ oc --kubeconfig ~/<directory_example>/mgmt-kubeconfig \
    -n <namespace_example> get infraenv hosted

Add bare-metal hosts by following one of two methods:
If you do not use the Metal3 Operator, obtain the discovery ISO from the InfraEnv resource and boot the hosts manually by completing the following steps:

Download the live ISO by entering the following commands:
$ oc get infraenv -A

$ oc get infraenv <namespace_example> -o jsonpath='{.status.isoDownloadURL}' -n <namespace_example>

Example output

<iso_url>
Boot the ISO. The node communicates with the Assisted Service and registers as an agent in the same namespace as the
InfraEnv
resource. For each agent, set the installation disk ID and hostname, and approve it to indicate that the agent is ready for use. Enter the following commands:
$ oc -n <hosted_control_plane_namespace> get agents

Example output

NAME                                   CLUSTER   APPROVED   ROLE          STAGE
86f7ac75-4fc4-4b36-8130-40fa12602218                        auto-assign
e57a637f-745b-496e-971d-1abbf03341ba                        auto-assign

$ oc -n <hosted_control_plane_namespace> \
    patch agent 86f7ac75-4fc4-4b36-8130-40fa12602218 \
    -p '{"spec":{"installation_disk_id":"/dev/sda","approved":true,"hostname":"worker-0.example.krnl.es"}}' \
    --type merge

$ oc -n <hosted_control_plane_namespace> \
    patch agent 23d0c614-2caa-43f5-b7d3-0b3564688baa -p \
    '{"spec":{"installation_disk_id":"/dev/sda","approved":true,"hostname":"worker-1.example.krnl.es"}}' \
    --type merge

$ oc -n <hosted_control_plane_namespace> get agents

Example output

NAME                                   CLUSTER   APPROVED   ROLE          STAGE
86f7ac75-4fc4-4b36-8130-40fa12602218             true       auto-assign
e57a637f-745b-496e-971d-1abbf03341ba             true       auto-assign
If you use the Metal3 Operator, you can automate the bare-metal host registration by creating the following objects:
Create a YAML file and add the following content to it:
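A partial sketch that shows the BMC credentials secret and one BareMetalHost object with the InfraEnv label; the role object that the note below mentions is omitted from this sketch, and field values are placeholders:

apiVersion: v1
kind: Secret
metadata:
  name: hosted-worker0-bmc-secret
  namespace: <namespace_example>
type: Opaque
stringData:
  username: <username>
  password: <password>
---
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: hosted-worker0
  namespace: <namespace_example>
  labels:
    infraenvs.agent-install.openshift.io: hosted   # ties the host to the InfraEnv resource
spec:
  online: true
  bootMACAddress: <mac_address>
  bmc:
    address: <bmc_address>
    credentialsName: hosted-worker0-bmc-secret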
where:
- <namespace_example>: Your namespace.
- <password>: The password for your secret.
- <username>: The user name for your secret.
- <bmc_address>: The BMC address for the BareMetalHost object.

Note: When you apply this YAML file, the following objects are created:
- Secrets with credentials for the Baseboard Management Controller (BMCs)
- The BareMetalHost objects
- A role for the HyperShift Operator to be able to manage the agents
Notice how the InfraEnv resource is referenced in the BareMetalHost objects by using the infraenvs.agent-install.openshift.io: hosted custom label. This ensures that the nodes are booted with the generated ISO.
Apply the changes to the YAML file by entering the following command:
$ oc apply -f <bare_metal_host_config>.yaml

Replace <bare_metal_host_config> with the name of your file.
Enter the following command, and then wait a few minutes for the BareMetalHost objects to move to the Provisioning state:

$ oc --kubeconfig ~/<directory_example>/mgmt-kubeconfig -n <namespace_example> get bmh

Example output

NAME             STATE          CONSUMER   ONLINE   ERROR   AGE
hosted-worker0   provisioning              true             106s
hosted-worker1   provisioning              true             106s
hosted-worker2   provisioning              true             106s
Enter the following command to verify that nodes are booting and showing up as agents. This process can take a few minutes, and you might need to enter the command more than once.

$ oc --kubeconfig ~/<directory_example>/mgmt-kubeconfig -n <namespace_example> get agent

Example output

NAME                                   CLUSTER   APPROVED   ROLE          STAGE
aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0201             true       auto-assign
aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0202             true       auto-assign
aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0203             true       auto-assign
4.2.3.2. Creating an InfraEnv resource by using the console
To create an InfraEnv
resource by using the console, complete the following steps.
Procedure
- Open the OpenShift Container Platform web console and log in by entering your administrator credentials. For instructions to open the console, see "Accessing the web console".
- In the console header, ensure that All Clusters is selected.
- Click Infrastructure > Host inventory > Create infrastructure environment.
- After you create the InfraEnv resource, add bare-metal hosts from within the InfraEnv view by clicking Add hosts and selecting from the available options.
4.2.4. Creating a hosted cluster on bare metal
You can create a hosted cluster or import one. When the Assisted Installer is enabled as an add-on to multicluster engine Operator and you create a hosted cluster with the Agent platform, the HyperShift Operator installs the Agent Cluster API provider in the hosted control plane namespace.
4.2.4.1. Creating a hosted cluster by using the CLI
On bare-metal infrastructure, you can create or import a hosted cluster. After you enable the Assisted Installer as an add-on to multicluster engine Operator and you create a hosted cluster with the Agent platform, the HyperShift Operator installs the Agent Cluster API provider in the hosted control plane namespace. The Agent Cluster API provider connects a management cluster that hosts the control plane and a hosted cluster that consists of only the compute nodes.
Prerequisites
- Each hosted cluster must have a cluster-wide unique name. A hosted cluster name cannot be the same as any existing managed cluster. Otherwise, the multicluster engine Operator cannot manage the hosted cluster.
-
Do not use the word clusters as a hosted cluster name.
- You cannot create a hosted cluster in the namespace of a multicluster engine Operator managed cluster.
- For best security and management practices, create a hosted cluster separate from other hosted clusters.
- Verify that you have a default storage class configured for your cluster. Otherwise, you might see pending persistent volume claims (PVCs).
-
By default, the hcp create cluster agent command creates a hosted cluster with configured node ports. The preferred publishing strategy for hosted clusters on bare metal exposes services through a load balancer. If you create a hosted cluster by using the web console or by using Red Hat Advanced Cluster Management, to set a publishing strategy for a service besides the Kubernetes API server, you must manually specify the servicePublishingStrategy
information in theHostedCluster
custom resource. Ensure that you meet the requirements described in "Requirements for hosted control planes on bare metal", which includes requirements related to infrastructure, firewalls, ports, and services. For example, those requirements describe how to add the appropriate zone labels to the bare-metal hosts in your management cluster, as shown in the following example commands:
oc label node [compute-node-1] topology.kubernetes.io/zone=zone1
$ oc label node [compute-node-1] topology.kubernetes.io/zone=zone1
Copy to Clipboard Copied! Toggle word wrap Toggle overflow oc label node [compute-node-2] topology.kubernetes.io/zone=zone2
$ oc label node [compute-node-2] topology.kubernetes.io/zone=zone2
Copy to Clipboard Copied! Toggle word wrap Toggle overflow oc label node [compute-node-3] topology.kubernetes.io/zone=zone3
$ oc label node [compute-node-3] topology.kubernetes.io/zone=zone3
- Ensure that you have added bare-metal nodes to a hardware inventory.
Procedure
Create a namespace by entering the following command:
oc create ns <hosted_cluster_namespace>
$ oc create ns <hosted_cluster_namespace>
Replace <hosted_cluster_namespace> with an identifier for your hosted cluster namespace. Typically, the HyperShift Operator creates this namespace. However, during the hosted cluster creation process on bare-metal infrastructure, a generated Cluster API provider role requires that the namespace already exists.

Create the configuration file for your hosted cluster by entering the following command; a hedged sketch of the command is provided after the callout descriptions that follow:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Specify the name of your hosted cluster, such as
example
. - 2
- Specify the path to your pull secret, such as
/user/name/pullsecret
. - 3
- Specify your hosted control plane namespace, such as
clusters-example
. Ensure that agents are available in this namespace by using theoc get agent -n <hosted_control_plane_namespace>
command. - 4
- Specify your base domain, such as
krnl.es
. - 5
- The
--api-server-address
flag defines the IP address that gets used for the Kubernetes API communication in the hosted cluster. If you do not set the--api-server-address
flag, you must log in to connect to the management cluster. - 6
- Specify the etcd storage class name, such as
lvm-storageclass
. - 7
- Specify the path to your SSH public key. The default file path is
~/.ssh/id_rsa.pub
. - 8
- Specify your hosted cluster namespace.
- 9
- Specify the availability policy for the hosted control plane components. Supported options are
SingleReplica
andHighlyAvailable
. The default value isHighlyAvailable
. - 10
- Specify the supported OpenShift Container Platform version that you want to use, such as
4.19.0-multi
. If you are using a disconnected environment, replace<ocp_release_image>
with the digest image. To extract the OpenShift Container Platform release image digest, see Extracting the OpenShift Container Platform release image digest. - 11
- Specify the node pool replica count, such as
3
. You must specify the replica count as 0 or greater to create the same number of replicas. Otherwise, no node pools are created.
- After the
--ssh-key
flag, specify the path to the SSH key, such asuser/.ssh/id_rsa
.
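The command example itself is not reproduced here. The following is a hedged sketch that assembles the flags described in the preceding callouts; the exact flag names, such as --agent-namespace, --control-plane-availability-policy, and --render, are assumptions drawn from those descriptions, so verify them with hcp create cluster agent --help before use:

$ hcp create cluster agent \
    --name=<hosted_cluster_name> \
    --pull-secret=<path_to_pull_secret> \
    --agent-namespace=<hosted_control_plane_namespace> \
    --base-domain=<basedomain> \
    --api-server-address=api.<hosted_cluster_name>.<basedomain> \
    --etcd-storage-class=<etcd_storage_class> \
    --ssh-key=<path_to_ssh_public_key> \
    --namespace=<hosted_cluster_namespace> \
    --control-plane-availability-policy=HighlyAvailable \
    --release-image=<ocp_release_image> \
    --node-pool-replicas=<node_pool_replica_count> \
    --render > hosted_cluster_config.yaml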
Configure the service publishing strategy. By default, hosted clusters use the
NodePort
service publishing strategy because node ports are always available without additional infrastructure. However, you can configure the service publishing strategy to use a load balancer.-
If you are using the default
NodePort
strategy, configure the DNS to point to the hosted cluster compute nodes, not the management cluster nodes. For more information, see "DNS configurations on bare metal". For production environments, use the
LoadBalancer
strategy because this strategy provides certificate handling and automatic DNS resolution. The following example demonstrates changing the service publishingLoadBalancer
strategy in your hosted cluster configuration file:Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Specify
LoadBalancer
as the API Server type. For all other services, specifyRoute
as the type.
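The configuration example itself is not reproduced above. Based on the callout description, a minimal sketch of the spec.services section in the hosted cluster configuration file might look like the following; other fields of the HostedCluster resource are omitted:

spec:
  services:
  - service: APIServer
    servicePublishingStrategy:
      type: LoadBalancer
  - service: Ignition
    servicePublishingStrategy:
      type: Route
  - service: Konnectivity
    servicePublishingStrategy:
      type: Route
  - service: OAuthServer
    servicePublishingStrategy:
      type: Route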
Apply the changes to the hosted cluster configuration file by entering the following command:
oc apply -f hosted_cluster_config.yaml
$ oc apply -f hosted_cluster_config.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Check for the creation of the hosted cluster, node pools, and pods by entering the following commands:
oc get hostedcluster \ <hosted_cluster_namespace> -n \ <hosted_cluster_namespace> -o \ jsonpath='{.status.conditions[?(@.status=="False")]}' | jq .
$ oc get hostedcluster \ <hosted_cluster_namespace> -n \ <hosted_cluster_namespace> -o \ jsonpath='{.status.conditions[?(@.status=="False")]}' | jq .
Copy to Clipboard Copied! Toggle word wrap Toggle overflow oc get nodepool \ <hosted_cluster_namespace> -n \ <hosted_cluster_namespace> -o \ jsonpath='{.status.conditions[?(@.status=="False")]}' | jq .
$ oc get nodepool \ <hosted_cluster_namespace> -n \ <hosted_cluster_namespace> -o \ jsonpath='{.status.conditions[?(@.status=="False")]}' | jq .
Copy to Clipboard Copied! Toggle word wrap Toggle overflow oc get pods -n <hosted_cluster_namespace>
$ oc get pods -n <hosted_cluster_namespace>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow -
Confirm that the hosted cluster is ready. The status of
Available: True
indicates the readiness of the cluster and the node pool status showsAllMachinesReady: True
. These statuses indicate the healthiness of all cluster Operators. Install MetalLB in the hosted cluster:
Extract the
kubeconfig
file from the hosted cluster and set the environment variable for hosted cluster access by entering the following commands:Copy to Clipboard Copied! Toggle word wrap Toggle overflow export KUBECONFIG="/path/to/kubeconfig-<hosted_cluster_namespace>.yaml"
$ export KUBECONFIG="/path/to/kubeconfig-<hosted_cluster_namespace>.yaml"
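The extraction command is not shown above. One way to produce the kubeconfig file that the export statement references, consistent with the "Verifying hosted cluster creation" section later in this chapter, is to extract the admin-kubeconfig secret; the output path is an example:

$ oc extract -n <hosted_control_plane_namespace> secret/admin-kubeconfig \
    --to=- > /path/to/kubeconfig-<hosted_cluster_namespace>.yaml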
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Install the MetalLB Operator by creating the
install-metallb-operator.yaml
file:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Apply the file by entering the following command:
oc apply -f install-metallb-operator.yaml
$ oc apply -f install-metallb-operator.yaml
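For reference, the contents of install-metallb-operator.yaml are not shown above. A minimal sketch that subscribes to the MetalLB Operator might look like the following; the channel and catalog source names are assumptions, so confirm them in OperatorHub for your cluster:

apiVersion: v1
kind: Namespace
metadata:
  name: metallb-system
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: metallb-operator
  namespace: metallb-system
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: metallb-operator
  namespace: metallb-system
spec:
  channel: stable            # assumed channel name
  name: metallb-operator
  source: redhat-operators   # assumed catalog source
  sourceNamespace: openshift-marketplace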
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Configure the MetalLB IP address pool by creating the
deploy-metallb-ipaddresspool.yaml
file:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Apply the configuration by entering the following command:
oc apply -f deploy-metallb-ipaddresspool.yaml
$ oc apply -f deploy-metallb-ipaddresspool.yaml
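For reference, deploy-metallb-ipaddresspool.yaml is not shown above. A sketch that defines an address pool and an L2 advertisement for it might look like the following; the pool name and IP range are placeholders that you must adapt to your network:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: metallb
  namespace: metallb-system
spec:
  addresses:
  - 10.11.176.71-10.11.176.75   # example range; use unused addresses from your node network
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2advertisement
  namespace: metallb-system
spec:
  ipAddressPools:
  - metallb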
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Verify the installation of MetalLB by checking the Operator status, the IP address pool, and the
L2Advertisement
resource by entering the following commands:oc get pods -n metallb-system
$ oc get pods -n metallb-system
Copy to Clipboard Copied! Toggle word wrap Toggle overflow oc get ipaddresspool -n metallb-system
$ oc get ipaddresspool -n metallb-system
Copy to Clipboard Copied! Toggle word wrap Toggle overflow oc get l2advertisement -n metallb-system
$ oc get l2advertisement -n metallb-system
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Configure the load balancer for ingress:
Create the
ingress-loadbalancer.yaml
file:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Apply the configuration by entering the following command:
oc apply -f ingress-loadbalancer.yaml
$ oc apply -f ingress-loadbalancer.yaml
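For reference, ingress-loadbalancer.yaml is not shown above. A sketch of a LoadBalancer service named metallb-ingress in the openshift-ingress namespace, matching the verification command that follows, might look like this; the router pod selector label is an assumption:

apiVersion: v1
kind: Service
metadata:
  name: metallb-ingress
  namespace: openshift-ingress
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    # Assumed label on the default router pods; verify with: oc get pods -n openshift-ingress --show-labels
    ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default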
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Verify that the load balancer service works as expected by entering the following command:
oc get svc metallb-ingress -n openshift-ingress
$ oc get svc metallb-ingress -n openshift-ingress
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE metallb-ingress LoadBalancer 172.31.127.129 10.11.176.71 80:30961/TCP,443:32090/TCP 16h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE metallb-ingress LoadBalancer 172.31.127.129 10.11.176.71 80:30961/TCP,443:32090/TCP 16h
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Configure the DNS to work with the load balancer:
-
Configure the DNS for the
apps
domain by pointing the*.apps.<hosted_cluster_namespace>.<base_domain>
wildcard DNS record to the load balancer IP address. Verify the DNS resolution by entering the following command:
nslookup console-openshift-console.apps.<hosted_cluster_namespace>.<base_domain> <load_balancer_ip_address>
$ nslookup console-openshift-console.apps.<hosted_cluster_namespace>.<base_domain> <load_balancer_ip_address>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Server: 10.11.176.1 Address: 10.11.176.1#53 Name: console-openshift-console.apps.my-hosted-cluster.sample-base-domain.com Address: 10.11.176.71
Server: 10.11.176.1 Address: 10.11.176.1#53 Name: console-openshift-console.apps.my-hosted-cluster.sample-base-domain.com Address: 10.11.176.71
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
-
Configure the DNS for the
Verification
Check the cluster Operators by entering the following command:
oc get clusteroperators
$ oc get clusteroperators
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Ensure that all Operators show
AVAILABLE: True
,PROGRESSING: False
, andDEGRADED: False
.Check the nodes by entering the following command:
oc get nodes
$ oc get nodes
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Ensure that each node has the
READY
status.Test access to the console by entering the following URL in a web browser:
https://console-openshift-console.apps.<hosted_cluster_namespace>.<base_domain>
https://console-openshift-console.apps.<hosted_cluster_namespace>.<base_domain>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
4.2.4.2. Creating a hosted cluster on bare metal by using the console
To create a hosted cluster by using the console, complete the following steps.
Procedure
- Open the OpenShift Container Platform web console and log in by entering your administrator credentials. For instructions to open the console, see "Accessing the web console".
- In the console header, ensure that All Clusters is selected.
-
Click Infrastructure > Clusters.
- Click Create cluster > Host inventory > Hosted control plane. The Create cluster page is displayed.
On the Create cluster page, follow the prompts to enter details about the cluster, node pools, networking, and automation.
Note
As you enter details about the cluster, you might find the following tips useful:
- If you want to use predefined values to automatically populate fields in the console, you can create a host inventory credential. For more information, see "Creating a credential for an on-premises environment".
- On the Cluster details page, the pull secret is your OpenShift Container Platform pull secret that you use to access OpenShift Container Platform resources. If you selected a host inventory credential, the pull secret is automatically populated.
- On the Node pools page, the namespace contains the hosts for the node pool. If you created a host inventory by using the console, the console creates a dedicated namespace.
-
On the Networking page, you select an API server publishing strategy. The API server for the hosted cluster can be exposed either by using an existing load balancer or as a service of the
NodePort
type. A DNS entry must exist for theapi.<hosted_cluster_name>.<base_domain>
setting that points to the destination where the API server can be reached. This entry can be a record that points to one of the nodes in the management cluster or a record that points to a load balancer that redirects incoming traffic to the Ingress pods.
Review your entries and click Create.
The Hosted cluster view is displayed.
- Monitor the deployment of the hosted cluster in the Hosted cluster view.
- If you do not see information about the hosted cluster, ensure that All Clusters is selected, then click the cluster name.
- Wait until the control plane components are ready. This process can take a few minutes.
- To view the node pool status, scroll to the NodePool section. The process to install the nodes takes about 10 minutes. You can also click Nodes to confirm whether the nodes joined the hosted cluster.
Next steps
- To access the web console, see Accessing the web console.
4.2.4.3. Creating a hosted cluster on bare metal by using a mirror registry
You can use a mirror registry to create a hosted cluster on bare metal by specifying the --image-content-sources
flag in the hcp create cluster
command.
Procedure
Create a YAML file to define Image Content Source Policies (ICSP). See the following example:
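The example file is not reproduced here. A sketch of such a file, with placeholder mirror registry names, might look like the following:

apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: mirror-policy
spec:
  repositoryDigestMirrors:
  - mirrors:
    - registry.example.com:5000/openshift/release          # placeholder mirror registry
    source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
  - mirrors:
    - registry.example.com:5000/openshift/release-images   # placeholder mirror registry
    source: quay.io/openshift-release-dev/ocp-release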
Copy to Clipboard Copied! Toggle word wrap Toggle overflow -
Save the file as
icsp.yaml
. This file contains your mirror registries. To create a hosted cluster by using your mirror registries, run the following command:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Specify the name of your hosted cluster, for instance,
example
. - 2
- Specify the path to your pull secret, for example,
/user/name/pullsecret
. - 3
- Specify your hosted control plane namespace, for example,
clusters-example
. Ensure that agents are available in this namespace by using theoc get agent -n <hosted-control-plane-namespace>
command. - 4
- Specify your base domain, for example,
krnl.es
. - 5
- The
--api-server-address
flag defines the IP address that is used for the Kubernetes API communication in the hosted cluster. If you do not set the--api-server-address
flag, you must log in to connect to the management cluster. - 6
- Specify the
icsp.yaml
file that defines ICSP and your mirror registries. - 7
- Specify the path to your SSH public key. The default file path is
~/.ssh/id_rsa.pub
. - 8
- Specify your hosted cluster namespace.
- 9
- Specify the supported OpenShift Container Platform version that you want to use, for example,
4.17.0-multi
. If you are using a disconnected environment, replace<ocp_release_image>
with the digest image. To extract the OpenShift Container Platform release image digest, see "Extracting the OpenShift Container Platform release image digest".
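The command example is not reproduced above. The following is a hedged assembly of the flags described in the preceding callouts; flag names such as --agent-namespace are assumptions drawn from those descriptions, so verify them with hcp create cluster agent --help:

$ hcp create cluster agent \
    --name=<hosted_cluster_name> \
    --pull-secret=<path_to_pull_secret> \
    --agent-namespace=<hosted_control_plane_namespace> \
    --base-domain=<basedomain> \
    --api-server-address=api.<hosted_cluster_name>.<basedomain> \
    --image-content-sources=icsp.yaml \
    --ssh-key=<path_to_ssh_public_key> \
    --namespace=<hosted_cluster_namespace> \
    --release-image=<ocp_release_image>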
Next steps
- To create credentials that you can reuse when you create a hosted cluster with the console, see Creating a credential for an on-premises environment.
- To access a hosted cluster, see Accessing the hosted cluster.
- To add hosts to the host inventory by using the Discovery Image, see Adding hosts to the host inventory by using the Discovery Image.
- To extract the OpenShift Container Platform release image digest, see Extracting the OpenShift Container Platform release image digest.
4.2.5. Verifying hosted cluster creation
After the deployment process is complete, you can verify that the hosted cluster was created successfully. Follow these steps a few minutes after you create the hosted cluster.
Procedure
Obtain the kubeconfig for your new hosted cluster by entering the extract command:
oc extract -n <hosted-control-plane-namespace> secret/admin-kubeconfig \ --to=- > kubeconfig-<hosted-cluster-name>
$ oc extract -n <hosted-control-plane-namespace> secret/admin-kubeconfig \ --to=- > kubeconfig-<hosted-cluster-name>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Use the kubeconfig to view the cluster Operators of the hosted cluster. Enter the following command:
oc get co --kubeconfig=kubeconfig-<hosted-cluster-name>
$ oc get co --kubeconfig=kubeconfig-<hosted-cluster-name>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE console 4.10.26 True False False 2m38s dns 4.10.26 True False False 2m52s image-registry 4.10.26 True False False 2m8s ingress 4.10.26 True False False 22m
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE console 4.10.26 True False False 2m38s dns 4.10.26 True False False 2m52s image-registry 4.10.26 True False False 2m8s ingress 4.10.26 True False False 22m
Copy to Clipboard Copied! Toggle word wrap Toggle overflow You can also view the running pods on your hosted cluster by entering the following command:
oc get pods -A --kubeconfig=kubeconfig-<hosted-cluster-name>
$ oc get pods -A --kubeconfig=kubeconfig-<hosted-cluster-name>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
4.2.6. Configuring a custom API server certificate in a hosted cluster
To configure a custom certificate for the API server, specify the certificate details in the spec.configuration.apiServer
section of your HostedCluster
configuration.
You can configure a custom certificate during either day-1 or day-2 operations. However, because the service publishing strategy is immutable after you set it during hosted cluster creation, you must know what the hostname is for the Kubernetes API server that you plan to configure.
Prerequisites
You created a Kubernetes secret that contains your custom certificate in the management cluster. The secret contains the following keys:
-
tls.crt
: The certificate -
tls.key
: The private key
-
-
If your
HostedCluster
configuration includes a service publishing strategy that uses a load balancer, ensure that the Subject Alternative Names (SANs) of the certificate do not conflict with the internal API endpoint (api-int
). The internal API endpoint is automatically created and managed by your platform. If you use the same hostname in both the custom certificate and the internal API endpoint, routing conflicts can occur. The only exception to this rule is when you use AWS as the provider with eitherPrivate
orPublicAndPrivate
configurations. In those cases, the SAN conflict is managed by the platform. - The certificate must be valid for the external API endpoint.
- The validity period of the certificate aligns with your cluster’s expected life cycle.
Procedure
Create a secret with your custom certificate by entering the following command:
oc create secret tls sample-hosted-kas-custom-cert \ --cert=path/to/cert.crt \ --key=path/to/key.key \ -n <hosted_cluster_namespace>
$ oc create secret tls sample-hosted-kas-custom-cert \ --cert=path/to/cert.crt \ --key=path/to/key.key \ -n <hosted_cluster_namespace>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Update your
HostedCluster
configuration with the custom certificate details, as shown in the following example:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Apply the changes to your
HostedCluster
configuration by entering the following command:oc apply -f <hosted_cluster_config>.yaml
$ oc apply -f <hosted_cluster_config>.yaml
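For reference, the relevant part of the HostedCluster configuration is not shown above. A sketch of the spec.configuration.apiServer section, reusing the secret created in the previous step and an example hostname, might look like this:

apiVersion: hypershift.openshift.io/v1beta1
kind: HostedCluster
metadata:
  name: <hosted_cluster_name>
  namespace: <hosted_cluster_namespace>
spec:
  configuration:
    apiServer:
      servingCerts:
        namedCertificates:
        - names:
          - api.example.hypershift.lab   # example external API hostname
          servingCertificate:
            name: sample-hosted-kas-custom-cert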
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
- Check the API server pods to ensure that the new certificate is mounted.
- Test the connection to the API server by using the custom domain name.
-
Verify the certificate details in your browser or by using tools such as
openssl
.
4.3. Deploying hosted control planes on OpenShift Virtualization
With hosted control planes and OpenShift Virtualization, you can create OpenShift Container Platform clusters with worker nodes that are hosted by KubeVirt virtual machines. Hosted control planes on OpenShift Virtualization provides several benefits:
- Enhances resource usage by packing hosted control planes and hosted clusters in the same underlying bare-metal infrastructure
- Separates hosted control planes and hosted clusters to provide strong isolation
- Reduces cluster provision time by eliminating the bare-metal node bootstrapping process
- Manages many releases under the same base OpenShift Container Platform cluster
The hosted control planes feature is enabled by default.
You can use the hosted control plane command-line interface, hcp
, to create an OpenShift Container Platform hosted cluster. The hosted cluster is automatically imported as a managed cluster. If you want to disable this automatic import feature, see "Disabling the automatic import of hosted clusters into multicluster engine Operator".
4.3.1. Requirements to deploy hosted control planes on OpenShift Virtualization
As you prepare to deploy hosted control planes on OpenShift Virtualization, consider the following information:
- Run the management cluster on bare metal.
- Each hosted cluster must have a cluster-wide unique name.
-
Do not use
clusters
as a hosted cluster name. - A hosted cluster cannot be created in the namespace of a multicluster engine Operator managed cluster.
- When you configure storage for hosted control planes, consider the recommended etcd practices. To ensure that you meet the latency requirements, dedicate a fast storage device to all hosted control plane etcd instances that run on each control-plane node. You can use LVM storage to configure a local storage class for hosted etcd pods. For more information, see "Recommended etcd practices" and "Persistent storage using Logical Volume Manager storage".
4.3.1.1. Prerequisites
You must meet the following prerequisites to create an OpenShift Container Platform cluster on OpenShift Virtualization:
-
You have administrator access to an OpenShift Container Platform cluster, version 4.14 or later, specified in the
KUBECONFIG
environment variable. The OpenShift Container Platform management cluster must have wildcard DNS routes enabled, as shown in the following command:
oc patch ingresscontroller -n openshift-ingress-operator default \ --type=json \ -p '[{ "op": "add", "path": "/spec/routeAdmission", "value": {wildcardPolicy: "WildcardsAllowed"}}]'
$ oc patch ingresscontroller -n openshift-ingress-operator default \ --type=json \ -p '[{ "op": "add", "path": "/spec/routeAdmission", "value": {wildcardPolicy: "WildcardsAllowed"}}]'
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - The OpenShift Container Platform management cluster has OpenShift Virtualization, version 4.14 or later, installed on it. For more information, see "Installing OpenShift Virtualization using the web console".
- The OpenShift Container Platform management cluster runs on on-premise bare metal.
- The OpenShift Container Platform management cluster is configured with OVN-Kubernetes as the default pod network CNI.
The OpenShift Container Platform management cluster has a default storage class. For more information, see "Postinstallation storage configuration". The following example shows how to set a default storage class:
oc patch storageclass ocs-storagecluster-ceph-rbd \ -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
$ oc patch storageclass ocs-storagecluster-ceph-rbd \ -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
Copy to Clipboard Copied! Toggle word wrap Toggle overflow -
You have a valid pull secret file for the
quay.io/openshift-release-dev
repository. For more information, see "Install OpenShift on any x86_64 platform with user-provisioned infrastructure". - You have installed the hosted control plane command-line interface.
- You have configured a load balancer. For more information, see "Configuring MetalLB".
- For optimal network performance, you are using a network maximum transmission unit (MTU) of 9000 or greater on the OpenShift Container Platform cluster that hosts the KubeVirt virtual machines. If you use a lower MTU setting, network latency and the throughput of the hosted pods are affected. Enable multiqueue on node pools only when the MTU is 9000 or greater.
The multicluster engine Operator has at least one managed OpenShift Container Platform cluster. The
local-cluster
is automatically imported. For more information about thelocal-cluster
, see "Advanced configuration" in the multicluster engine Operator documentation. You can check the status of your hub cluster by running the following command:oc get managedclusters local-cluster
$ oc get managedclusters local-cluster
Copy to Clipboard Copied! Toggle word wrap Toggle overflow -
On the OpenShift Container Platform cluster that hosts the OpenShift Virtualization virtual machines, you are using a
ReadWriteMany
(RWX) storage class so that live migration can be enabled.
4.3.1.2. Firewall and port requirements
Ensure that you meet the firewall and port requirements so that ports can communicate between the management cluster, the control plane, and hosted clusters:
The
kube-apiserver
service runs on port 6443 by default and requires ingress access for communication between the control plane components.-
If you use the
NodePort
publishing strategy, ensure that the node port that is assigned to thekube-apiserver
service is exposed. - If you use MetalLB load balancing, allow ingress access to the IP range that is used for load balancer IP addresses.
-
If you use the
-
If you use the
NodePort
publishing strategy, use a firewall rule for theignition-server
andOauth-server
settings. The
konnectivity
agent, which establishes a reverse tunnel to allow bi-directional communication on the hosted cluster, requires egress access to the cluster API server address on port 6443. With that egress access, the agent can reach thekube-apiserver
service.- If the cluster API server address is an internal IP address, allow access from the workload subnets to the IP address on port 6443.
- If the address is an external IP address, allow egress on port 6443 to that external IP address from the nodes.
- If you change the default port of 6443, adjust the rules to reflect that change.
- Ensure that you open any ports that are required by the workloads that run in the clusters.
- Use firewall rules, security groups, or other access controls to restrict access to only required sources. Avoid exposing ports publicly unless necessary.
- For production deployments, use a load balancer to simplify access through a single IP address.
4.3.2. Live migration for compute nodes
While the management cluster for hosted cluster virtual machines (VMs) is undergoing updates or maintenance, the hosted cluster VMs can be automatically live migrated to prevent disrupting hosted cluster workloads. As a result, the management cluster can be updated without affecting the availability and operation of the KubeVirt platform hosted clusters.
The live migration of KubeVirt VMs is enabled by default provided that the VMs use ReadWriteMany
(RWX) storage for both the root volume and the storage classes that are mapped to the kubevirt-csi
CSI provider.
You can verify that the VMs in a node pool are capable of live migration by checking the KubeVirtNodesLiveMigratable
condition in the status
section of a NodePool
object.
In the following example, the VMs cannot be live migrated because RWX storage is not used.
Example configuration where VMs cannot be live migrated
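The example resource is not reproduced here. A sketch of the relevant NodePool status condition, with illustrative reason and message values, might look like this:

status:
  conditions:
  - type: KubeVirtNodesLiveMigratable
    status: "False"
    reason: RootVolumeNotRWX                                      # illustrative value
    message: The root volume does not use ReadWriteMany storage   # illustrative value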
In the next example, the VMs meet the requirements to be live migrated.
Example configuration where VMs can be live migrated
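Similarly, a sketch of the condition when the requirements are met; the reason value is illustrative:

status:
  conditions:
  - type: KubeVirtNodesLiveMigratable
    status: "True"
    reason: LiveMigratable   # illustrative value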
While live migration can protect VMs from disruption in normal circumstances, events such as infrastructure node failure can result in a hard restart of any VMs that are hosted on the failed node. For live migration to be successful, the source node that a VM is hosted on must be working correctly.
When the VMs in a node pool cannot be live migrated, workload disruption might occur on the hosted cluster during maintenance on the management cluster. By default, the hosted control planes controllers try to drain the workloads that are hosted on KubeVirt VMs that cannot be live migrated before the VMs are stopped. Draining the hosted cluster nodes before stopping the VMs allows pod disruption budgets to protect workload availability within the hosted cluster.
4.3.3. Creating a hosted cluster with the KubeVirt platform
With OpenShift Container Platform 4.14 and later, you can create a cluster with KubeVirt, including by using external infrastructure.
4.3.3.1. Creating a hosted cluster with the KubeVirt platform by using the CLI
To create a hosted cluster, you can use the hosted control plane command-line interface (CLI), hcp
.
Procedure
Create a hosted cluster with the KubeVirt platform by entering the following command:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Specify the name of your hosted cluster, for example,
my-hosted-cluster
. - 2
- Specify the node pool replica count, for example,
3
. You must specify the replica count as0
or greater to create the same number of replicas. Otherwise, no node pools are created. - 3
- Specify the path to your pull secret, for example,
/user/name/pullsecret
. - 4
- Specify a value for memory, for example,
6Gi
. - 5
- Specify a value for CPU, for example,
2
. - 6
- Specify the etcd storage class name, for example,
lvm-storageclass
.
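The command example is not reproduced here. A hedged sketch that assembles the flags described in the callouts follows; mapping the CPU value to the --cores flag is an assumption, so verify flag names with hcp create cluster kubevirt --help:

$ hcp create cluster kubevirt \
    --name=<hosted_cluster_name> \
    --node-pool-replicas=<node_pool_replica_count> \
    --pull-secret=<path_to_pull_secret> \
    --memory=6Gi \
    --cores=2 \
    --etcd-storage-class=<etcd_storage_class>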
Note
You can use the
--release-image
flag to set up the hosted cluster with a specific OpenShift Container Platform release.

A default node pool is created for the cluster with two virtual machine worker replicas according to the
--node-pool-replicas
flag.

After a few moments, verify that the hosted control plane pods are running by entering the following command:
oc -n clusters-<hosted-cluster-name> get pods
$ oc -n clusters-<hosted-cluster-name> get pods
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow A hosted cluster that has worker nodes that are backed by KubeVirt virtual machines typically takes 10-15 minutes to be fully provisioned.
Verification
To check the status of the hosted cluster, see the corresponding
HostedCluster
resource by entering the following command:oc get --namespace clusters hostedclusters
$ oc get --namespace clusters hostedclusters
Copy to Clipboard Copied! Toggle word wrap Toggle overflow See the following example output, which illustrates a fully provisioned
HostedCluster
object:NAMESPACE NAME VERSION KUBECONFIG PROGRESS AVAILABLE PROGRESSING MESSAGE clusters my-hosted-cluster <4.x.0> example-admin-kubeconfig Completed True False The hosted control plane is available
NAMESPACE NAME VERSION KUBECONFIG PROGRESS AVAILABLE PROGRESSING MESSAGE clusters my-hosted-cluster <4.x.0> example-admin-kubeconfig Completed True False The hosted control plane is available
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Replace
<4.x.0>
with the supported OpenShift Container Platform version that you want to use.
4.3.3.2. Creating a hosted cluster with the KubeVirt platform by using external infrastructure
By default, the HyperShift Operator hosts both the control plane pods of the hosted cluster and the KubeVirt worker VMs within the same cluster. With the external infrastructure feature, you can place the worker node VMs on a separate cluster from the control plane pods.
- The management cluster is the OpenShift Container Platform cluster that runs the HyperShift Operator and hosts the control plane pods for a hosted cluster.
- The infrastructure cluster is the OpenShift Container Platform cluster that runs the KubeVirt worker VMs for a hosted cluster.
- By default, the management cluster also acts as the infrastructure cluster that hosts VMs. However, for external infrastructure, the management and infrastructure clusters are different.
Prerequisites
- You must have a namespace on the external infrastructure cluster for the KubeVirt nodes to be hosted in.
-
You must have a
kubeconfig
file for the external infrastructure cluster.
Procedure
You can create a hosted cluster by using the hcp
command-line interface.
To place the KubeVirt worker VMs on the infrastructure cluster, use the
--infra-kubeconfig-file
and--infra-namespace
arguments, as shown in the following example:Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Specify the name of your hosted cluster, for example,
my-hosted-cluster
. - 2
- Specify the worker count, for example,
2
. - 3
- Specify the path to your pull secret, for example,
/user/name/pullsecret
. - 4
- Specify a value for memory, for example,
6Gi
. - 5
- Specify a value for CPU, for example,
2
. - 6
- Specify the infrastructure namespace, for example,
clusters-example
. - 7
- Specify the path to your
kubeconfig
file for the infrastructure cluster, for example,/user/name/external-infra-kubeconfig
.
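For completeness, a hedged sketch of the full command that the callouts describe; verify flag spellings with hcp create cluster kubevirt --help:

$ hcp create cluster kubevirt \
    --name=<hosted_cluster_name> \
    --node-pool-replicas=2 \
    --pull-secret=<path_to_pull_secret> \
    --memory=6Gi \
    --cores=2 \
    --infra-namespace=clusters-example \
    --infra-kubeconfig-file=/user/name/external-infra-kubeconfig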
After you enter that command, the control plane pods are hosted on the management cluster that the HyperShift Operator runs on, and the KubeVirt VMs are hosted on a separate infrastructure cluster.
4.3.3.3. Creating a hosted cluster by using the console
To create a hosted cluster with the KubeVirt platform by using the console, complete the following steps.
Procedure
- Open the OpenShift Container Platform web console and log in by entering your administrator credentials.
- In the console header, ensure that All Clusters is selected.
- Click Infrastructure > Clusters.
- Click Create cluster > Red Hat OpenShift Virtualization > Hosted.
On the Create cluster page, follow the prompts to enter details about the cluster and node pools.
Note
- If you want to use predefined values to automatically populate fields in the console, you can create an OpenShift Virtualization credential. For more information, see Creating a credential for an on-premises environment.
- On the Cluster details page, the pull secret is your OpenShift Container Platform pull secret that you use to access OpenShift Container Platform resources. If you selected an OpenShift Virtualization credential, the pull secret is automatically populated.
Review your entries and click Create.
The Hosted cluster view is displayed.
Verification
- Monitor the deployment of the hosted cluster in the Hosted cluster view. If you do not see information about the hosted cluster, ensure that All Clusters is selected, and click the cluster name.
- Wait until the control plane components are ready. This process can take a few minutes.
- To view the node pool status, scroll to the NodePool section. The process to install the nodes takes about 10 minutes. You can also click Nodes to confirm whether the nodes joined the hosted cluster.
4.3.4. Configuring the default ingress and DNS for hosted control planes on OpenShift Virtualization
Every OpenShift Container Platform cluster includes a default application Ingress Controller, which must have a wildcard DNS record associated with it. By default, hosted clusters that are created by using the HyperShift KubeVirt provider automatically become a subdomain of the OpenShift Container Platform cluster that the KubeVirt virtual machines run on.
For example, your OpenShift Container Platform cluster might have the following default ingress DNS entry:
*.apps.mgmt-cluster.example.com
*.apps.mgmt-cluster.example.com
As a result, a KubeVirt hosted cluster that is named guest
and that runs on that underlying OpenShift Container Platform cluster has the following default ingress:
*.apps.guest.apps.mgmt-cluster.example.com
*.apps.guest.apps.mgmt-cluster.example.com
Procedure
For the default ingress DNS to work properly, the cluster that hosts the KubeVirt virtual machines must allow wildcard DNS routes.
You can configure this behavior by entering the following command:
oc patch ingresscontroller -n openshift-ingress-operator default \ --type=json \ -p '[{ "op": "add", "path": "/spec/routeAdmission", "value": {wildcardPolicy: "WildcardsAllowed"}}]'
$ oc patch ingresscontroller -n openshift-ingress-operator default \ --type=json \ -p '[{ "op": "add", "path": "/spec/routeAdmission", "value": {wildcardPolicy: "WildcardsAllowed"}}]'
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
When you use the default hosted cluster ingress, connectivity is limited to HTTPS traffic over port 443. Plain HTTP traffic over port 80 is rejected. This limitation applies to only the default ingress behavior.
4.3.5. Customizing ingress and DNS behavior
If you do not want to use the default ingress and DNS behavior, you can configure a KubeVirt hosted cluster with a unique base domain at creation time. This option requires manual configuration steps during creation and involves three main steps: cluster creation, load balancer creation, and wildcard DNS configuration.
4.3.5.1. Deploying a hosted cluster that specifies the base domain
To create a hosted cluster that specifies a base domain, complete the following steps.
Procedure
Enter the following command:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Specify the name of your hosted cluster.
- 2
- Specify the worker count, for example,
2
. - 3
- Specify the path to your pull secret, for example,
/user/name/pullsecret
. - 4
- Specify a value for memory, for example,
6Gi
. - 5
- Specify a value for CPU, for example,
2
. - 6
- Specify the base domain, for example,
hypershift.lab
.
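The command example is not reproduced here. A hedged sketch that assembles the flags described in the callouts; verify flag spellings with hcp create cluster kubevirt --help:

$ hcp create cluster kubevirt \
    --name=<hosted_cluster_name> \
    --node-pool-replicas=2 \
    --pull-secret=<path_to_pull_secret> \
    --memory=6Gi \
    --cores=2 \
    --base-domain=hypershift.lab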
As a result, the hosted cluster has an ingress wildcard that is configured for the cluster name and the base domain, for example,
.apps.example.hypershift.lab
. The hosted cluster remains inPartial
status because after you create a hosted cluster with unique base domain, you must configure the required DNS records and load balancer.
Verification
View the status of your hosted cluster by entering the following command:
oc get --namespace clusters hostedclusters
$ oc get --namespace clusters hostedclusters
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME VERSION KUBECONFIG PROGRESS AVAILABLE PROGRESSING MESSAGE example example-admin-kubeconfig Partial True False The hosted control plane is available
NAME VERSION KUBECONFIG PROGRESS AVAILABLE PROGRESSING MESSAGE example example-admin-kubeconfig Partial True False The hosted control plane is available
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Access the cluster by entering the following commands:
hcp create kubeconfig --name <hosted_cluster_name> \ > <hosted_cluster_name>-kubeconfig
$ hcp create kubeconfig --name <hosted_cluster_name> \ > <hosted_cluster_name>-kubeconfig
Copy to Clipboard Copied! Toggle word wrap Toggle overflow oc --kubeconfig <hosted_cluster_name>-kubeconfig get co
$ oc --kubeconfig <hosted_cluster_name>-kubeconfig get co
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE console <4.x.0> False False False 30m RouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.example.hypershift.lab): Get "https://console-openshift-console.apps.example.hypershift.lab": dial tcp: lookup console-openshift-console.apps.example.hypershift.lab on 172.31.0.10:53: no such host ingress <4.x.0> True False True 28m The "default" ingress controller reports Degraded=True: DegradedConditions: One or more other status conditions indicate a degraded state: CanaryChecksSucceeding=False (CanaryChecksRepetitiveFailures: Canary route checks for the default ingress controller are failing)
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE console <4.x.0> False False False 30m RouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.example.hypershift.lab): Get "https://console-openshift-console.apps.example.hypershift.lab": dial tcp: lookup console-openshift-console.apps.example.hypershift.lab on 172.31.0.10:53: no such host ingress <4.x.0> True False True 28m The "default" ingress controller reports Degraded=True: DegradedConditions: One or more other status conditions indicate a degraded state: CanaryChecksSucceeding=False (CanaryChecksRepetitiveFailures: Canary route checks for the default ingress controller are failing)
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Replace
<4.x.0>
with the supported OpenShift Container Platform version that you want to use.
Next steps
To fix the errors in the output, complete the steps in "Setting up the load balancer" and "Setting up a wildcard DNS".
If your hosted cluster is on bare metal, you might need MetalLB to set up load balancer services. For more information, see "Configuring MetalLB".
4.3.5.2. Setting up the load balancer
Set up the load balancer service that routes ingress traffic to the KubeVirt VMs and assigns a wildcard DNS entry to the load balancer IP address.
Procedure
A
NodePort
service that exposes the hosted cluster ingress already exists. You can export the node ports and create the load balancer service that targets those ports.Get the HTTP node port by entering the following command:
oc --kubeconfig <hosted_cluster_name>-kubeconfig get services \ -n openshift-ingress router-nodeport-default \ -o jsonpath='{.spec.ports[?(@.name=="http")].nodePort}'
$ oc --kubeconfig <hosted_cluster_name>-kubeconfig get services \ -n openshift-ingress router-nodeport-default \ -o jsonpath='{.spec.ports[?(@.name=="http")].nodePort}'
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Note the HTTP node port value to use in the next step.
Get the HTTPS node port by entering the following command:
oc --kubeconfig <hosted_cluster_name>-kubeconfig get services \ -n openshift-ingress router-nodeport-default \ -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}'
$ oc --kubeconfig <hosted_cluster_name>-kubeconfig get services \ -n openshift-ingress router-nodeport-default \ -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}'
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Note the HTTPS node port value to use in the next step.
Enter the following information in a YAML file:
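The YAML content is not reproduced here. A sketch of a LoadBalancer service that targets the node ports noted in the previous steps might look like the following; the service name matches the one queried in "Setting up a wildcard DNS", and the selector label for the KubeVirt VM pods is an assumption:

apiVersion: v1
kind: Service
metadata:
  name: <hosted_cluster_name>-apps
  namespace: clusters-<hosted_cluster_name>
spec:
  type: LoadBalancer
  ports:
  - name: http-80
    port: 80
    protocol: TCP
    targetPort: <http_node_port>    # HTTP node port from the earlier step
  - name: https-443
    port: 443
    protocol: TCP
    targetPort: <https_node_port>   # HTTPS node port from the earlier step
  selector:
    kubevirt.io: virt-launcher      # assumed label on the KubeVirt VM pods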
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create the load balancer service by running the following command:
oc create -f <file_name>.yaml
$ oc create -f <file_name>.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
4.3.5.3. Setting up a wildcard DNS
Set up a wildcard DNS record or CNAME that references the external IP of the load balancer service.
Procedure
Get the external IP address by entering the following command:
oc -n clusters-<hosted_cluster_name> get service <hosted-cluster-name>-apps \ -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
$ oc -n clusters-<hosted_cluster_name> get service <hosted-cluster-name>-apps \ -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
192.168.20.30
192.168.20.30
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Configure a wildcard DNS entry that references the external IP address. View the following example DNS entry:
*.apps.<hosted_cluster_name\>.<base_domain\>.
*.apps.<hosted_cluster_name\>.<base_domain\>.
Copy to Clipboard Copied! Toggle word wrap Toggle overflow The DNS entry must be able to route inside and outside of the cluster.
DNS resolutions example
dig +short test.apps.example.hypershift.lab 192.168.20.30
dig +short test.apps.example.hypershift.lab 192.168.20.30
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
Check that hosted cluster status has moved from
Partial
toCompleted
by entering the following command:oc get --namespace clusters hostedclusters
$ oc get --namespace clusters hostedclusters
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME VERSION KUBECONFIG PROGRESS AVAILABLE PROGRESSING MESSAGE example <4.x.0> example-admin-kubeconfig Completed True False The hosted control plane is available
NAME VERSION KUBECONFIG PROGRESS AVAILABLE PROGRESSING MESSAGE example <4.x.0> example-admin-kubeconfig Completed True False The hosted control plane is available
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Replace
<4.x.0>
with the supported OpenShift Container Platform version that you want to use.
4.3.6. Configuring MetalLB
You must install the MetalLB Operator before you configure MetalLB.
Procedure
Complete the following steps to configure MetalLB on your hosted cluster:
Create a
MetalLB
resource by saving the following sample YAML content in theconfigure-metallb.yaml
file:apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system
apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Apply the YAML content by entering the following command:
oc apply -f configure-metallb.yaml
$ oc apply -f configure-metallb.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
metallb.metallb.io/metallb created
metallb.metallb.io/metallb created
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create a
IPAddressPool
resource by saving the following sample YAML content in thecreate-ip-address-pool.yaml
file:Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Create an address pool with an available range of IP addresses within the node network. Replace the IP address range with an unused pool of available IP addresses in your network.
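The sample YAML content is not reproduced above. A sketch, using a placeholder address range as described in the callout and the resource name shown in the example output that follows:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: metallb
  namespace: metallb-system
spec:
  addresses:
  - 192.168.216.32-192.168.216.122   # example range; replace with unused addresses from your node network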
Apply the YAML content by entering the following command:
oc apply -f create-ip-address-pool.yaml
$ oc apply -f create-ip-address-pool.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
ipaddresspool.metallb.io/metallb created
ipaddresspool.metallb.io/metallb created
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create a
L2Advertisement
resource by saving the following sample YAML content in thel2advertisement.yaml
file:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Apply the YAML content by entering the following command:
oc apply -f l2advertisement.yaml
$ oc apply -f l2advertisement.yaml
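For reference, l2advertisement.yaml is not shown above. A sketch that advertises the pool created in the previous step; the resource name matches the example output that follows:

apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: metallb
  namespace: metallb-system
spec:
  ipAddressPools:
  - metallb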
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
l2advertisement.metallb.io/metallb created
l2advertisement.metallb.io/metallb created
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
4.3.7. Configuring additional networks, guaranteed CPUs, and VM scheduling for node pools
If you need to configure additional networks for node pools, request a guaranteed CPU access for Virtual Machines (VMs), or manage scheduling of KubeVirt VMs, see the following procedures.
4.3.7.1. Adding multiple networks to a node pool
By default, nodes generated by a node pool are attached to the pod network. You can attach additional networks to the nodes by using Multus and NetworkAttachmentDefinitions.
Procedure
To add multiple networks to nodes, use the
--additional-network
argument by running the following command:Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Specify the name of your hosted cluster, for example,
my-hosted-cluster
. - 2
- Specify your worker node count, for example,
2
. - 3
- Specify the path to your pull secret, for example,
/user/name/pullsecret
. - 4
- Specify the memory value, for example,
8Gi
. - 5
- Specify the CPU value, for example,
2
. - 6
- Set the value of the
–additional-network
argument toname:<namespace/name>
. Replace<namespace/name>
with a namespace and name of your NetworkAttachmentDefinitions.
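For completeness, a hedged sketch of the full command that the callouts describe; verify flag spellings with hcp create cluster kubevirt --help:

$ hcp create cluster kubevirt \
    --name=<hosted_cluster_name> \
    --node-pool-replicas=2 \
    --pull-secret=<path_to_pull_secret> \
    --memory=8Gi \
    --cores=2 \
    --additional-network name:<namespace/name>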
4.3.7.1.1. Using an additional network as default
You can add your additional network as a default network for the nodes by disabling the default pod network.
Procedure
To add an additional network as default to your nodes, run the following command:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Specify the name of your hosted cluster, for example,
my-hosted-cluster
. - 2
- Specify your worker node count, for example,
2
. - 3
- Specify the path to your pull secret, for example,
/user/name/pullsecret
. - 4
- Specify the memory value, for example,
8Gi
. - 5
- Specify the CPU value, for example,
2
. - 6
- The
--attach-default-network false
argument disables the default pod network. - 7
- Specify the additional network that you want to add to your nodes, for example,
name:my-namespace/my-network
.
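For completeness, a hedged sketch of the full command that the callouts describe; verify flag spellings with hcp create cluster kubevirt --help:

$ hcp create cluster kubevirt \
    --name=<hosted_cluster_name> \
    --node-pool-replicas=2 \
    --pull-secret=<path_to_pull_secret> \
    --memory=8Gi \
    --cores=2 \
    --attach-default-network false \
    --additional-network name:my-namespace/my-network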
4.3.7.2. Requesting guaranteed CPU resources
By default, KubeVirt VMs might share their CPUs with other workloads on a node. This might impact performance of a VM. To avoid the performance impact, you can request a guaranteed CPU access for VMs.
Procedure
To request guaranteed CPU resources, set the
--qos-class
argument toGuaranteed
by running the following command:Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Specify the name of your hosted cluster, for example,
my-hosted-cluster
. - 2
- Specify your worker node count, for example,
2
. - 3
- Specify the path to your pull secret, for example,
/user/name/pullsecret
. - 4
- Specify the memory value, for example,
8Gi
. - 5
- Specify the CPU value, for example,
2
. - 6
- The
--qos-class Guaranteed
argument guarantees that the specified number of CPU resources are assigned to VMs.
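For completeness, a hedged sketch of the full command that the callouts describe; verify flag spellings with hcp create cluster kubevirt --help:

$ hcp create cluster kubevirt \
    --name=<hosted_cluster_name> \
    --node-pool-replicas=2 \
    --pull-secret=<path_to_pull_secret> \
    --memory=8Gi \
    --cores=2 \
    --qos-class Guaranteed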
4.3.7.3. Scheduling KubeVirt VMs on a set of nodes
By default, KubeVirt VMs created by a node pool are scheduled to any available nodes. You can schedule KubeVirt VMs on a specific set of nodes that has enough capacity to run the VM.
Procedure
To schedule KubeVirt VMs within a node pool on a specific set of nodes, use the
--vm-node-selector
argument by running the following command:Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Specify the name of your hosted cluster, for example,
my-hosted-cluster
. - 2
- Specify your worker node count, for example,
2
. - 3
- Specify the path to your pull secret, for example,
/user/name/pullsecret
. - 4
- Specify the memory value, for example,
8Gi
. - 5
- Specify the CPU value, for example,
2
. - 6
- The
--vm-node-selector
flag defines a specific set of nodes that contains the key-value pairs. Replace<label_key>
with the keys of your labels and replace<label_value>
with the values of your labels.
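For completeness, a hedged sketch of the full command that the callouts describe; verify flag spellings with hcp create cluster kubevirt --help:

$ hcp create cluster kubevirt \
    --name=<hosted_cluster_name> \
    --node-pool-replicas=2 \
    --pull-secret=<path_to_pull_secret> \
    --memory=8Gi \
    --cores=2 \
    --vm-node-selector <label_key>=<label_value>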
4.3.8. Scaling a node pool
You can manually scale a node pool by using the oc scale
command.
Procedure
Run the following command:
$ NODEPOOL_NAME=${CLUSTER_NAME}-work
$ NODEPOOL_REPLICAS=5
$ oc scale nodepool/$NODEPOOL_NAME --namespace clusters \
    --replicas=$NODEPOOL_REPLICAS
Copy to Clipboard Copied! Toggle word wrap Toggle overflow After a few moments, enter the following command to see the status of the node pool:
oc --kubeconfig $CLUSTER_NAME-kubeconfig get nodes
$ oc --kubeconfig $CLUSTER_NAME-kubeconfig get nodes
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
4.3.8.1. Adding node pools
You can create node pools for a hosted cluster by specifying a name, number of replicas, and any additional information, such as memory and CPU requirements.
Procedure
To create a node pool, enter the following information. In this example, the node pool has more CPUs assigned to the VMs:
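The example is not reproduced here. A hedged sketch of a command that creates an additional node pool with more CPU cores follows; the node pool name and count match the example output later in this procedure, but the hcp create nodepool kubevirt flag names, such as --node-count, are assumptions, so verify them with --help:

$ export NODEPOOL_NAME=${CLUSTER_NAME}-extra-cpu
$ hcp create nodepool kubevirt \
    --cluster-name $CLUSTER_NAME \
    --name $NODEPOOL_NAME \
    --node-count 2 \
    --memory 6Gi \
    --cores 4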
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Check the status of the node pool by listing
nodepool
resources in theclusters
namespace:oc get nodepools --namespace clusters
$ oc get nodepools --namespace clusters
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME CLUSTER DESIRED NODES CURRENT NODES AUTOSCALING AUTOREPAIR VERSION UPDATINGVERSION UPDATINGCONFIG MESSAGE example example 5 5 False False <4.x.0> example-extra-cpu example 2 False False True True Minimum availability requires 2 replicas, current 0 available
NAME CLUSTER DESIRED NODES CURRENT NODES AUTOSCALING AUTOREPAIR VERSION UPDATINGVERSION UPDATINGCONFIG MESSAGE example example 5 5 False False <4.x.0> example-extra-cpu example 2 False False True True Minimum availability requires 2 replicas, current 0 available
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Replace
<4.x.0>
with the supported OpenShift Container Platform version that you want to use.
Verification
After some time, you can check the status of the node pool by entering the following command:
oc --kubeconfig $CLUSTER_NAME-kubeconfig get nodes
$ oc --kubeconfig $CLUSTER_NAME-kubeconfig get nodes
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Verify that the node pool is in the status that you expect by entering this command:
oc get nodepools --namespace clusters
$ oc get nodepools --namespace clusters
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME CLUSTER DESIRED NODES CURRENT NODES AUTOSCALING AUTOREPAIR VERSION UPDATINGVERSION UPDATINGCONFIG MESSAGE example example 5 5 False False <4.x.0> example-extra-cpu example 2 2 False False <4.x.0>
NAME CLUSTER DESIRED NODES CURRENT NODES AUTOSCALING AUTOREPAIR VERSION UPDATINGVERSION UPDATINGCONFIG MESSAGE example example 5 5 False False <4.x.0> example-extra-cpu example 2 2 False False <4.x.0>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Replace
<4.x.0>
with the supported OpenShift Container Platform version that you want to use.
4.3.9. Verifying hosted cluster creation on OpenShift Virtualization
To verify that your hosted cluster was successfully created, complete the following steps.
Procedure
Verify that the
HostedCluster
resource transitioned to thecompleted
state by entering the following command:oc get --namespace clusters hostedclusters <hosted_cluster_name>
$ oc get --namespace clusters hostedclusters <hosted_cluster_name>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAMESPACE NAME VERSION KUBECONFIG PROGRESS AVAILABLE PROGRESSING MESSAGE clusters example 4.12.2 example-admin-kubeconfig Completed True False The hosted control plane is available
NAMESPACE NAME VERSION KUBECONFIG PROGRESS AVAILABLE PROGRESSING MESSAGE clusters example 4.12.2 example-admin-kubeconfig Completed True False The hosted control plane is available
Verify that all the cluster operators in the hosted cluster are online by entering the following commands:
$ hcp create kubeconfig --name <hosted_cluster_name> \
    > <hosted_cluster_name>-kubeconfig
$ oc get co --kubeconfig=<hosted_cluster_name>-kubeconfig
4.3.10. Configuring a custom API server certificate in a hosted cluster
To configure a custom certificate for the API server, specify the certificate details in the spec.configuration.apiServer
section of your HostedCluster
configuration.
You can configure a custom certificate during either day-1 or day-2 operations. However, because the service publishing strategy is immutable after you set it during hosted cluster creation, you must know what the hostname is for the Kubernetes API server that you plan to configure.
Prerequisites
You created a Kubernetes secret that contains your custom certificate in the management cluster. The secret contains the following keys:
- tls.crt: The certificate
- tls.key: The private key
- If your HostedCluster configuration includes a service publishing strategy that uses a load balancer, ensure that the Subject Alternative Names (SANs) of the certificate do not conflict with the internal API endpoint (api-int). The internal API endpoint is automatically created and managed by your platform. If you use the same hostname in both the custom certificate and the internal API endpoint, routing conflicts can occur. The only exception to this rule is when you use AWS as the provider with either Private or PublicAndPrivate configurations. In those cases, the SAN conflict is managed by the platform.
- The certificate must be valid for the external API endpoint.
- The validity period of the certificate aligns with your cluster’s expected life cycle.
Procedure
Create a secret with your custom certificate by entering the following command:
$ oc create secret tls sample-hosted-kas-custom-cert \
  --cert=path/to/cert.crt \
  --key=path/to/key.key \
  -n <hosted_cluster_namespace>
Update your HostedCluster configuration with the custom certificate details, as shown in the following example:
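The original example is not included in this copy. The following is a minimal sketch that assumes the OpenShift APIServer configuration schema, the secret name created in the previous step, and a hypothetical custom hostname:
apiVersion: hypershift.openshift.io/v1beta1
kind: HostedCluster
metadata:
  name: <hosted_cluster_name>
  namespace: <hosted_cluster_namespace>
spec:
  configuration:
    apiServer:
      servingCerts:
        namedCertificates:
        - names:
          - api-custom-cert.<basedomain>
          servingCertificate:
            name: sample-hosted-kas-custom-cert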
Apply the changes to your HostedCluster configuration by entering the following command:
$ oc apply -f <hosted_cluster_config>.yaml
Verification
- Check the API server pods to ensure that the new certificate is mounted.
- Test the connection to the API server by using the custom domain name.
- Verify the certificate details in your browser or by using tools such as openssl.
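For example, the following openssl commands are one way to inspect the served certificate; the hostname and port are assumptions based on the default external API server endpoint:
$ openssl s_client -connect api.<hosted_cluster_name>.<basedomain>:6443 </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -dates -ext subjectAltName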
4.4. Deploying hosted control planes on non-bare-metal agent machines
You can deploy hosted control planes by configuring a cluster to function as a hosting cluster. The hosting cluster is an OpenShift Container Platform cluster where the control planes are hosted. The hosting cluster is also known as the management cluster.
Hosted control planes on non-bare-metal agent machines is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The management cluster is not the same thing as the managed cluster. A managed cluster is a cluster that the hub cluster manages.
The hosted control planes feature is enabled by default.
The multicluster engine Operator supports only the default local-cluster
managed hub cluster. On Red Hat Advanced Cluster Management (RHACM) 2.10, you can use the local-cluster
managed hub cluster as the hosting cluster.
A hosted cluster is an OpenShift Container Platform cluster with its API endpoint and control plane that are hosted on the hosting cluster. The hosted cluster includes the control plane and its corresponding data plane. You can use the multicluster engine Operator console or the hcp
command-line interface (CLI) to create a hosted cluster.
The hosted cluster is automatically imported as a managed cluster. If you want to disable this automatic import feature, see "Disabling the automatic import of hosted clusters into multicluster engine Operator".
4.4.1. Preparing to deploy hosted control planes on non-bare-metal agent machines
As you prepare to deploy hosted control planes on non-bare-metal agent machines, consider the following information:
- You can add agent machines as worker nodes to a hosted cluster by using the Agent platform. An agent machine represents a host that is booted with a Discovery Image and is ready to be provisioned as an OpenShift Container Platform node. The Agent platform is part of the central infrastructure management service. For more information, see Enabling the central infrastructure management service.
- All hosts that are not bare metal require a manual boot with a Discovery Image ISO that the central infrastructure management provides.
- When you scale up the node pool, a machine is created for every replica. For every machine, the Cluster API provider finds and installs an Agent that is approved, is passing validations, is not currently in use, and meets the requirements that are specified in the node pool specification. You can monitor the installation of an Agent by checking its status and conditions.
- When you scale down a node pool, Agents are unbound from the corresponding cluster. Before you can reuse the Agents, you must restart them by using the Discovery image.
- When you configure storage for hosted control planes, consider the recommended etcd practices. To ensure that you meet the latency requirements, dedicate a fast storage device to all hosted control planes etcd instances that run on each control-plane node. You can use LVM storage to configure a local storage class for hosted etcd pods. For more information, see "Recommended etcd practices" and "Persistent storage using logical volume manager storage" in the OpenShift Container Platform documentation.
4.4.1.1. Prerequisites for deploying hosted control planes on non-bare-metal agent machines
Before you deploy hosted control planes on non-bare-metal agent machines, ensure you meet the following prerequisites:
- You must have multicluster engine for Kubernetes Operator 2.5 or later installed on an OpenShift Container Platform cluster. You can install the multicluster engine Operator as an Operator from the OpenShift Container Platform OperatorHub.
You must have at least one managed OpenShift Container Platform cluster for the multicluster engine Operator. The
local-cluster
management cluster is automatically imported. For more information about the local-cluster, see Advanced configuration in the Red Hat Advanced Cluster Management documentation. You can check the status of your management cluster by running the following command:
$ oc get managedclusters local-cluster
- You have enabled central infrastructure management. For more information, see Enabling the central infrastructure management service in the Red Hat Advanced Cluster Management documentation.
- You have installed the hcp command-line interface.
- Your hosted cluster has a cluster-wide unique name.
- You are running the management cluster and workers on the same infrastructure.
4.4.1.2. Firewall, port, and service requirements for non-bare-metal agent machines
You must meet the firewall and port requirements so that ports can communicate between the management cluster, the control plane, and hosted clusters.
Services run on their default ports. However, if you use the NodePort
publishing strategy, services run on the port that is assigned by the NodePort
service.
Use firewall rules, security groups, or other access controls to restrict access to only required sources. Avoid exposing ports publicly unless necessary. For production deployments, use a load balancer to simplify access through a single IP address.
A hosted control plane exposes the following services on non-bare-metal agent machines:
APIServer
- The APIServer service runs on port 6443 by default and requires ingress access for communication between the control plane components.
- If you use MetalLB load balancing, allow ingress access to the IP range that is used for load balancer IP addresses.
OAuthServer
- The OAuthServer service runs on port 443 by default when you use the route and ingress to expose the service.
- If you use the NodePort publishing strategy, use a firewall rule for the OAuthServer service.
Konnectivity
- The Konnectivity service runs on port 443 by default when you use the route and ingress to expose the service.
- The Konnectivity agent establishes a reverse tunnel to allow the control plane to access the network for the hosted cluster. The agent uses egress to connect to the Konnectivity server. The server is exposed by using either a route on port 443 or a manually assigned NodePort.
- If the cluster API server address is an internal IP address, allow access from the workload subnets to the IP address on port 6443.
- If the address is an external IP address, allow egress on port 6443 to that external IP address from the nodes.
Ignition
- The Ignition service runs on port 443 by default when you use the route and ingress to expose the service.
- If you use the NodePort publishing strategy, use a firewall rule for the Ignition service.
You do not need the following services on non-bare-metal agent machines:
- OVNSbDb
- OIDC
4.4.1.3. Infrastructure requirements for non-bare-metal agent machines
The Agent platform does not create any infrastructure, but it has the following infrastructure requirements:
- Agents: An Agent represents a host that is booted with a discovery image and is ready to be provisioned as an OpenShift Container Platform node.
- DNS: The API and ingress endpoints must be routable.
4.4.2. Configuring DNS on non-bare-metal agent machines
The API Server for the hosted cluster is exposed as a NodePort
service. A DNS entry must exist for api.<hosted_cluster_name>.<basedomain>
that points to the destination where the API server can be reached.
The DNS entry can be as simple as a record that points to one of the nodes in the managed cluster that is running the hosted control plane. The entry can also point to a load balancer that is deployed to redirect incoming traffic to the ingress pods.
If you are configuring DNS for a connected environment on an IPv4 network, see the following example DNS configuration:
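The original example is not included in this copy. The following is a minimal BIND-style sketch that assumes the example.krnl.es domain used elsewhere in this chapter and placeholder IPv4 addresses:
api.example.krnl.es.        IN A 192.168.122.20
api-int.example.krnl.es.    IN A 192.168.122.20
*.apps.example.krnl.es.     IN A 192.168.122.23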
If you are configuring DNS for a disconnected environment on an IPv6 network, see the following example DNS configuration:
If you are configuring DNS for a disconnected environment on a dual stack network, be sure to include DNS entries for both IPv4 and IPv6. See the following example DNS configuration:
4.4.3. Creating a hosted cluster on non-bare-metal agent machines by using the CLI
When you create a hosted cluster with the Agent platform, the HyperShift Operator installs the Agent Cluster API provider in the hosted control plane namespace. You can create a hosted cluster on non-bare-metal agent machines or import one.
As you create a hosted cluster, review the following guidelines:
- Each hosted cluster must have a cluster-wide unique name. A hosted cluster name cannot be the same as any existing managed cluster in order for multicluster engine Operator to manage it.
- Do not use clusters as a hosted cluster name.
- A hosted cluster cannot be created in the namespace of a multicluster engine Operator managed cluster.
Procedure
Create the hosted control plane namespace by entering the following command:
$ oc create ns <hosted_cluster_namespace>-<hosted_cluster_name>
1. Replace <hosted_cluster_namespace> with your hosted cluster namespace name, for example, clusters. Replace <hosted_cluster_name> with your hosted cluster name.
Create a hosted cluster by entering the following command:
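The original command block is not included in this copy. The following is a sketch that assumes the standard hcp create cluster agent flags; the flags correspond, in order, to callouts 1 through 11 that follow:
$ hcp create cluster agent \
  --name=<hosted_cluster_name> \
  --pull-secret=<path_to_pull_secret> \
  --agent-namespace=<hosted_control_plane_namespace> \
  --base-domain=<basedomain> \
  --api-server-address=api.<hosted_cluster_name>.<basedomain> \
  --etcd-storage-class=<etcd_storage_class> \
  --ssh-key <path_to_ssh_public_key> \
  --namespace <hosted_cluster_namespace> \
  --control-plane-availability-policy HighlyAvailable \
  --release-image=quay.io/openshift-release-dev/ocp-release:<ocp_release_image> \
  --node-pool-replicas <node_pool_replica_count>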
1. Specify the name of your hosted cluster, for instance, example.
2. Specify the path to your pull secret, for example, /user/name/pullsecret.
3. Specify your hosted control plane namespace, for example, clusters-example. Ensure that agents are available in this namespace by using the oc get agent -n <hosted-control-plane-namespace> command.
4. Specify your base domain, for example, krnl.es.
5. The --api-server-address flag defines the IP address that is used for the Kubernetes API communication in the hosted cluster. If you do not set the --api-server-address flag, you must log in to connect to the management cluster.
6. Verify that you have a default storage class configured for your cluster. Otherwise, you might end up with pending PVCs. Specify the etcd storage class name, for example, lvm-storageclass.
7. Specify the path to your SSH public key. The default file path is ~/.ssh/id_rsa.pub.
8. Specify your hosted cluster namespace.
9. Specify the availability policy for the hosted control plane components. Supported options are SingleReplica and HighlyAvailable. The default value is HighlyAvailable.
10. Specify the supported OpenShift Container Platform version that you want to use, for example, 4.17.0-multi.
11. Specify the node pool replica count, for example, 3. You must specify the replica count as 0 or greater to create the same number of replicas. Otherwise, no node pools are created.
Verification
After a few moments, verify that your hosted control plane pods are up and running by entering the following command:
$ oc -n <hosted_cluster_namespace>-<hosted_cluster_name> get pods
Example output
NAME                                     READY   STATUS    RESTARTS   AGE
catalog-operator-6cd867cc7-phb2q         2/2     Running   0          2m50s
control-plane-operator-f6b4c8465-4k5dh   1/1     Running   0          4m32s
4.4.3.1. Creating a hosted cluster on non-bare-metal agent machines by using the web console
You can create a hosted cluster on non-bare-metal agent machines by using the OpenShift Container Platform web console.
Prerequisites
- You have access to the cluster with cluster-admin privileges.
- You have access to the OpenShift Container Platform web console.
Procedure
- Open the OpenShift Container Platform web console and log in by entering your administrator credentials.
- In the console header, select All Clusters.
- Click Infrastructure → Clusters. Click Create cluster → Host inventory → Hosted control plane. The Create cluster page is displayed.
- On the Create cluster page, follow the prompts to enter details about the cluster, node pools, networking, and automation.
As you enter details about the cluster, you might find the following tips useful:
- If you want to use predefined values to automatically populate fields in the console, you can create a host inventory credential. For more information, see Creating a credential for an on-premises environment.
- On the Cluster details page, the pull secret is your OpenShift Container Platform pull secret that you use to access OpenShift Container Platform resources. If you selected a host inventory credential, the pull secret is automatically populated.
- On the Node pools page, the namespace contains the hosts for the node pool. If you created a host inventory by using the console, the console creates a dedicated namespace.
On the Networking page, you select an API server publishing strategy. The API server for the hosted cluster can be exposed either by using an existing load balancer or as a service of the
NodePort
type. A DNS entry must exist for the api.<hosted_cluster_name>.<basedomain> setting that points to the destination where the API server can be reached. This entry can be a record that points to one of the nodes in the management cluster or a record that points to a load balancer that redirects incoming traffic to the Ingress pods.
- Review your entries and click Create.
The Hosted cluster view is displayed.
- Monitor the deployment of the hosted cluster in the Hosted cluster view. If you do not see information about the hosted cluster, ensure that All Clusters is selected, and click the cluster name. Wait until the control plane components are ready. This process can take a few minutes.
- To view the node pool status, scroll to the NodePool section. The process to install the nodes takes about 10 minutes. You can also click Nodes to confirm whether the nodes joined the hosted cluster.
Next steps
- To access the web console, see Accessing the web console.
4.4.3.2. Creating a hosted cluster on bare metal by using a mirror registry
You can use a mirror registry to create a hosted cluster on bare metal by specifying the --image-content-sources
flag in the hcp create cluster
command.
Procedure
Create a YAML file to define Image Content Source Policies (ICSP). See the following example:
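The original example is not included in this copy. One possible shape, assuming the plain list format that the --image-content-sources flag accepts and placeholder mirror registry host names:
- mirrors:
  - registry.<mirror_registry_domain>:5000/openshift/release
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
- mirrors:
  - registry.<mirror_registry_domain>:5000/openshift/release-images
  source: quay.io/openshift-release-dev/ocp-release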
- Save the file as icsp.yaml. This file contains your mirror registries.
To create a hosted cluster by using your mirror registries, run the following command:
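The original command block is not included in this copy. The following sketch assumes the standard hcp create cluster agent flags; the flags correspond, in order, to callouts 1 through 9 that follow:
$ hcp create cluster agent \
  --name=<hosted_cluster_name> \
  --pull-secret=<path_to_pull_secret> \
  --agent-namespace=<hosted_control_plane_namespace> \
  --base-domain=<basedomain> \
  --api-server-address=api.<hosted_cluster_name>.<basedomain> \
  --image-content-sources icsp.yaml \
  --ssh-key <path_to_ssh_public_key> \
  --namespace <hosted_cluster_namespace> \
  --release-image=quay.io/openshift-release-dev/ocp-release:<ocp_release_image>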
1. Specify the name of your hosted cluster, for instance, example.
2. Specify the path to your pull secret, for example, /user/name/pullsecret.
3. Specify your hosted control plane namespace, for example, clusters-example. Ensure that agents are available in this namespace by using the oc get agent -n <hosted-control-plane-namespace> command.
4. Specify your base domain, for example, krnl.es.
5. The --api-server-address flag defines the IP address that is used for the Kubernetes API communication in the hosted cluster. If you do not set the --api-server-address flag, you must log in to connect to the management cluster.
6. Specify the icsp.yaml file that defines ICSP and your mirror registries.
7. Specify the path to your SSH public key. The default file path is ~/.ssh/id_rsa.pub.
8. Specify your hosted cluster namespace.
9. Specify the supported OpenShift Container Platform version that you want to use, for example, 4.17.0-multi. If you are using a disconnected environment, replace <ocp_release_image> with the digest image. To extract the OpenShift Container Platform release image digest, see "Extracting the OpenShift Container Platform release image digest".
Next steps
- To create credentials that you can reuse when you create a hosted cluster with the console, see Creating a credential for an on-premises environment.
- To access a hosted cluster, see Accessing the hosted cluster.
- To add hosts to the host inventory by using the Discovery Image, see Adding hosts to the host inventory by using the Discovery Image.
- To extract the OpenShift Container Platform release image digest, see Extracting the OpenShift Container Platform release image digest.
4.4.4. Verifying hosted cluster creation on non-bare-metal agent machines
After the deployment process is complete, you can verify that the hosted cluster was created successfully. Follow these steps a few minutes after you create the hosted cluster.
Procedure
Obtain the kubeconfig file for your new hosted cluster by entering the following command:
$ oc extract -n <hosted_cluster_namespace> \
  secret/<hosted_cluster_name>-admin-kubeconfig --to=- \
  > kubeconfig-<hosted_cluster_name>
Use the kubeconfig file to view the cluster Operators of the hosted cluster. Enter the following command:
$ oc get co --kubeconfig=kubeconfig-<hosted_cluster_name>
Example output
NAME                      VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
console                   4.10.26   True        False         False      2m38s
csi-snapshot-controller   4.10.26   True        False         False      4m3s
dns                       4.10.26   True        False         False      2m52s
View the running pods on your hosted cluster by entering the following command:
$ oc get pods -A --kubeconfig=kubeconfig-<hosted_cluster_name>
Example output
NAMESPACE                            NAME                                        READY   STATUS    RESTARTS   AGE
kube-system                          konnectivity-agent-khlqv                    0/1     Running   0          3m52s
openshift-cluster-samples-operator   cluster-samples-operator-6b5bcb9dff-kpnbc   2/2     Running   0          20m
openshift-monitoring                 alertmanager-main-0                         6/6     Running   0          100s
openshift-monitoring                 openshift-state-metrics-677b9fb74f-qqp6g    3/3     Running   0          104s
4.4.5. Configuring a custom API server certificate in a hosted cluster
To configure a custom certificate for the API server, specify the certificate details in the spec.configuration.apiServer
section of your HostedCluster
configuration.
You can configure a custom certificate during either day-1 or day-2 operations. However, because the service publishing strategy is immutable after you set it during hosted cluster creation, you must know what the hostname is for the Kubernetes API server that you plan to configure.
Prerequisites
You created a Kubernetes secret that contains your custom certificate in the management cluster. The secret contains the following keys:
- tls.crt: The certificate
- tls.key: The private key
- If your HostedCluster configuration includes a service publishing strategy that uses a load balancer, ensure that the Subject Alternative Names (SANs) of the certificate do not conflict with the internal API endpoint (api-int). The internal API endpoint is automatically created and managed by your platform. If you use the same hostname in both the custom certificate and the internal API endpoint, routing conflicts can occur. The only exception to this rule is when you use AWS as the provider with either Private or PublicAndPrivate configurations. In those cases, the SAN conflict is managed by the platform.
- The certificate must be valid for the external API endpoint.
- The validity period of the certificate aligns with your cluster’s expected life cycle.
Procedure
Create a secret with your custom certificate by entering the following command:
$ oc create secret tls sample-hosted-kas-custom-cert \
  --cert=path/to/cert.crt \
  --key=path/to/key.key \
  -n <hosted_cluster_namespace>
Update your HostedCluster configuration with the custom certificate details, as shown in the following example:
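The original example is not included in this copy; the following sketch makes the same assumptions as the example in "Configuring a custom API server certificate in a hosted cluster" earlier in this chapter, including a hypothetical custom hostname:
apiVersion: hypershift.openshift.io/v1beta1
kind: HostedCluster
metadata:
  name: <hosted_cluster_name>
  namespace: <hosted_cluster_namespace>
spec:
  configuration:
    apiServer:
      servingCerts:
        namedCertificates:
        - names:
          - api-custom-cert.<basedomain>
          servingCertificate:
            name: sample-hosted-kas-custom-cert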
Apply the changes to your HostedCluster configuration by entering the following command:
$ oc apply -f <hosted_cluster_config>.yaml
Verification
- Check the API server pods to ensure that the new certificate is mounted.
- Test the connection to the API server by using the custom domain name.
- Verify the certificate details in your browser or by using tools such as openssl.
4.5. Deploying hosted control planes on IBM Z
You can deploy hosted control planes by configuring a cluster to function as a management cluster. The management cluster is the OpenShift Container Platform cluster where the control planes are hosted. The management cluster is also known as the hosting cluster.
The management cluster is not the managed cluster. A managed cluster is a cluster that the hub cluster manages.
You can convert a managed cluster to a management cluster by using the hypershift
add-on to deploy the HyperShift Operator on that cluster. Then, you can start to create the hosted cluster.
The multicluster engine Operator supports only the default local-cluster
, which is a hub cluster that is managed, and the hub cluster as the management cluster.
To provision hosted control planes on bare metal, you can use the Agent platform. The Agent platform uses the central infrastructure management service to add worker nodes to a hosted cluster. For more information, see "Enabling the central infrastructure management service".
Each IBM Z system host must be started with the PXE images provided by the central infrastructure management. After each host starts, it runs an Agent process to discover the details of the host and completes the installation. An Agent custom resource represents each host.
When you create a hosted cluster with the Agent platform, HyperShift Operator installs the Agent Cluster API provider in the hosted control plane namespace.
4.5.1. Prerequisites to configure hosted control planes on IBM Z
- The multicluster engine for Kubernetes Operator version 2.5 or later must be installed on an OpenShift Container Platform cluster. You can install multicluster engine Operator as an Operator from the OpenShift Container Platform OperatorHub.
The multicluster engine Operator must have at least one managed OpenShift Container Platform cluster. The
local-cluster
is automatically imported in multicluster engine Operator 2.5 and later. For more information about the local-cluster, see Advanced configuration in the Red Hat Advanced Cluster Management documentation. You can check the status of your hub cluster by running the following command:
$ oc get managedclusters local-cluster
- You need a hosting cluster with at least three worker nodes to run the HyperShift Operator.
- You need to enable the central infrastructure management service. For more information, see Enabling the central infrastructure management service.
- You need to install the hosted control plane command-line interface. For more information, see Installing the hosted control plane command-line interface.
4.5.2. IBM Z infrastructure requirements
The Agent platform does not create any infrastructure, but requires the following resources for infrastructure:
- Agents: An Agent represents a host that is booted with a discovery image or PXE image and is ready to be provisioned as an OpenShift Container Platform node.
- DNS: The API and Ingress endpoints must be routable.
The hosted control planes feature is enabled by default. If you disabled the feature and want to manually enable it, or if you need to disable the feature, see Enabling or disabling the hosted control planes feature.
4.5.3. DNS configuration for hosted control planes on IBM Z
The API server for the hosted cluster is exposed as a NodePort
service. A DNS entry must exist for the api.<hosted_cluster_name>.<base_domain>
that points to the destination where the API server is reachable.
The DNS entry can be as simple as a record that points to one of the nodes in the managed cluster that is running the hosted control plane.
The entry can also point to a load balancer deployed to redirect incoming traffic to the Ingress pods.
See the following example of a DNS configuration:
$ cat /var/named/<example.krnl.es.zone>
Example output
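The original output is not included in this copy. The following is a minimal BIND-style sketch that reuses the placeholder addresses shown below; the api record corresponds to callout 1:
$TTL 900
@ IN  SOA bastion.example.krnl.es. hostmaster.example.krnl.es. (
      2019062002
      1D 1H 1W 3H )
  IN NS bastion.example.krnl.es.
;
api       IN A 1xx.2x.2xx.1yy
api-int   IN A 1xx.2x.2xx.1yy
;
*.apps    IN A 1xx.2x.2xx.1yy
;
;EOF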
1. The record refers to the IP address of the API load balancer that handles ingress and egress traffic for hosted control planes.
For IBM z/VM, add IP addresses that correspond to the IP address of the agent.
compute-0 IN A 1xx.2x.2xx.1yy
compute-1 IN A 1xx.2x.2xx.1yy
4.5.4. Creating a hosted cluster on bare metal
You can create a hosted cluster or import one. When the Assisted Installer is enabled as an add-on to multicluster engine Operator and you create a hosted cluster with the Agent platform, the HyperShift Operator installs the Agent Cluster API provider in the hosted control plane namespace.
4.5.4.1. Creating a hosted cluster by using the CLI
On bare-metal infrastructure, you can create or import a hosted cluster. After you enable the Assisted Installer as an add-on to multicluster engine Operator and you create a hosted cluster with the Agent platform, the HyperShift Operator installs the Agent Cluster API provider in the hosted control plane namespace. The Agent Cluster API provider connects a management cluster that hosts the control plane and a hosted cluster that consists of only the compute nodes.
Prerequisites
- Each hosted cluster must have a cluster-wide unique name. A hosted cluster name cannot be the same as any existing managed cluster. Otherwise, the multicluster engine Operator cannot manage the hosted cluster.
- Do not use the word clusters as a hosted cluster name.
- You cannot create a hosted cluster in the namespace of a multicluster engine Operator managed cluster.
- For best security and management practices, create a hosted cluster separate from other hosted clusters.
- Verify that you have a default storage class configured for your cluster. Otherwise, you might see pending persistent volume claims (PVCs).
- By default, when you use the hcp create cluster agent command, the command creates a hosted cluster with configured node ports. The preferred publishing strategy for hosted clusters on bare metal exposes services through a load balancer. If you create a hosted cluster by using the web console or by using Red Hat Advanced Cluster Management, to set a publishing strategy for a service besides the Kubernetes API server, you must manually specify the servicePublishingStrategy information in the HostedCluster custom resource.
- Ensure that you meet the requirements described in "Requirements for hosted control planes on bare metal", which include requirements related to infrastructure, firewalls, ports, and services. For example, those requirements describe how to add the appropriate zone labels to the bare-metal hosts in your management cluster, as shown in the following example commands:
$ oc label node [compute-node-1] topology.kubernetes.io/zone=zone1
$ oc label node [compute-node-2] topology.kubernetes.io/zone=zone2
$ oc label node [compute-node-3] topology.kubernetes.io/zone=zone3
- Ensure that you have added bare-metal nodes to a hardware inventory.
Procedure
Create a namespace by entering the following command:
$ oc create ns <hosted_cluster_namespace>
Replace <hosted_cluster_namespace> with an identifier for your hosted cluster namespace. The HyperShift Operator creates the namespace. During the hosted cluster creation process on bare-metal infrastructure, a generated Cluster API provider role requires that the namespace already exists.
Create the configuration file for your hosted cluster by entering the following command:
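The original command block is not included in this copy. The following sketch assumes the standard hcp create cluster agent flags plus --render to write the manifests to a file instead of applying them; the flags correspond, in order, to callouts 1 through 12 that follow:
$ hcp create cluster agent \
  --name=<hosted_cluster_name> \
  --pull-secret=<path_to_pull_secret> \
  --agent-namespace=<hosted_control_plane_namespace> \
  --base-domain=<basedomain> \
  --api-server-address=api.<hosted_cluster_name>.<basedomain> \
  --etcd-storage-class=<etcd_storage_class> \
  --ssh-key <path_to_ssh_public_key> \
  --namespace <hosted_cluster_namespace> \
  --control-plane-availability-policy HighlyAvailable \
  --release-image=quay.io/openshift-release-dev/ocp-release:<ocp_release_image> \
  --node-pool-replicas <node_pool_replica_count> \
  --render > hosted_cluster_config.yaml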
1. Specify the name of your hosted cluster, such as example.
2. Specify the path to your pull secret, such as /user/name/pullsecret.
3. Specify your hosted control plane namespace, such as clusters-example. Ensure that agents are available in this namespace by using the oc get agent -n <hosted_control_plane_namespace> command.
4. Specify your base domain, such as krnl.es.
5. The --api-server-address flag defines the IP address that is used for the Kubernetes API communication in the hosted cluster. If you do not set the --api-server-address flag, you must log in to connect to the management cluster.
6. Specify the etcd storage class name, such as lvm-storageclass.
7. Specify the path to your SSH public key. The default file path is ~/.ssh/id_rsa.pub.
8. Specify your hosted cluster namespace.
9. Specify the availability policy for the hosted control plane components. Supported options are SingleReplica and HighlyAvailable. The default value is HighlyAvailable.
10. Specify the supported OpenShift Container Platform version that you want to use, such as 4.19.0-multi. If you are using a disconnected environment, replace <ocp_release_image> with the digest image. To extract the OpenShift Container Platform release image digest, see Extracting the OpenShift Container Platform release image digest.
11. Specify the node pool replica count, such as 3. You must specify the replica count as 0 or greater to create the same number of replicas. Otherwise, you do not create node pools.
12. After the --ssh-key flag, specify the path to the SSH key, such as user/.ssh/id_rsa.
Configure the service publishing strategy. By default, hosted clusters use the NodePort service publishing strategy because node ports are always available without additional infrastructure. However, you can configure the service publishing strategy to use a load balancer.
- If you are using the default NodePort strategy, configure the DNS to point to the hosted cluster compute nodes, not the management cluster nodes. For more information, see "DNS configurations on bare metal".
- For production environments, use the LoadBalancer strategy because this strategy provides certificate handling and automatic DNS resolution. The following example demonstrates changing the service publishing strategy to LoadBalancer in your hosted cluster configuration file:
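The original example is not included in this copy. A minimal sketch of the relevant spec.services section, assuming the HostedCluster service publishing schema:
spec:
  services:
  - service: APIServer
    servicePublishingStrategy:
      type: LoadBalancer
  - service: Ignition
    servicePublishingStrategy:
      type: Route
  - service: Konnectivity
    servicePublishingStrategy:
      type: Route
  - service: OAuthServer
    servicePublishingStrategy:
      type: Route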
1. Specify LoadBalancer as the API Server type. For all other services, specify Route as the type.
Apply the changes to the hosted cluster configuration file by entering the following command:
$ oc apply -f hosted_cluster_config.yaml
Check for the creation of the hosted cluster, node pools, and pods by entering the following commands:
$ oc get hostedcluster <hosted_cluster_name> \
  -n <hosted_cluster_namespace> \
  -o jsonpath='{.status.conditions[?(@.status=="False")]}' | jq .
$ oc get nodepool <nodepool_name> \
  -n <hosted_cluster_namespace> \
  -o jsonpath='{.status.conditions[?(@.status=="False")]}' | jq .
$ oc get pods -n <hosted_cluster_namespace>
- Confirm that the hosted cluster is ready. The status of Available: True indicates the readiness of the cluster, and the node pool status shows AllMachinesReady: True. These statuses indicate the healthiness of all cluster Operators.
Install MetalLB in the hosted cluster:
Extract the kubeconfig file from the hosted cluster and set the environment variable for hosted cluster access by entering the following commands:
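The first command is not included in this copy. A minimal sketch, assuming the admin kubeconfig secret name used in the verification steps earlier in this chapter and the file path from the export command that follows:
$ oc extract -n <hosted_cluster_namespace> \
  secret/<hosted_cluster_name>-admin-kubeconfig --to=- \
  > /path/to/kubeconfig-<hosted_cluster_namespace>.yaml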
$ export KUBECONFIG="/path/to/kubeconfig-<hosted_cluster_namespace>.yaml"
Install the MetalLB Operator by creating the install-metallb-operator.yaml file:
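The original file content is not included in this copy. One possible shape, assuming that the MetalLB Operator is installed from the redhat-operators catalog through Operator Lifecycle Manager:
apiVersion: v1
kind: Namespace
metadata:
  name: metallb-system
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: metallb-operator
  namespace: metallb-system
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: metallb-operator
  namespace: metallb-system
spec:
  channel: stable
  name: metallb-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace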
Apply the file by entering the following command:
$ oc apply -f install-metallb-operator.yaml
Configure the MetalLB IP address pool by creating the deploy-metallb-ipaddresspool.yaml file:
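The original file content is not included in this copy. One possible shape, assuming an address range that contains the external IP address shown in the verification output later in this procedure:
apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb-system
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: metallb
  namespace: metallb-system
spec:
  addresses:
  - 10.11.176.71-10.11.176.75
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2advertisement
  namespace: metallb-system
spec:
  ipAddressPools:
  - metallb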
Apply the configuration by entering the following command:
$ oc apply -f deploy-metallb-ipaddresspool.yaml
Verify the installation of MetalLB by checking the Operator status, the IP address pool, and the L2Advertisement resource by entering the following commands:
$ oc get pods -n metallb-system
$ oc get ipaddresspool -n metallb-system
$ oc get l2advertisement -n metallb-system
Configure the load balancer for ingress:
Create the ingress-loadbalancer.yaml file:
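The original file content is not included in this copy. One possible shape, assuming a MetalLB-backed LoadBalancer service in front of the default ingress router; the address-pool annotation key is an assumption:
kind: Service
apiVersion: v1
metadata:
  name: metallb-ingress
  namespace: openshift-ingress
  annotations:
    metallb.io/address-pool: metallb
spec:
  type: LoadBalancer
  selector:
    ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  - name: https
    protocol: TCP
    port: 443
    targetPort: 443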
Apply the configuration by entering the following command:
$ oc apply -f ingress-loadbalancer.yaml
Verify that the load balancer service works as expected by entering the following command:
$ oc get svc metallb-ingress -n openshift-ingress
Example output
NAME              TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                      AGE
metallb-ingress   LoadBalancer   172.31.127.129   10.11.176.71   80:30961/TCP,443:32090/TCP   16h
Configure the DNS to work with the load balancer:
- Configure the DNS for the apps domain by pointing the *.apps.<hosted_cluster_namespace>.<base_domain> wildcard DNS record to the load balancer IP address.
Verify the DNS resolution by entering the following command:
$ nslookup console-openshift-console.apps.<hosted_cluster_namespace>.<base_domain> <load_balancer_ip_address>
Example output
Server:   10.11.176.1
Address:  10.11.176.1#53

Name:     console-openshift-console.apps.my-hosted-cluster.sample-base-domain.com
Address:  10.11.176.71
Verification
Check the cluster Operators by entering the following command:
$ oc get clusteroperators
Ensure that all Operators show AVAILABLE: True, PROGRESSING: False, and DEGRADED: False.
Check the nodes by entering the following command:
$ oc get nodes
Ensure that each node has the READY status.
Test access to the console by entering the following URL in a web browser:
https://console-openshift-console.apps.<hosted_cluster_namespace>.<base_domain>
4.5.5. Creating an InfraEnv resource for hosted control planes on IBM Z
An InfraEnv
is an environment where hosts that are booted with PXE images can join as agents. In this case, the agents are created in the same namespace as your hosted control plane.
Procedure
Create a YAML file to contain the configuration. See the following example:
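The original example is not included in this copy. A minimal sketch, assuming the agent-install.openshift.io/v1beta1 schema and a pull secret named pull-secret in the same namespace:
apiVersion: agent-install.openshift.io/v1beta1
kind: InfraEnv
metadata:
  name: <hosted_cluster_name>
  namespace: <hosted_control_plane_namespace>
spec:
  cpuArchitecture: s390x
  pullSecretRef:
    name: pull-secret
  sshAuthorizedKey: <ssh_public_key>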
- Save the file as infraenv-config.yaml.
Apply the configuration by entering the following command:
$ oc apply -f infraenv-config.yaml
To fetch the URL to download the PXE images, such as initrd.img, kernel.img, or rootfs.img, which allow IBM Z machines to join as agents, enter the following command:
$ oc -n <hosted_control_plane_namespace> get InfraEnv <hosted_cluster_name> -o json
4.5.6. Adding IBM Z agents to the InfraEnv resource
To attach compute nodes to a hosted control plane, create agents that help you to scale the node pool. Adding agents in an IBM Z environment requires additional steps, which are described in detail in this section.
Unless stated otherwise, these procedures apply to both z/VM and RHEL KVM installations on IBM Z and IBM LinuxONE.
4.5.6.1. Adding IBM Z KVM as agents
For IBM Z with KVM, run the following command to start your IBM Z environment with the downloaded PXE images from the InfraEnv
resource. After the Agents are created, the host communicates with the Assisted Service and registers in the same namespace as the InfraEnv
resource on the management cluster.
Procedure
Run the following command:
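The original command is not included in this copy. The following virt-install sketch assumes a libvirt host, placeholder network and disk values, and the kernel, initrd, and rootfs artifacts downloaded from the InfraEnv resource:
$ virt-install \
  --name "<vm_name>" \
  --autostart \
  --memory=16384 \
  --cpu host \
  --vcpus=4 \
  --location "<path_to_kernel_initrd_image>,kernel=kernel.img,initrd=initrd.img" \
  --disk <qcow_image_path> \
  --network network:<virtual_network>,mac=<mac_address> \
  --graphics none \
  --noautoconsole \
  --wait=-1 \
  --extra-args "rd.neednet=1 nameserver=<nameserver> coreos.live.rootfs_url=http://<http_server>/rootfs.img ignition.firstboot ignition.platform.id=metal console=tty1 console=ttyS1,115200n8"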
For ISO boot, download the ISO from the InfraEnv resource and boot the nodes by running the following command:
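A sketch of the ISO variant under the same assumptions, booting from the downloaded discovery ISO instead of the kernel and initrd pair:
$ virt-install \
  --name "<vm_name>" \
  --autostart \
  --memory=16384 \
  --cpu host \
  --vcpus=4 \
  --cdrom "<path_to_discovery_iso>.iso" \
  --disk <qcow_image_path> \
  --network network:<virtual_network>,mac=<mac_address> \
  --graphics none \
  --noautoconsole \
  --wait=-1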
4.5.6.2. Adding IBM Z LPAR as agents
You can add the Logical Partition (LPAR) on IBM Z or IBM LinuxONE as a compute node to a hosted control plane.
Procedure
Create a boot parameter file for the agents:
Example parameter file
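The original parameter file is not included in this copy. The following sketch lists the kernel parameters that the callouts below describe; the device IDs and addresses are placeholders:
rd.neednet=1 cio_ignore=all,!condev \
console=ttysclp0 \
ignition.firstboot ignition.platform.id=metal \
coreos.live.rootfs_url=http://<http_server>/rootfs.img \
ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \
rd.znet=qeth,<device_bus_ids>,layer2=1 \
rd.dasd=<dasd_device_id> \
random.trust_cpu=on rd.luks.options=discard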
1. For the coreos.live.rootfs_url artifact, specify the matching rootfs artifact for the kernel and initramfs that you are starting. Only HTTP and HTTPS protocols are supported.
2. For the ip parameter, manually assign the IP address, as described in Installing a cluster with z/VM on IBM Z and IBM LinuxONE.
3. For installations on DASD-type disks, use rd.dasd to specify the DASD where Red Hat Enterprise Linux CoreOS (RHCOS) is to be installed. For installations on FCP-type disks, use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is to be installed.
4. Specify this parameter when you use an Open Systems Adapter (OSA) or HiperSockets.
Download the .ins and initrd.img.addrsize files from the InfraEnv resource.
By default, the URL for the .ins and initrd.img.addrsize files is not available in the InfraEnv resource. You must edit the URL to fetch those artifacts.
Update the kernel URL endpoint to include ins-file by running the following command:
$ curl -k -L -o generic.ins "< url for ins-file >"
Example URL
https://…/boot-artifacts/ins-file?arch=s390x&version=4.17.0
Update the initrd URL endpoint to include s390x-initrd-addrsize:
Example URL
https://…./s390x-initrd-addrsize?api_key=<api-key>&arch=s390x&version=4.17.0
- Transfer the initrd, kernel, generic.ins, and initrd.img.addrsize parameter files to the file server. For more information about how to transfer the files with FTP and boot, see "Installing in an LPAR".
- Start the machine.
- Repeat the procedure for all other machines in the cluster.
4.5.6.3. Adding IBM z/VM as agents
If you want to use a static IP for z/VM guest, you must configure the NMStateConfig
attribute for the z/VM agent so that the IP parameter persists in the second start.
Complete the following steps to start your IBM Z environment with the downloaded PXE images from the InfraEnv
resource. After the Agents are created, the host communicates with the Assisted Service and registers in the same namespace as the InfraEnv
resource on the management cluster.
Procedure
Update the parameter file to add the rootfs_url, network_adaptor, and disk_type values.
Example parameter file
1. For the coreos.live.rootfs_url artifact, specify the matching rootfs artifact for the kernel and initramfs that you are starting. Only HTTP and HTTPS protocols are supported.
2. For the ip parameter, manually assign the IP address, as described in Installing a cluster with z/VM on IBM Z and IBM LinuxONE.
3. For installations on DASD-type disks, use rd.dasd to specify the DASD where Red Hat Enterprise Linux CoreOS (RHCOS) is to be installed. For installations on FCP-type disks, use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is to be installed.
4. Specify this parameter when you use an Open Systems Adapter (OSA) or HiperSockets.
Move the initrd, kernel images, and the parameter file to the guest VM by running the following commands:
vmur pun -r -u -N kernel.img $INSTALLERKERNELLOCATION/<image name>
vmur pun -r -u -N generic.parm $PARMFILELOCATION/paramfilename
vmur pun -r -u -N initrd.img $INSTALLERINITRAMFSLOCATION/<image name>
Run the following command from the guest VM console:
cp ipl c
To list the agents and their properties, enter the following command:
$ oc -n <hosted_control_plane_namespace> get agents
Example output
NAME                                   CLUSTER   APPROVED   ROLE          STAGE
50c23cda-cedc-9bbd-bcf1-9b3a5c75804d                        auto-assign
5e498cd3-542c-e54f-0c58-ed43e28b568a                        auto-assign
Run the following command to approve the agent.
$ oc -n <hosted_control_plane_namespace> patch agent \
  50c23cda-cedc-9bbd-bcf1-9b3a5c75804d -p \
  '{"spec":{"installation_disk_id":"/dev/sda","approved":true,"hostname":"worker-zvm-0.hostedn.example.com"}}' \ 1
  --type merge
1. Optionally, you can set the agent ID <installation_disk_id> and <hostname> in the specification.
Run the following command to verify that the agents are approved:
$ oc -n <hosted_control_plane_namespace> get agents
Example output
NAME                                   CLUSTER   APPROVED   ROLE          STAGE
50c23cda-cedc-9bbd-bcf1-9b3a5c75804d             true       auto-assign
5e498cd3-542c-e54f-0c58-ed43e28b568a             true       auto-assign
4.5.7. Scaling the NodePool object for a hosted cluster on IBM Z
The NodePool
object is created when you create a hosted cluster. By scaling the NodePool
object, you can add more compute nodes to the hosted control plane.
When you scale up a node pool, a machine is created. The Cluster API provider finds an Agent that is approved, is passing validations, is not currently in use, and meets the requirements that are specified in the node pool specification. You can monitor the installation of an Agent by checking its status and conditions.
When you scale down a node pool, Agents are unbound from the corresponding cluster. Before you can reuse the Agents, you must boot them by using the PXE image to update the number of nodes.
Procedure
Run the following command to scale the NodePool object to two nodes:
$ oc -n <clusters_namespace> scale nodepool <nodepool_name> --replicas 2
The Cluster API agent provider randomly picks two agents that are then assigned to the hosted cluster. Those agents go through different states and finally join the hosted cluster as OpenShift Container Platform nodes. The agents pass through the transition phases in the following order:
- binding
- discovering
- insufficient
- installing
- installing-in-progress
- added-to-existing-cluster
Run the following command to see the status of a specific scaled agent:
$ oc -n <hosted_control_plane_namespace> get agent -o \
  jsonpath='{range .items[*]}BMH: {@.metadata.labels.agent-install\.openshift\.io/bmh} Agent: {@.metadata.name} State: {@.status.debugInfo.state}{"\n"}{end}'
Example output
BMH:   Agent: 50c23cda-cedc-9bbd-bcf1-9b3a5c75804d   State: known-unbound
BMH:   Agent: 5e498cd3-542c-e54f-0c58-ed43e28b568a   State: insufficient
Run the following command to see the transition phases:
$ oc -n <hosted_control_plane_namespace> get agent
Example output
NAME                                   CLUSTER            APPROVED   ROLE          STAGE
50c23cda-cedc-9bbd-bcf1-9b3a5c75804d   hosted-forwarder   true       auto-assign
5e498cd3-542c-e54f-0c58-ed43e28b568a                      true       auto-assign
da503cf1-a347-44f2-875c-4960ddb04091   hosted-forwarder   true       auto-assign
Run the following command to generate the kubeconfig file to access the hosted cluster:
$ hcp create kubeconfig \
  --namespace <clusters_namespace> \
  --name <hosted_cluster_name> > <hosted_cluster_name>.kubeconfig
After the agents reach the added-to-existing-cluster state, verify that you can see the OpenShift Container Platform nodes by entering the following command:
$ oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes
Example output
NAME                               STATUS   ROLES    AGE     VERSION
worker-zvm-0.hostedn.example.com   Ready    worker   5m41s   v1.24.0+3882f8f
worker-zvm-1.hostedn.example.com   Ready    worker   6m3s    v1.24.0+3882f8f
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Cluster Operators start to reconcile by adding workloads to the nodes.
Enter the following command to verify that two machines were created when you scaled up the NodePool object:
$ oc -n <hosted_control_plane_namespace> get machine.cluster.x-k8s.io
Example output
NAME                                CLUSTER                  NODENAME                           PROVIDERID                                     PHASE     AGE   VERSION
hosted-forwarder-79558597ff-5tbqp   hosted-forwarder-crqq5   worker-zvm-0.hostedn.example.com   agent://50c23cda-cedc-9bbd-bcf1-9b3a5c75804d   Running   41h   4.15.0
hosted-forwarder-79558597ff-lfjfk   hosted-forwarder-crqq5   worker-zvm-1.hostedn.example.com   agent://5e498cd3-542c-e54f-0c58-ed43e28b568a   Running   41h   4.15.0
Run the following command to check the cluster version:
$ oc --kubeconfig <hosted_cluster_name>.kubeconfig get clusterversion,co
Example output
NAME                                         VERSION       AVAILABLE   PROGRESSING   SINCE   STATUS
clusterversion.config.openshift.io/version   4.15.0-ec.2   True        False         40h     Cluster version is 4.15.0-ec.2
Run the following command to check the cluster Operator status:
$ oc --kubeconfig <hosted_cluster_name>.kubeconfig get clusteroperators
For each component of your cluster, the output shows the following cluster Operator statuses: NAME, VERSION, AVAILABLE, PROGRESSING, DEGRADED, SINCE, and MESSAGE.
For an output example, see Initial Operator configuration.
4.6. Deploying hosted control planes on IBM Power
You can deploy hosted control planes by configuring a cluster to function as a hosting cluster. This configuration provides an efficient and scalable solution for managing many clusters. The hosting cluster is an OpenShift Container Platform cluster that hosts control planes. The hosting cluster is also known as the management cluster.
The management cluster is not the managed cluster. A managed cluster is a cluster that the hub cluster manages.
The multicluster engine Operator supports only the default local-cluster, which is a managed hub cluster, and the hub cluster as the hosting cluster.
To provision hosted control planes on bare-metal infrastructure, you can use the Agent platform. The Agent platform uses the central infrastructure management service to add compute nodes to a hosted cluster. For more information, see "Enabling the central infrastructure management service".
You must start each IBM Power host with a Discovery image that the central infrastructure management provides. After each host starts, it runs an Agent process to discover the details of the host and completes the installation. An Agent custom resource represents each host.
When you create a hosted cluster with the Agent platform, HyperShift installs the Agent Cluster API provider in the hosted control plane namespace.
4.6.1. Prerequisites to configure hosted control planes on IBM Power
- The multicluster engine for Kubernetes Operator version 2.7 and later installed on an OpenShift Container Platform cluster. The multicluster engine Operator is automatically installed when you install Red Hat Advanced Cluster Management (RHACM). You can also install the multicluster engine Operator without RHACM as an Operator from the OpenShift Container Platform OperatorHub.
- The multicluster engine Operator must have at least one managed OpenShift Container Platform cluster. The local-cluster managed hub cluster is automatically imported in the multicluster engine Operator version 2.7 and later. For more information about local-cluster, see Advanced configuration in the RHACM documentation. You can check the status of your hub cluster by running the following command:
$ oc get managedclusters local-cluster
- You need a hosting cluster with at least 3 compute nodes to run the HyperShift Operator.
- You need to enable the central infrastructure management service. For more information, see "Enabling the central infrastructure management service".
- You need to install the hosted control planes command-line interface. For more information, see "Installing the hosted control plane command-line interface".
The hosted control planes feature is enabled by default. If you disabled the feature and want to manually enable the feature, see "Manually enabling the hosted control planes feature". If you need to disable the feature, see "Disabling the hosted control planes feature".
4.6.2. IBM Power infrastructure requirements
The Agent platform does not create any infrastructure, but it requires the following infrastructure resources:
- Agents: An Agent represents a host that boots with a Discovery image and that you can provision as an OpenShift Container Platform node.
- DNS: The API and Ingress endpoints must be routable.
4.6.3. DNS configuration for hosted control planes on IBM Power
Clients outside the network can access the API server for the hosted cluster. A DNS entry must exist for api.<hosted_cluster_name>.<basedomain> that points to the destination where the API server is reachable.
The DNS entry can be as simple as a record that points to one of the nodes in the managed cluster that runs the hosted control plane.
The entry can also point to a deployed load balancer to redirect incoming traffic to the ingress pods.
See the following example of a DNS configuration:
$ cat /var/named/<example.krnl.es.zone>
Example output
- 1
- The record refers to the IP address of the API load balancer that handles ingress and egress traffic for hosted control planes.
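The following is a minimal BIND-style sketch of such a zone, assuming a single load balancer address serves the API and ingress endpoints; the names and addresses are placeholders, not values from this procedure:
$TTL 900
@ IN SOA bastion.example.krnl.es. hostmaster.example.krnl.es. (
        2024010101 1D 1H 1W 3H )
  IN NS bastion.example.krnl.es.

api      IN A 1xx.2x.2xx.1xx    ; API load balancer for the hosted control plane
api-int  IN A 1xx.2x.2xx.1xx
*.apps   IN A 1xx.2x.2xx.1xx    ; wildcard record for ingress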
For IBM Power, add IP addresses that correspond to the IP addresses of the agents.
Example configuration
compute-0 IN A 1xx.2x.2xx.1yy
compute-1 IN A 1xx.2x.2xx.1yy
4.6.4. Creating a hosted cluster by using the CLI
On bare-metal infrastructure, you can create or import a hosted cluster. After you enable the Assisted Installer as an add-on to multicluster engine Operator and you create a hosted cluster with the Agent platform, the HyperShift Operator installs the Agent Cluster API provider in the hosted control plane namespace. The Agent Cluster API provider connects a management cluster that hosts the control plane and a hosted cluster that consists of only the compute nodes.
Prerequisites
- Each hosted cluster must have a cluster-wide unique name. A hosted cluster name cannot be the same as any existing managed cluster. Otherwise, the multicluster engine Operator cannot manage the hosted cluster.
- Do not use the word clusters as a hosted cluster name.
- You cannot create a hosted cluster in the namespace of a multicluster engine Operator managed cluster.
- For best security and management practices, create a hosted cluster separate from other hosted clusters.
- Verify that you have a default storage class configured for your cluster. Otherwise, you might see pending persistent volume claims (PVCs).
- By default, when you use the hcp create cluster agent command, the command creates a hosted cluster with configured node ports. The preferred publishing strategy for hosted clusters on bare metal exposes services through a load balancer. If you create a hosted cluster by using the web console or by using Red Hat Advanced Cluster Management, to set a publishing strategy for a service besides the Kubernetes API server, you must manually specify the servicePublishingStrategy information in the HostedCluster custom resource.
- Ensure that you meet the requirements described in "Requirements for hosted control planes on bare metal", which include requirements related to infrastructure, firewalls, ports, and services. For example, those requirements describe how to add the appropriate zone labels to the bare-metal hosts in your management cluster, as shown in the following example commands:
$ oc label node [compute-node-1] topology.kubernetes.io/zone=zone1
$ oc label node [compute-node-2] topology.kubernetes.io/zone=zone2
$ oc label node [compute-node-3] topology.kubernetes.io/zone=zone3
- Ensure that you have added bare-metal nodes to a hardware inventory. One way to confirm this is shown after this list.
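As an optional sanity check that is not part of the documented prerequisites, you can list the discovered Agent resources to confirm that your bare-metal hosts appear in the inventory:
$ oc get agent -n <hosted_control_plane_namespace>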
Procedure
Create a namespace by entering the following command:
$ oc create ns <hosted_cluster_namespace>
Replace <hosted_cluster_namespace> with an identifier for your hosted cluster namespace. Typically, the HyperShift Operator creates the namespace. However, during the hosted cluster creation process on bare-metal infrastructure, a generated Cluster API provider role requires that the namespace already exists.
Create the configuration file for your hosted cluster by entering the following command:
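As an illustration only, the command can take a form similar to the following sketch. The flag names, their order (which roughly maps to callouts 1 through 12 below), and the use of --render to write the output to hosted_cluster_config.yaml are assumptions; verify them against hcp create cluster agent --help:
$ hcp create cluster agent \
    --name=<hosted_cluster_name> \
    --pull-secret=<path_to_pull_secret> \
    --agent-namespace=<hosted_control_plane_namespace> \
    --base-domain=<basedomain> \
    --api-server-address=api.<hosted_cluster_name>.<basedomain> \
    --etcd-storage-class=<etcd_storage_class> \
    --ssh-key=<path_to_ssh_public_key> \
    --namespace=<hosted_cluster_namespace> \
    --control-plane-availability-policy=HighlyAvailable \
    --release-image=quay.io/openshift-release-dev/ocp-release:<ocp_release_image> \
    --node-pool-replicas=<node_pool_replica_count> \
    --render > hosted_cluster_config.yaml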
1. Specify the name of your hosted cluster, such as example.
2. Specify the path to your pull secret, such as /user/name/pullsecret.
3. Specify your hosted control plane namespace, such as clusters-example. Ensure that agents are available in this namespace by using the oc get agent -n <hosted_control_plane_namespace> command.
4. Specify your base domain, such as krnl.es.
5. The --api-server-address flag defines the IP address that is used for Kubernetes API communication in the hosted cluster. If you do not set the --api-server-address flag, you must log in to connect to the management cluster.
6. Specify the etcd storage class name, such as lvm-storageclass.
7. Specify the path to your SSH public key. The default file path is ~/.ssh/id_rsa.pub.
8. Specify your hosted cluster namespace.
9. Specify the availability policy for the hosted control plane components. Supported options are SingleReplica and HighlyAvailable. The default value is HighlyAvailable.
10. Specify the supported OpenShift Container Platform version that you want to use, such as 4.19.0-multi. If you are using a disconnected environment, replace <ocp_release_image> with the digest image. To extract the OpenShift Container Platform release image digest, see Extracting the OpenShift Container Platform release image digest.
11. Specify the node pool replica count, such as 3. You must specify the replica count as 0 or greater to create the same number of replicas. Otherwise, no node pools are created.
12. After the --ssh-key flag, specify the path to the SSH key, such as user/.ssh/id_rsa.
Configure the service publishing strategy. By default, hosted clusters use the NodePort service publishing strategy because node ports are always available without additional infrastructure. However, you can configure the service publishing strategy to use a load balancer.
- If you are using the default NodePort strategy, configure the DNS to point to the hosted cluster compute nodes, not the management cluster nodes. For more information, see "DNS configurations on bare metal".
- For production environments, use the LoadBalancer strategy because it provides certificate handling and automatic DNS resolution. The following example demonstrates changing the service publishing strategy to LoadBalancer in your hosted cluster configuration file. Specify LoadBalancer as the API Server type. For all other services, specify Route as the type.
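The following sketch shows only the services section of a HostedCluster resource with this strategy; the rest of your generated configuration file is omitted, and the exact list of services can differ by version, so treat it as an illustration of the servicePublishingStrategy layout rather than a complete resource:
apiVersion: hypershift.openshift.io/v1beta1
kind: HostedCluster
metadata:
  name: <hosted_cluster_name>
  namespace: <hosted_cluster_namespace>
spec:
  services:
  - service: APIServer
    servicePublishingStrategy:
      type: LoadBalancer        # API server exposed through a load balancer
  - service: OAuthServer
    servicePublishingStrategy:
      type: Route               # all other services use Route
  - service: Konnectivity
    servicePublishingStrategy:
      type: Route
  - service: Ignition
    servicePublishingStrategy:
      type: Route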
Apply the changes to the hosted cluster configuration file by entering the following command:
$ oc apply -f hosted_cluster_config.yaml
Check for the creation of the hosted cluster, node pools, and pods by entering the following commands:
$ oc get hostedcluster <hosted_cluster_namespace> \
    -n <hosted_cluster_namespace> \
    -o jsonpath='{.status.conditions[?(@.status=="False")]}' | jq .
$ oc get nodepool <hosted_cluster_namespace> \
    -n <hosted_cluster_namespace> \
    -o jsonpath='{.status.conditions[?(@.status=="False")]}' | jq .
$ oc get pods -n <hosted_cluster_namespace>
- Confirm that the hosted cluster is ready. A status of Available: True indicates that the cluster is ready, and a node pool status of AllMachinesReady: True indicates that all machines are ready. These statuses also indicate the health of all cluster Operators.
Install MetalLB in the hosted cluster:
Extract the kubeconfig file from the hosted cluster and set the environment variable for hosted cluster access by entering the following commands:
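A minimal sketch of generating the kubeconfig file at the path that the export command expects; the exact arguments and the output path are assumptions to adapt to your environment:
$ hcp create kubeconfig \
    --namespace <clusters_namespace> \
    --name <hosted_cluster_name> > /path/to/kubeconfig-<hosted_cluster_namespace>.yaml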
$ export KUBECONFIG="/path/to/kubeconfig-<hosted_cluster_namespace>.yaml"
Install the MetalLB Operator by creating the install-metallb-operator.yaml file:
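A sketch of what install-metallb-operator.yaml can contain, assuming installation from the redhat-operators catalog into the metallb-system namespace; the channel and catalog source are assumptions to adjust for your environment:
apiVersion: v1
kind: Namespace
metadata:
  name: metallb-system
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: metallb-operator
  namespace: metallb-system
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: metallb-operator
  namespace: metallb-system
spec:
  channel: stable
  name: metallb-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
After the Operator installation completes, a MetalLB custom resource (metallb.io/v1beta1, kind MetalLB) in the metallb-system namespace is typically required before the controller and speaker pods appear.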
Apply the file by entering the following command:
$ oc apply -f install-metallb-operator.yaml
Configure the MetalLB IP address pool by creating the deploy-metallb-ipaddresspool.yaml file:
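A sketch of what deploy-metallb-ipaddresspool.yaml can contain; the pool name and the address range are assumptions and must be replaced with addresses that are routable in your network:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: metallb
  namespace: metallb-system
spec:
  addresses:
  - 10.11.176.71-10.11.176.75    # replace with a routable range in your network
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2advertisement
  namespace: metallb-system
spec:
  ipAddressPools:
  - metallb                      # advertise the pool defined above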
Apply the configuration by entering the following command:
$ oc apply -f deploy-metallb-ipaddresspool.yaml
Verify the installation of MetalLB by checking the Operator status, the IP address pool, and the L2Advertisement resource by entering the following commands:
$ oc get pods -n metallb-system
$ oc get ipaddresspool -n metallb-system
$ oc get l2advertisement -n metallb-system
Configure the load balancer for ingress:
Create the ingress-loadbalancer.yaml file:
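A sketch of what ingress-loadbalancer.yaml can contain, assuming the default ingress controller router pods run in openshift-ingress and the MetalLB address pool from the previous step is named metallb; verify the selector label and annotation against your cluster before applying:
kind: Service
apiVersion: v1
metadata:
  name: metallb-ingress
  namespace: openshift-ingress
  annotations:
    metallb.universe.tf/address-pool: metallb
spec:
  type: LoadBalancer
  selector:
    ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  - name: https
    protocol: TCP
    port: 443
    targetPort: 443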
Apply the configuration by entering the following command:
$ oc apply -f ingress-loadbalancer.yaml
Verify that the load balancer service works as expected by entering the following command:
$ oc get svc metallb-ingress -n openshift-ingress
Example output
NAME              TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                      AGE
metallb-ingress   LoadBalancer   172.31.127.129   10.11.176.71   80:30961/TCP,443:32090/TCP   16h
Configure the DNS to work with the load balancer:
- Configure the DNS for the apps domain by pointing the *.apps.<hosted_cluster_namespace>.<base_domain> wildcard DNS record to the load balancer IP address.
Verify the DNS resolution by entering the following command:
$ nslookup console-openshift-console.apps.<hosted_cluster_namespace>.<base_domain> <load_balancer_ip_address>
Example output
Server:   10.11.176.1
Address:  10.11.176.1#53

Name:     console-openshift-console.apps.my-hosted-cluster.sample-base-domain.com
Address:  10.11.176.71
Verification
Check the cluster Operators by entering the following command:
$ oc get clusteroperators
Ensure that all Operators show AVAILABLE: True, PROGRESSING: False, and DEGRADED: False.
Check the nodes by entering the following command:
$ oc get nodes
Ensure that each node has the READY status.
Test access to the console by entering the following URL in a web browser:
https://console-openshift-console.apps.<hosted_cluster_namespace>.<base_domain>