Chapter 5. Managing hosted control planes


5.1. Managing hosted control planes on AWS

When you use hosted control planes for OpenShift Container Platform on Amazon Web Services (AWS), the infrastructure requirements vary based on your setup.

5.1.1. Prerequisites to manage AWS infrastructure and IAM permissions

To configure hosted control planes for OpenShift Container Platform on Amazon Web Services (AWS), you must meet the following infrastructure requirements:

  • You configured hosted control planes before you create hosted clusters.
  • You created an AWS Identity and Access Management (IAM) role and AWS Security Token Service (STS) credentials.

5.1.1.1. Infrastructure requirements for AWS

When you use hosted control planes on Amazon Web Services (AWS), the infrastructure requirements fit in the following categories:

  • Prerequired and unmanaged infrastructure for the HyperShift Operator in an arbitrary AWS account
  • Prerequired and unmanaged infrastructure in a hosted cluster AWS account
  • Hosted control planes-managed infrastructure in a management AWS account
  • Hosted control planes-managed infrastructure in a hosted cluster AWS account
  • Kubernetes-managed infrastructure in a hosted cluster AWS account

Prerequired means that hosted control planes requires AWS infrastructure to work properly. Unmanaged means that no Operator or controller creates the infrastructure for you.

5.1.1.2. Unmanaged infrastructure for the HyperShift Operator in an AWS account

An arbitrary Amazon Web Services (AWS) account depends on the provider of the hosted control planes service.

In self-managed hosted control planes, the cluster service provider controls the AWS account. The cluster service provider is the administrator who hosts cluster control planes and is responsible for uptime. In managed hosted control planes, the AWS account belongs to Red Hat.

In a prerequired and unmanaged infrastructure for the HyperShift Operator, the following infrastructure requirements apply for a management cluster AWS account:

  • One S3 Bucket

    • To host OpenID Connect (OIDC) documents
  • Route 53 hosted zones

    • A domain to host private and public entries for hosted clusters

5.1.1.3. Unmanaged infrastructure requirements for a hosted cluster AWS account

When your infrastructure is prerequired and unmanaged in a hosted cluster Amazon Web Services (AWS) account, the infrastructure requirements for all access modes are as follows:

  • One VPC
  • One DHCP options set
  • Two subnets

    • A private subnet that is an internal data plane subnet
    • A public subnet that enables access to the internet from the data plane
  • One internet gateway
  • One elastic IP
  • One NAT gateway
  • One security group (worker nodes)
  • Two route tables (one private and one public)
  • Two Route 53 hosted zones
  • Enough quota for the following items:

    • One Ingress service load balancer for public hosted clusters
    • One private link endpoint for private hosted clusters
Note

For private link networking to work, the endpoint zone in the hosted cluster AWS account must match the zone of the instance that is resolved by the service endpoint in the management cluster AWS account. In AWS, the zone names are aliases, such as us-east-2b, which do not necessarily map to the same zone in different accounts. As a result, for private link to work, the management cluster must have subnets or workers in all zones of its region.
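
As a quick check in the management cluster AWS account, you can list the availability zones that the management VPC already covers. This is a minimal sketch that assumes the AWS CLI is configured for that account; replace <management_vpc_id> with your VPC ID:

$ aws ec2 describe-subnets \
    --filters Name=vpc-id,Values=<management_vpc_id> \
    --query 'Subnets[].AvailabilityZone' \
    --output text | tr '\t' '\n' | sort -u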

5.1.1.4. Infrastructure requirements for a management AWS account

When your infrastructure is managed by hosted control planes in a management AWS account, the infrastructure requirements differ depending on whether your clusters are public, private, or a combination.

For accounts with public clusters, the infrastructure requirements are as follows:

  • Network load balancer: a load balancer for the Kube API server

    • Kubernetes creates a security group
  • Volumes

    • For etcd (one or three depending on high availability)
    • For OVN-Kube

For accounts with private clusters, the infrastructure requirements are as follows:

  • Network load balancer: a load balancer for the private router
  • Endpoint service (private link)

For accounts with public and private clusters, the infrastructure requirements are as follows:

  • Network load balancer: a load balancer for the public router
  • Network load balancer: a load balancer for the private router
  • Endpoint service (private link)
  • Volumes

    • For etcd (one or three depending on high availability)
    • For OVN-Kube

5.1.1.5. Infrastructure requirements for an AWS account in a hosted cluster

When your infrastructure is managed by hosted control planes in a hosted cluster Amazon Web Services (AWS) account, the infrastructure requirements differ depending on whether your clusters are public, private, or a combination.

For accounts with public clusters, the infrastructure requirements are as follows:

  • Node pools must have EC2 instances that have Role and RolePolicy defined.

For accounts with private clusters, the infrastructure requirements are as follows:

  • One private link endpoint for each availability zone
  • EC2 instances for node pools

For accounts with public and private clusters, the infrastructure requirements are as follows:

  • One private link endpoint for each availability zone
  • EC2 instances for node pools

5.1.1.6. Kubernetes-managed infrastructure in a hosted cluster AWS account

When Kubernetes manages your infrastructure in a hosted cluster Amazon Web Services (AWS) account, the infrastructure requirements are as follows:

  • A network load balancer for default Ingress
  • An S3 bucket for registry

5.1.2. Identity and Access Management (IAM) permissions

In the context of hosted control planes, the consumer is responsible for creating the Amazon Resource Name (ARN) roles. The consumer is an automated process that generates the permissions files, and it might be the CLI or OpenShift Cluster Manager. Hosted control planes can enable granularity to honor the principle of least privilege, which means that every component uses its own role to operate or create Amazon Web Services (AWS) objects, and the roles are limited to what is required for the product to function normally.

The hosted cluster receives the ARN roles as input, and the consumer creates an AWS permission configuration for each component. As a result, the component can authenticate through AWS Security Token Service (STS) and a preconfigured OIDC identity provider (IDP).
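
As an illustration only, the following sketch shows the kind of STS exchange that such a component performs, by using the AWS CLI and the ingressARN example role from this section. The token path is an assumption about where a projected service account token might be mounted, not a value defined by this documentation:

$ aws sts assume-role-with-web-identity \
    --role-arn arn:aws:iam::820196288204:role/example-cluster-bz4j5-openshift-ingress \
    --role-session-name ingress-operator-example \
    --web-identity-token file:///var/run/secrets/openshift/serviceaccount/token

The call returns temporary credentials that the component uses for its AWS API requests, scoped to the permissions of that role.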

The following roles are consumed by some of the components from hosted control planes that run on the control plane and operate on the data plane:

  • controlPlaneOperatorARN
  • imageRegistryARN
  • ingressARN
  • kubeCloudControllerARN
  • nodePoolManagementARN
  • storageARN
  • networkARN

The following example shows a reference to the IAM roles from the hosted cluster:

...
  endpointAccess: Public
  region: us-east-2
  resourceTags:
  - key: kubernetes.io/cluster/example-cluster-bz4j5
    value: owned
  rolesRef:
    controlPlaneOperatorARN: arn:aws:iam::820196288204:role/example-cluster-bz4j5-control-plane-operator
    imageRegistryARN: arn:aws:iam::820196288204:role/example-cluster-bz4j5-openshift-image-registry
    ingressARN: arn:aws:iam::820196288204:role/example-cluster-bz4j5-openshift-ingress
    kubeCloudControllerARN: arn:aws:iam::820196288204:role/example-cluster-bz4j5-cloud-controller
    networkARN: arn:aws:iam::820196288204:role/example-cluster-bz4j5-cloud-network-config-controller
    nodePoolManagementARN: arn:aws:iam::820196288204:role/example-cluster-bz4j5-node-pool
    storageARN: arn:aws:iam::820196288204:role/example-cluster-bz4j5-aws-ebs-csi-driver-controller
type: AWS
...

The roles that hosted control planes uses are shown in the following examples:

  • ingressARN

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "elasticloadbalancing:DescribeLoadBalancers",
                    "tag:GetResources",
                    "route53:ListHostedZones"
                ],
                "Resource": "\*"
            },
            {
                "Effect": "Allow",
                "Action": [
                    "route53:ChangeResourceRecordSets"
                ],
                "Resource": [
                    "arn:aws:route53:::PUBLIC_ZONE_ID",
                    "arn:aws:route53:::PRIVATE_ZONE_ID"
                ]
            }
        ]
    }
  • imageRegistryARN

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "s3:CreateBucket",
                    "s3:DeleteBucket",
                    "s3:PutBucketTagging",
                    "s3:GetBucketTagging",
                    "s3:PutBucketPublicAccessBlock",
                    "s3:GetBucketPublicAccessBlock",
                    "s3:PutEncryptionConfiguration",
                    "s3:GetEncryptionConfiguration",
                    "s3:PutLifecycleConfiguration",
                    "s3:GetLifecycleConfiguration",
                    "s3:GetBucketLocation",
                    "s3:ListBucket",
                    "s3:GetObject",
                    "s3:PutObject",
                    "s3:DeleteObject",
                    "s3:ListBucketMultipartUploads",
                    "s3:AbortMultipartUpload",
                    "s3:ListMultipartUploadParts"
                ],
                "Resource": "\*"
            }
        ]
    }
  • storageARN

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "ec2:AttachVolume",
                    "ec2:CreateSnapshot",
                    "ec2:CreateTags",
                    "ec2:CreateVolume",
                    "ec2:DeleteSnapshot",
                    "ec2:DeleteTags",
                    "ec2:DeleteVolume",
                    "ec2:DescribeInstances",
                    "ec2:DescribeSnapshots",
                    "ec2:DescribeTags",
                    "ec2:DescribeVolumes",
                    "ec2:DescribeVolumesModifications",
                    "ec2:DetachVolume",
                    "ec2:ModifyVolume"
                ],
                "Resource": "\*"
            }
        ]
    }
  • networkARN

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "ec2:DescribeInstances",
                    "ec2:DescribeInstanceStatus",
                    "ec2:DescribeInstanceTypes",
                    "ec2:UnassignPrivateIpAddresses",
                    "ec2:AssignPrivateIpAddresses",
                    "ec2:UnassignIpv6Addresses",
                    "ec2:AssignIpv6Addresses",
                    "ec2:DescribeSubnets",
                    "ec2:DescribeNetworkInterfaces"
                ],
                "Resource": "\*"
            }
        ]
    }
  • kubeCloudControllerARN

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Action": [
                    "ec2:DescribeInstances",
                    "ec2:DescribeImages",
                    "ec2:DescribeRegions",
                    "ec2:DescribeRouteTables",
                    "ec2:DescribeSecurityGroups",
                    "ec2:DescribeSubnets",
                    "ec2:DescribeVolumes",
                    "ec2:CreateSecurityGroup",
                    "ec2:CreateTags",
                    "ec2:CreateVolume",
                    "ec2:ModifyInstanceAttribute",
                    "ec2:ModifyVolume",
                    "ec2:AttachVolume",
                    "ec2:AuthorizeSecurityGroupIngress",
                    "ec2:CreateRoute",
                    "ec2:DeleteRoute",
                    "ec2:DeleteSecurityGroup",
                    "ec2:DeleteVolume",
                    "ec2:DetachVolume",
                    "ec2:RevokeSecurityGroupIngress",
                    "ec2:DescribeVpcs",
                    "elasticloadbalancing:AddTags",
                    "elasticloadbalancing:AttachLoadBalancerToSubnets",
                    "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer",
                    "elasticloadbalancing:CreateLoadBalancer",
                    "elasticloadbalancing:CreateLoadBalancerPolicy",
                    "elasticloadbalancing:CreateLoadBalancerListeners",
                    "elasticloadbalancing:ConfigureHealthCheck",
                    "elasticloadbalancing:DeleteLoadBalancer",
                    "elasticloadbalancing:DeleteLoadBalancerListeners",
                    "elasticloadbalancing:DescribeLoadBalancers",
                    "elasticloadbalancing:DescribeLoadBalancerAttributes",
                    "elasticloadbalancing:DetachLoadBalancerFromSubnets",
                    "elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
                    "elasticloadbalancing:ModifyLoadBalancerAttributes",
                    "elasticloadbalancing:RegisterInstancesWithLoadBalancer",
                    "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer",
                    "elasticloadbalancing:AddTags",
                    "elasticloadbalancing:CreateListener",
                    "elasticloadbalancing:CreateTargetGroup",
                    "elasticloadbalancing:DeleteListener",
                    "elasticloadbalancing:DeleteTargetGroup",
                    "elasticloadbalancing:DescribeListeners",
                    "elasticloadbalancing:DescribeLoadBalancerPolicies",
                    "elasticloadbalancing:DescribeTargetGroups",
                    "elasticloadbalancing:DescribeTargetHealth",
                    "elasticloadbalancing:ModifyListener",
                    "elasticloadbalancing:ModifyTargetGroup",
                    "elasticloadbalancing:RegisterTargets",
                    "elasticloadbalancing:SetLoadBalancerPoliciesOfListener",
                    "iam:CreateServiceLinkedRole",
                    "kms:DescribeKey"
                ],
                "Resource": [
                    "\*"
                ],
                "Effect": "Allow"
            }
        ]
    }
  • nodePoolManagementARN

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Action": [
                    "ec2:AllocateAddress",
                    "ec2:AssociateRouteTable",
                    "ec2:AttachInternetGateway",
                    "ec2:AuthorizeSecurityGroupIngress",
                    "ec2:CreateInternetGateway",
                    "ec2:CreateNatGateway",
                    "ec2:CreateRoute",
                    "ec2:CreateRouteTable",
                    "ec2:CreateSecurityGroup",
                    "ec2:CreateSubnet",
                    "ec2:CreateTags",
                    "ec2:DeleteInternetGateway",
                    "ec2:DeleteNatGateway",
                    "ec2:DeleteRouteTable",
                    "ec2:DeleteSecurityGroup",
                    "ec2:DeleteSubnet",
                    "ec2:DeleteTags",
                    "ec2:DescribeAccountAttributes",
                    "ec2:DescribeAddresses",
                    "ec2:DescribeAvailabilityZones",
                    "ec2:DescribeImages",
                    "ec2:DescribeInstances",
                    "ec2:DescribeInternetGateways",
                    "ec2:DescribeNatGateways",
                    "ec2:DescribeNetworkInterfaces",
                    "ec2:DescribeNetworkInterfaceAttribute",
                    "ec2:DescribeRouteTables",
                    "ec2:DescribeSecurityGroups",
                    "ec2:DescribeSubnets",
                    "ec2:DescribeVpcs",
                    "ec2:DescribeVpcAttribute",
                    "ec2:DescribeVolumes",
                    "ec2:DetachInternetGateway",
                    "ec2:DisassociateRouteTable",
                    "ec2:DisassociateAddress",
                    "ec2:ModifyInstanceAttribute",
                    "ec2:ModifyNetworkInterfaceAttribute",
                    "ec2:ModifySubnetAttribute",
                    "ec2:ReleaseAddress",
                    "ec2:RevokeSecurityGroupIngress",
                    "ec2:RunInstances",
                    "ec2:TerminateInstances",
                    "tag:GetResources",
                    "ec2:CreateLaunchTemplate",
                    "ec2:CreateLaunchTemplateVersion",
                    "ec2:DescribeLaunchTemplates",
                    "ec2:DescribeLaunchTemplateVersions",
                    "ec2:DeleteLaunchTemplate",
                    "ec2:DeleteLaunchTemplateVersions"
                ],
                "Resource": [
                    "\*"
                ],
                "Effect": "Allow"
            },
            {
                "Condition": {
                    "StringLike": {
                        "iam:AWSServiceName": "elasticloadbalancing.amazonaws.com"
                    }
                },
                "Action": [
                    "iam:CreateServiceLinkedRole"
                ],
                "Resource": [
                    "arn:*:iam::*:role/aws-service-role/elasticloadbalancing.amazonaws.com/AWSServiceRoleForElasticLoadBalancing"
                ],
                "Effect": "Allow"
            },
            {
                "Action": [
                    "iam:PassRole"
                ],
                "Resource": [
                    "arn:*:iam::*:role/*-worker-role"
                ],
                "Effect": "Allow"
            }
        ]
    }
  • controlPlaneOperatorARN

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "ec2:CreateVpcEndpoint",
                    "ec2:DescribeVpcEndpoints",
                    "ec2:ModifyVpcEndpoint",
                    "ec2:DeleteVpcEndpoints",
                    "ec2:CreateTags",
                    "route53:ListHostedZones"
                ],
                "Resource": "\*"
            },
            {
                "Effect": "Allow",
                "Action": [
                    "route53:ChangeResourceRecordSets",
                    "route53:ListResourceRecordSets"
                ],
                "Resource": "arn:aws:route53:::%s"
            }
        ]
    }

5.1.3. Creating AWS infrastructure and IAM resources separately

By default, the hcp create cluster aws command creates cloud infrastructure with the hosted cluster and applies it. You can create the cloud infrastructure portion separately so that you can use the hcp create cluster aws command only to create the cluster, or render it to modify it before you apply it.

To create the cloud infrastructure portion separately, you need to create the Amazon Web Services (AWS) infrastructure, create the AWS Identity and Access Management (IAM) resources, and create the cluster.

5.1.3.1. Creating the AWS infrastructure separately

To create the Amazon Web Services (AWS) infrastructure, you need to create a Virtual Private Cloud (VPC) and other resources for your cluster. You can use the AWS console or an infrastructure automation and provisioning tool. For instructions to use the AWS console, see Create a VPC plus other VPC resources in the AWS Documentation.

The VPC must include private and public subnets and resources for external access, such as a network address translation (NAT) gateway and an internet gateway. In addition to the VPC, you need a private hosted zone for the ingress of your cluster. If you are creating clusters that use PrivateLink (Private or PublicAndPrivate access modes), you need an additional hosted zone for PrivateLink.
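
If you prefer the command line over the console, the following minimal sketch shows the kind of AWS CLI calls involved. The CIDR blocks, availability zone, and domain are placeholders, and the sketch omits route tables, security groups, and the PrivateLink hosted zone; align the final result with the requirements that are listed in "Infrastructure requirements for AWS":

# Create the VPC and capture its ID
VPC_ID=$(aws ec2 create-vpc --cidr-block 10.0.0.0/16 \
  --query 'Vpc.VpcId' --output text)

# Create one private and one public subnet in the same availability zone
PRIVATE_SUBNET=$(aws ec2 create-subnet --vpc-id "$VPC_ID" \
  --cidr-block 10.0.1.0/24 --availability-zone us-west-1b \
  --query 'Subnet.SubnetId' --output text)
PUBLIC_SUBNET=$(aws ec2 create-subnet --vpc-id "$VPC_ID" \
  --cidr-block 10.0.2.0/24 --availability-zone us-west-1b \
  --query 'Subnet.SubnetId' --output text)

# Tag the subnets so that load balancers can be placed on them
aws ec2 create-tags --resources "$PUBLIC_SUBNET" \
  --tags Key=kubernetes.io/role/elb,Value=1
aws ec2 create-tags --resources "$PRIVATE_SUBNET" \
  --tags Key=kubernetes.io/role/internal-elb,Value=1

# Internet gateway for the public subnet
IGW_ID=$(aws ec2 create-internet-gateway \
  --query 'InternetGateway.InternetGatewayId' --output text)
aws ec2 attach-internet-gateway --internet-gateway-id "$IGW_ID" --vpc-id "$VPC_ID"

# Elastic IP and NAT gateway for outbound access from the private subnet
EIP_ALLOC=$(aws ec2 allocate-address --domain vpc \
  --query 'AllocationId' --output text)
aws ec2 create-nat-gateway --subnet-id "$PUBLIC_SUBNET" --allocation-id "$EIP_ALLOC"

# Private hosted zone for the cluster ingress
aws route53 create-hosted-zone --name example.com \
  --caller-reference "$(date +%s)" \
  --vpc VPCRegion=us-west-1,VPCId="$VPC_ID"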

Create the AWS infrastructure for your hosted cluster by using the following example configuration:

---
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: null
  name: clusters
spec: {}
status: {}
---
apiVersion: v1
data:
  .dockerconfigjson: xxxxxxxxxxx
kind: Secret
metadata:
  creationTimestamp: null
  labels:
    hypershift.openshift.io/safe-to-delete-with-cluster: "true"
  name: <pull_secret_name> 1
  namespace: clusters
---
apiVersion: v1
data:
  key: xxxxxxxxxxxxxxxxx
kind: Secret
metadata:
  creationTimestamp: null
  labels:
    hypershift.openshift.io/safe-to-delete-with-cluster: "true"
  name: <etcd_encryption_key_name> 2
  namespace: clusters
type: Opaque
---
apiVersion: v1
data:
  id_rsa: xxxxxxxxx
  id_rsa.pub: xxxxxxxxx
kind: Secret
metadata:
  creationTimestamp: null
  labels:
    hypershift.openshift.io/safe-to-delete-with-cluster: "true"
  name: <ssh_key_name> 3
  namespace: clusters
---
apiVersion: hypershift.openshift.io/v1beta1
kind: HostedCluster
metadata:
  creationTimestamp: null
  name: <hosted_cluster_name> 4
  namespace: clusters
spec:
  autoscaling: {}
  configuration: {}
  controllerAvailabilityPolicy: SingleReplica
  dns:
    baseDomain: <dns_domain> 5
    privateZoneID: xxxxxxxx
    publicZoneID: xxxxxxxx
  etcd:
    managed:
      storage:
        persistentVolume:
          size: 8Gi
          storageClassName: gp3-csi
        type: PersistentVolume
    managementType: Managed
  fips: false
  infraID: <infra_id> 6
  issuerURL: <issuer_url> 7
  networking:
    clusterNetwork:
    - cidr: 10.132.0.0/14
    machineNetwork:
    - cidr: 10.0.0.0/16
    networkType: OVNKubernetes
    serviceNetwork:
    - cidr: 172.31.0.0/16
  olmCatalogPlacement: management
  platform:
    aws:
      cloudProviderConfig:
        subnet:
          id: <subnet_xxx> 8
        vpc: <vpc_xxx> 9
        zone: us-west-1b
      endpointAccess: Public
      multiArch: false
      region: us-west-1
      rolesRef:
        controlPlaneOperatorARN: arn:aws:iam::820196288204:role/<infra_id>-control-plane-operator
        imageRegistryARN: arn:aws:iam::820196288204:role/<infra_id>-openshift-image-registry
        ingressARN: arn:aws:iam::820196288204:role/<infra_id>-openshift-ingress
        kubeCloudControllerARN: arn:aws:iam::820196288204:role/<infra_id>-cloud-controller
        networkARN: arn:aws:iam::820196288204:role/<infra_id>-cloud-network-config-controller
        nodePoolManagementARN: arn:aws:iam::820196288204:role/<infra_id>-node-pool
        storageARN: arn:aws:iam::820196288204:role/<infra_id>-aws-ebs-csi-driver-controller
    type: AWS
  pullSecret:
    name: <pull_secret_name>
  release:
    image: quay.io/openshift-release-dev/ocp-release:4.16-x86_64
  secretEncryption:
    aescbc:
      activeKey:
        name: <etcd_encryption_key_name>
    type: aescbc
  services:
  - service: APIServer
    servicePublishingStrategy:
      type: LoadBalancer
  - service: OAuthServer
    servicePublishingStrategy:
      type: Route
  - service: Konnectivity
    servicePublishingStrategy:
      type: Route
  - service: Ignition
    servicePublishingStrategy:
      type: Route
  - service: OVNSbDb
    servicePublishingStrategy:
      type: Route
  sshKey:
    name: <ssh_key_name>
status:
  controlPlaneEndpoint:
    host: ""
    port: 0
---
apiVersion: hypershift.openshift.io/v1beta1
kind: NodePool
metadata:
  creationTimestamp: null
  name: <node_pool_name> 10
  namespace: clusters
spec:
  arch: amd64
  clusterName: <hosted_cluster_name>
  management:
    autoRepair: true
    upgradeType: Replace
  nodeDrainTimeout: 0s
  platform:
    aws:
      instanceProfile: <instance_profile_name> 11
      instanceType: m6i.xlarge
      rootVolume:
        size: 120
        type: gp3
      subnet:
        id: <subnet_xxx>
    type: AWS
  release:
    image: quay.io/openshift-release-dev/ocp-release:4.16-x86_64
  replicas: 2
status:
  replicas: 0
1
Replace <pull_secret_name> with the name of your pull secret.
2
Replace <etcd_encryption_key_name> with the name of your etcd encryption key.
3
Replace <ssh_key_name> with the name of your SSH key.
4
Replace <hosted_cluster_name> with the name of your hosted cluster.
5
Replace <dns_domain> with your base DNS domain, such as example.com.
6
Replace <infra_id> with the value that identifies the IAM resources that are associated with the hosted cluster.
7
Replace <issuer_url> with your issuer URL, which ends with your infra_id value. For example, https://example-hosted-us-west-1.s3.us-west-1.amazonaws.com/example-hosted-infra-id.
8
Replace <subnet_xxx> with your subnet ID. Both private and public subnets need to be tagged. For public subnets, use kubernetes.io/role/elb=1. For private subnets, use kubernetes.io/role/internal-elb=1.
9
Replace <vpc_xxx> with your VPC ID.
10
Replace <node_pool_name> with the name of your NodePool resource.
11
Replace <instance_profile_name> with the name of your AWS instance profile.

5.1.3.2. Creating the AWS IAM resources

In Amazon Web Services (AWS), you must create the IAM resources for your hosted cluster: the OpenID Connect (OIDC) identity provider that enables STS authentication, the component roles that are described in "Identity and Access Management (IAM) permissions", and the instance profile for your worker nodes.
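
As a hedged illustration of what creating these resources can look like with the AWS CLI, the following sketch registers the OIDC provider and creates one component role with an inline policy. The issuer URL, client ID, thumbprint, role name, and file names are placeholders, and the policy document corresponds to one of the JSON examples in "Identity and Access Management (IAM) permissions":

# Register the OIDC identity provider that issues the service account tokens
aws iam create-open-id-connect-provider \
  --url https://example-hosted-us-west-1.s3.us-west-1.amazonaws.com/example-hosted-infra-id \
  --client-id-list openshift \
  --thumbprint-list <oidc_thumbprint>

# Create one component role, for example the ingress role, with a trust policy
# that allows web identity federation through the OIDC provider
aws iam create-role \
  --role-name <infra_id>-openshift-ingress \
  --assume-role-policy-document file://ingress-trust-policy.json

# Attach the component permissions as an inline policy
aws iam put-role-policy \
  --role-name <infra_id>-openshift-ingress \
  --policy-name <infra_id>-openshift-ingress \
  --policy-document file://ingress-permissions.json

Repeat the role creation for each of the component roles, and create the instance profile that your node pools reference.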

5.1.3.3. Creating a hosted cluster separately

You can create a hosted cluster separately on Amazon Web Services (AWS).

To create a hosted cluster separately, enter the following command:

$ hcp create cluster aws \
    --infra-id <infra_id> \1
    --name <hosted_cluster_name> \2
    --sts-creds <path_to_sts_credential_file> \3
    --pull-secret <path_to_pull_secret> \4
    --generate-ssh \5
    --node-pool-replicas 3 \
    --role-arn <role_name> 6
1
Replace <infra_id> with the same ID that you specified in the create infra aws command. This value identifies the IAM resources that are associated with the hosted cluster.
2
Replace <hosted_cluster_name> with the name of your hosted cluster.
3
Replace <path_to_sts_credential_file> with the same name that you specified in the create infra aws command.
4
Replace <path_to_pull_secret> with the name of the file that contains a valid OpenShift Container Platform pull secret.
5
The --generate-ssh flag is optional, but is good to include in case you need to SSH to your workers. An SSH key is generated for you and is stored as a secret in the same namespace as the hosted cluster.
6
Replace <role_name> with the Amazon Resource Name (ARN), for example, arn:aws:iam::820196288204:role/myrole. For more information about ARN roles, see "Identity and Access Management (IAM) permissions".

You can also add the --render flag to the command and redirect output to a file where you can edit the resources before you apply them to the cluster.
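
For example, a render-then-apply flow might look like the following, where hosted-cluster-manifests.yaml is an arbitrary file name:

$ hcp create cluster aws \
    --infra-id <infra_id> \
    --name <hosted_cluster_name> \
    --sts-creds <path_to_sts_credential_file> \
    --pull-secret <path_to_pull_secret> \
    --node-pool-replicas 3 \
    --role-arn <role_name> \
    --render > hosted-cluster-manifests.yaml

$ oc apply -f hosted-cluster-manifests.yaml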

After you run the command, the following resources are applied to your cluster:

  • A namespace
  • A secret with your pull secret
  • A HostedCluster
  • A NodePool
  • Three AWS STS secrets for control plane components
  • One SSH key secret if you specified the --generate-ssh flag.
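
To confirm that the resources were applied, a quick check such as the following can be useful, assuming the default clusters namespace:

$ oc get hostedcluster,nodepool -n clusters
$ oc get secrets -n clusters | grep <hosted_cluster_name>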

5.2. Managing hosted control planes on bare metal

After you deploy hosted control planes on bare metal, you can manage a hosted cluster by completing the following tasks.

5.2.1. Accessing the hosted cluster

You can access the hosted cluster by either getting the kubeconfig file and kubeadmin credential directly from resources, or by using the hcp command line interface to generate a kubeconfig file.

Prerequisites

To access the hosted cluster by getting the kubeconfig file and credentials directly from resources, you must be familiar with the access secrets for hosted clusters. The hosted cluster (hosting) namespace contains hosted cluster resources and the access secrets. The hosted control plane namespace is where the hosted control plane runs.

The secret name formats are as follows:

  • kubeconfig secret: <hosted_cluster_namespace>-<name>-admin-kubeconfig. For example, clusters-hypershift-demo-admin-kubeconfig.
  • kubeadmin password secret: <hosted_cluster_namespace>-<name>-kubeadmin-password. For example, clusters-hypershift-demo-kubeadmin-password.

The kubeconfig secret contains a Base64-encoded kubeconfig field, which you can decode and save into a file to use with the following command:

$ oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes

The kubeadmin password secret is also Base64-encoded. You can decode it and use the password to log in to the API server or console of the hosted cluster.
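
For example, one way to save the decoded kubeconfig to a file is to reuse the oc extract pattern that appears later in this section. The kubeadmin password secret can be decoded in the same way; the secret names shown here are inferred from the formats above and might differ in your environment:

$ oc extract -n <hosted_cluster_namespace> secret/<hosted_cluster_name>-admin-kubeconfig --to=- > <hosted_cluster_name>.kubeconfig

$ oc extract -n <hosted_cluster_namespace> secret/<hosted_cluster_name>-kubeadmin-password --to=-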

Procedure

  • To access the hosted cluster by using the hcp CLI to generate the kubeconfig file, take the following steps:

    1. Generate the kubeconfig file by entering the following command:

      $ hcp create kubeconfig --namespace <hosted_cluster_namespace> --name <hosted_cluster_name> > <hosted_cluster_name>.kubeconfig
    2. After you save the kubeconfig file, you can access the hosted cluster by entering the following example command:

      $ oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes

5.2.2. Scaling the NodePool object for a hosted cluster

You can scale up the NodePool object by adding nodes to your hosted cluster. When you scale a node pool, consider the following information:

  • When you scale a replica by the node pool, a machine is created. For every machine, the Cluster API provider finds and installs an Agent that meets the requirements that are specified in the node pool specification. You can monitor the installation of an Agent by checking its status and conditions.
  • When you scale down a node pool, Agents are unbound from the corresponding cluster. Before you can reuse the Agents, you must restart them by using the Discovery image.

Procedure

  1. Scale the NodePool object to two nodes:

    $ oc -n <hosted_cluster_namespace> scale nodepool <nodepool_name> --replicas 2

    The Cluster API agent provider randomly picks two agents that are then assigned to the hosted cluster. Those agents go through different states and finally join the hosted cluster as OpenShift Container Platform nodes. The agents pass through states in the following order:

    • binding
    • discovering
    • insufficient
    • installing
    • installing-in-progress
    • added-to-existing-cluster
  2. Enter the following command:

    $ oc -n <hosted_control_plane_namespace> get agent

    Example output

    NAME                                   CLUSTER         APPROVED   ROLE          STAGE
    4dac1ab2-7dd5-4894-a220-6a3473b67ee6   hypercluster1   true       auto-assign
    d9198891-39f4-4930-a679-65fb142b108b                   true       auto-assign
    da503cf1-a347-44f2-875c-4960ddb04091   hypercluster1   true       auto-assign

  3. Enter the following command:

    $ oc -n <hosted_control_plane_namespace> get agent -o jsonpath='{range .items[*]}BMH: {@.metadata.labels.agent-install\.openshift\.io/bmh} Agent: {@.metadata.name} State: {@.status.debugInfo.state}{"\n"}{end}'

    Example output

    BMH: ocp-worker-2 Agent: 4dac1ab2-7dd5-4894-a220-6a3473b67ee6 State: binding
    BMH: ocp-worker-0 Agent: d9198891-39f4-4930-a679-65fb142b108b State: known-unbound
    BMH: ocp-worker-1 Agent: da503cf1-a347-44f2-875c-4960ddb04091 State: insufficient

  4. Obtain the kubeconfig for your new hosted cluster by entering the extract command:

    $ oc extract -n <hosted_cluster_namespace> secret/<hosted_cluster_name>-admin-kubeconfig --to=- > kubeconfig-<hosted_cluster_name>
  5. After the agents reach the added-to-existing-cluster state, verify that you can see the OpenShift Container Platform nodes in the hosted cluster by entering the following command:

    $ oc --kubeconfig kubeconfig-<hosted_cluster_name> get nodes

    Example output

    NAME           STATUS   ROLES    AGE     VERSION
    ocp-worker-1   Ready    worker   5m41s   v1.24.0+3882f8f
    ocp-worker-2   Ready    worker   6m3s    v1.24.0+3882f8f

    Cluster Operators start to reconcile by adding workloads to the nodes.

  6. Enter the following command to verify that two machines were created when you scaled up the NodePool object:

    $ oc -n <hosted_control_plane_namespace> get machines

    Example output

    NAME                            CLUSTER               NODENAME       PROVIDERID                                     PHASE     AGE   VERSION
    hypercluster1-c96b6f675-m5vch   hypercluster1-b2qhl   ocp-worker-1   agent://da503cf1-a347-44f2-875c-4960ddb04091   Running   15m   4.x.z
    hypercluster1-c96b6f675-tl42p   hypercluster1-b2qhl   ocp-worker-2   agent://4dac1ab2-7dd5-4894-a220-6a3473b67ee6   Running   15m   4.x.z

    The clusterversion reconcile process eventually reaches a point where only Ingress and Console cluster operators are missing.

  7. Enter the following command:

    $ oc --kubeconfig kubeconfig-<hosted_cluster_name> get clusterversion,co

    Example output

    NAME                                         VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
    clusterversion.config.openshift.io/version             False       True          40m     Unable to apply 4.x.z: the cluster operator console has not yet successfully rolled out
    
    NAME                                                                             VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
    clusteroperator.config.openshift.io/console                                      4.12z     False       False         False      11m     RouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.hypercluster1.domain.com): Get "https://console-openshift-console.apps.hypercluster1.domain.com": dial tcp 10.19.3.29:443: connect: connection refused
    clusteroperator.config.openshift.io/csi-snapshot-controller                      4.12z     True        False         False      10m
    clusteroperator.config.openshift.io/dns                                          4.12z     True        False         False      9m16s

5.2.2.1. Adding node pools

You can create node pools for a hosted cluster by specifying a name, number of replicas, and any additional information, such as an agent label selector.

Procedure

  1. To create a node pool, enter the following information:

    $ hcp create nodepool agent \
      --cluster-name <hosted_cluster_name> \1
      --name <nodepool_name> \2
      --node-count <worker_node_count> \3
      --agentLabelSelector size=medium 4
    1
    Replace <hosted_cluster_name> with your hosted cluster name.
    2
    Replace <nodepool_name> with the name of your node pool, for example, <hosted_cluster_name>-extra-cpu.
    3
    Replace <worker_node_count> with the worker node count, for example, 2.
    4
    The --agentLabelSelector flag is optional. The node pool uses agents with the size=medium label.
  2. Check the status of the node pool by listing nodepool resources in the clusters namespace:

    $ oc get nodepools --namespace clusters
  3. Extract the admin-kubeconfig secret by entering the following command:

    $ oc extract -n <hosted_control_plane_namespace> secret/admin-kubeconfig --to=./hostedcluster-secrets --confirm

    Example output

    hostedcluster-secrets/kubeconfig

  4. After some time, you can check the status of the node pool by entering the following command:

    $ oc --kubeconfig ./hostedcluster-secrets get nodes

Verification

  • Verify that the number of available node pools matches the number of expected node pools by entering this command:

    $ oc get nodepools --namespace clusters

5.2.2.2. Enabling node auto-scaling for the hosted cluster

When you need more capacity in your hosted cluster and spare agents are available, you can enable auto-scaling to install new worker nodes.

Procedure

  1. To enable auto-scaling, enter the following command:

    $ oc -n <hosted_cluster_namespace> patch nodepool <hosted_cluster_name> --type=json -p '[{"op": "remove", "path": "/spec/replicas"},{"op":"add", "path": "/spec/autoScaling", "value": { "max": 5, "min": 2 }}]'
    Note

    In the example, the minimum number of nodes is 2, and the maximum is 5. The maximum number of nodes that you can add might be bound by your platform. For example, if you use the Agent platform, the maximum number of nodes is bound by the number of available agents.

  2. Create a workload that requires a new node.

    1. Create a YAML file that contains the workload configuration, by using the following example:

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        creationTimestamp: null
        labels:
          app: reversewords
        name: reversewords
        namespace: default
      spec:
        replicas: 40
        selector:
          matchLabels:
            app: reversewords
        strategy: {}
        template:
          metadata:
            creationTimestamp: null
            labels:
              app: reversewords
          spec:
            containers:
            - image: quay.io/mavazque/reversewords:latest
              name: reversewords
              resources:
                requests:
                  memory: 2Gi
      status: {}
    2. Save the file as workload-config.yaml.
    3. Apply the YAML by entering the following command:

      $ oc apply -f workload-config.yaml
  3. Extract the admin-kubeconfig secret by entering the following command:

    $ oc extract -n <hosted_cluster_namespace> secret/<hosted_cluster_name>-admin-kubeconfig --to=./hostedcluster-secrets --confirm

    Example output

    hostedcluster-secrets/kubeconfig

  4. You can check if new nodes are in the Ready status by entering the following command:

    $ oc --kubeconfig ./hostedcluster-secrets get nodes
  5. To remove the node, delete the workload by entering the following command:

    $ oc --kubeconfig ./hostedcluster-secrets -n <namespace> delete deployment <deployment_name>
  6. Wait for several minutes to pass without requiring the additional capacity. On the Agent platform, the agent is decommissioned and can be reused. You can confirm that the node was removed by entering the following command:

    $ oc --kubeconfig ./hostedcluster-secrets get nodes
Note

For IBM Z agents, compute nodes are detached from the cluster only for IBM Z with KVM agents. For z/VM and LPAR, you must delete the compute nodes manually.

Agents can be reused only for IBM Z with KVM. For z/VM and LPAR, re-create the agents to use them as compute nodes.

5.2.2.3. Disabling node auto-scaling for the hosted cluster

To disable node auto-scaling, complete the following procedure.

Procedure

  • Enter the following command to disable node auto-scaling for the hosted cluster:

    $ oc -n <hosted_cluster_namespace> patch nodepool <hosted_cluster_name> --type=json -p '[{"op":"remove", "path": "/spec/autoScaling"},{"op": "add", "path": "/spec/replicas", "value": <specify_value_to_scale_replicas>}]'

    The command removes "spec.autoScaling" from the YAML file, adds "spec.replicas", and sets "spec.replicas" to the integer value that you specify.
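
    You can verify the result with a check similar to the verification commands used elsewhere in this section:

    $ oc -n <hosted_cluster_namespace> get nodepool <hosted_cluster_name> -o yaml | grep -E 'autoScaling|replicas'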

5.2.3. Handling ingress in a hosted cluster on bare metal

Every OpenShift Container Platform cluster has a default application Ingress Controller that typically has an external DNS record associated with it. For example, if you create a hosted cluster named example with the base domain krnl.es, you can expect the wildcard domain *.apps.example.krnl.es to be routable.
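
After the load balancer and wildcard DNS record from the following procedure are in place, you can confirm that the record resolves, for example with dig:

$ dig +short console-openshift-console.apps.example.krnl.es

The command returns the IP address that you assign to the MetalLB IPAddressPool later in this procedure, for example 192.168.122.23.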

Procedure

To set up a load balancer and wildcard DNS record for the *.apps domain, perform the following actions on your guest cluster:

  1. Deploy MetalLB by creating a YAML file that contains the configuration for the MetalLB Operator:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: metallb
      labels:
        openshift.io/cluster-monitoring: "true"
      annotations:
        workload.openshift.io/allowed: management
    ---
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: metallb-operator-operatorgroup
      namespace: metallb
    ---
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: metallb-operator
      namespace: metallb
    spec:
      channel: "stable"
      name: metallb-operator
      source: redhat-operators
      sourceNamespace: openshift-marketplace
  2. Save the file as metallb-operator-config.yaml.
  3. Enter the following command to apply the configuration:

    $ oc apply -f metallb-operator-config.yaml
  4. After the Operator is running, create the MetalLB instance:

    1. Create a YAML file that contains the configuration for the MetalLB instance:

      apiVersion: metallb.io/v1beta1
      kind: MetalLB
      metadata:
        name: metallb
        namespace: metallb
    2. Save the file as metallb-instance-config.yaml.
    3. Create the MetalLB instance by entering this command:

      $ oc apply -f metallb-instance-config.yaml
  5. Create an IPAddressPool resource with a single IP address. This IP address must be on the same subnet as the network that the cluster nodes use.

    1. Create a file, such as ipaddresspool.yaml, with content like the following example:

      apiVersion: metallb.io/v1beta1
      kind: IPAddressPool
      metadata:
        namespace: metallb
        name: <ip_address_pool_name> 1
      spec:
        addresses:
          - <ingress_ip>-<ingress_ip> 2
        autoAssign: false
      1
      Specify the IPAddressPool resource name.
      2
      Specify the IP address for your environment. For example, 192.168.122.23.
    2. Apply the configuration for the IP address pool by entering the following command:

      $ oc apply -f ipaddresspool.yaml
  6. Create an L2 advertisement.

    1. Create a file, such as l2advertisement.yaml, with content like the following example:

      apiVersion: metallb.io/v1beta1
      kind: L2Advertisement
      metadata:
        name: <l2_advertisement_name> 1
        namespace: metallb
      spec:
        ipAddressPools:
         - <ip_address_pool_name> 2
      1
      Specify the L2Advertisement resource name.
      2
      Specify the IPAddressPool resource name.
    2. Apply the configuration by entering the following command:

      $ oc apply -f l2advertisement.yaml
  7. After creating a service of the LoadBalancer type, MetalLB adds an external IP address for the service.

    1. Configure a new load balancer service that routes ingress traffic to the ingress deployment by creating a YAML file named metallb-loadbalancer-service.yaml:

      kind: Service
      apiVersion: v1
      metadata:
        annotations:
          metallb.universe.tf/address-pool: ingress-public-ip
        name: metallb-ingress
        namespace: openshift-ingress
      spec:
        ports:
          - name: http
            protocol: TCP
            port: 80
            targetPort: 80
          - name: https
            protocol: TCP
            port: 443
            targetPort: 443
        selector:
          ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default
        type: LoadBalancer
    2. Save the metallb-loadbalancer-service.yaml file.
    3. Enter the following command to apply the YAML configuration:

      $ oc apply -f metallb-loadbalancer-service.yaml
    4. Enter the following command to reach the OpenShift Container Platform console:

      $ curl -kI https://console-openshift-console.apps.example.krnl.es

      Example output

      HTTP/1.1 200 OK

    5. Check the clusterversion and clusteroperator values to verify that everything is running. Enter the following command:

      $ oc --kubeconfig <hosted_cluster_name>.kubeconfig get clusterversion,co

      Example output

      NAME                                         VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
      clusterversion.config.openshift.io/version   4.x.y      True        False        3m32s   Cluster version is 4.x.y
      
      NAME                                                                             VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
      clusteroperator.config.openshift.io/console                                      4.x.y     True        False         False      3m50s
      clusteroperator.config.openshift.io/ingress                                      4.x.y     True        False         False      53m

      Replace <4.x.y> with the supported OpenShift Container Platform version that you want to use, for example, 4.17.0-multi.

5.2.4. Enabling machine health checks on bare metal

You can enable machine health checks on bare metal to repair and replace unhealthy managed cluster nodes automatically. You must have additional agent machines that are ready to install in the managed cluster.

Consider the following limitations before enabling machine health checks:

  • You cannot modify the MachineHealthCheck object.
  • Machine health checks replace nodes only when at least two nodes stay in the False or Unknown status for more than 8 minutes.

After you enable machine health checks for the managed cluster nodes, the MachineHealthCheck object is created in your hosted cluster.

Procedure

To enable machine health checks in your hosted cluster, modify the NodePool resource. Complete the following steps:

  1. Verify that the spec.nodeDrainTimeout value in your NodePool resource is greater than 0s. Replace <hosted_cluster_namespace> with the name of your hosted cluster namespace and <nodepool_name> with the node pool name. Run the following command:

    $ oc get nodepool -n <hosted_cluster_namespace> <nodepool_name> -o yaml | grep nodeDrainTimeout

    Example output

    nodeDrainTimeout: 30s

  2. If the spec.nodeDrainTimeout value is not greater than 0s, modify the value by running the following command:

    $ oc patch nodepool -n <hosted_cluster_namespace> <nodepool_name> -p '{"spec":{"nodeDrainTimeout": "30m"}}' --type=merge
  3. Enable machine health checks by setting the spec.management.autoRepair field to true in the NodePool resource. Run the following command:

    $ oc patch nodepool -n <hosted_cluster_namespace> <nodepool_name> -p '{"spec": {"management": {"autoRepair":true}}}' --type=merge
  4. Verify that the NodePool resource is updated with the autoRepair: true value by running the following command:

    $ oc get nodepool -n <hosted_cluster_namespace> <nodepool_name> -o yaml | grep autoRepair

5.2.5. Disabling machine health checks on bare metal

To disable machine health checks for the managed cluster nodes, modify the NodePool resource.

Procedure

  1. Disable machine health checks by setting the spec.management.autoRepair field to false in the NodePool resource. Run the following command:

    $ oc patch nodepool -n <hosted_cluster_namespace> <nodepool_name> -p '{"spec": {"management": {"autoRepair":false}}}' --type=merge
  2. Verify that the NodePool resource is updated with the autoRepair: false value by running the following command:

    $ oc get nodepool -n <hosted_cluster_namespace> <nodepool_name> -o yaml | grep autoRepair

5.3. Managing hosted control planes on OpenShift Virtualization

After you deploy a hosted cluster on OpenShift Virtualization, you can manage the cluster by completing the following procedures.

5.3.1. Accessing the hosted cluster

You can access the hosted cluster by either getting the kubeconfig file and kubeadmin credential directly from resources, or by using the hcp command line interface to generate a kubeconfig file.

Prerequisites

To access the hosted cluster by getting the kubeconfig file and credentials directly from resources, you must be familiar with the access secrets for hosted clusters. The hosted cluster (hosting) namespace contains hosted cluster resources and the access secrets. The hosted control plane namespace is where the hosted control plane runs.

The secret name formats are as follows:

  • kubeconfig secret: <hosted_cluster_namespace>-<name>-admin-kubeconfig (clusters-hypershift-demo-admin-kubeconfig)
  • kubeadmin password secret: <hosted_cluster_namespace>-<name>-kubeadmin-password (clusters-hypershift-demo-kubeadmin-password)

The kubeconfig secret contains a Base64-encoded kubeconfig field, which you can decode and save into a file to use with the following command:

$ oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes

The kubeadmin password secret is also Base64-encoded. You can decode it and use the password to log in to the API server or console of the hosted cluster.

Procedure

  • To access the hosted cluster by using the hcp CLI to generate the kubeconfig file, take the following steps:

    1. Generate the kubeconfig file by entering the following command:

      $ hcp create kubeconfig --namespace <hosted_cluster_namespace> --name <hosted_cluster_name> > <hosted_cluster_name>.kubeconfig
    2. After you save the kubeconfig file, you can access the hosted cluster by entering the following example command:

      $ oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes

5.3.2. Configuring storage for hosted control planes on OpenShift Virtualization

If you do not provide any advanced storage configuration, the default storage class is used for the KubeVirt virtual machine (VM) images, the KubeVirt Container Storage Interface (CSI) mapping, and the etcd volumes.

The following table lists the capabilities that the infrastructure must provide to support persistent storage in a hosted cluster:

Table 5.1. Persistent storage modes in a hosted cluster

  • Infrastructure CSI provider: Any RWX Block CSI provider
    Hosted cluster CSI provider: kubevirt-csi
    Hosted cluster capabilities: Basic: RWO Block and File, RWX Block and Snapshot
    Notes: Recommended
  • Infrastructure CSI provider: Any RWX Block CSI provider
    Hosted cluster CSI provider: Red Hat OpenShift Data Foundation external mode
    Hosted cluster capabilities: Red Hat OpenShift Data Foundation feature set
    Notes: None
  • Infrastructure CSI provider: Any RWX Block CSI provider
    Hosted cluster CSI provider: Red Hat OpenShift Data Foundation internal mode
    Hosted cluster capabilities: Red Hat OpenShift Data Foundation feature set
    Notes: Do not use

5.3.2.1. Mapping KubeVirt CSI storage classes

KubeVirt CSI supports mapping an infrastructure storage class that is capable of ReadWriteMany (RWX) access. You can map the infrastructure storage class to a hosted cluster storage class during cluster creation.

Procedure

  • To map the infrastructure storage class to the hosted storage class, use the --infra-storage-class-mapping argument by running the following command:

    $ hcp create cluster kubevirt \
      --name <hosted_cluster_name> \ 1
      --node-pool-replicas <worker_node_count> \ 2
      --pull-secret <path_to_pull_secret> \ 3
      --memory <memory> \ 4
      --cores <cpu> \ 5
      --infra-storage-class-mapping=<infrastructure_storage_class>/<hosted_storage_class> 6
    1
    Specify the name of your hosted cluster, for instance, example.
    2
    Specify the worker count, for example, 2.
    3
    Specify the path to your pull secret, for example, /user/name/pullsecret.
    4
    Specify a value for memory, for example, 8Gi.
    5
    Specify a value for CPU, for example, 2.
    6
    Replace <infrastructure_storage_class> with the infrastructure storage class name and <hosted_storage_class> with the hosted cluster storage class name. You can use the --infra-storage-class-mapping argument multiple times within the hcp create cluster command.

After you create the hosted cluster, the infrastructure storage class is visible within the hosted cluster. When you create a Persistent Volume Claim (PVC) within the hosted cluster that uses one of those storage classes, KubeVirt CSI provisions that volume by using the infrastructure storage class mapping that you configured during cluster creation.
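
For instance, a PVC in the hosted cluster that requests the mapped storage class might look like the following minimal sketch, where <hosted_storage_class> is the hosted cluster storage class name from your mapping:

$ oc --kubeconfig <hosted_cluster_name>.kubeconfig apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: <hosted_storage_class>
EOF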

Note

KubeVirt CSI supports mapping only an infrastructure storage class that is capable of RWX access.

The following table shows how volume and access mode capabilities map to KubeVirt CSI storage classes:

Table 5.2. Mapping KubeVirt CSI storage classes to access and volume modes

  • Infrastructure CSI capability: RWX Block or Filesystem
    Hosted cluster CSI capability: ReadWriteOnce (RWO) Block or Filesystem; RWX Block only
    VM live migration support: Supported
    Notes: Use Block mode because Filesystem volume mode results in degraded hosted Block mode performance. RWX Block volume mode is supported only when the hosted cluster is OpenShift Container Platform 4.16 or later.
  • Infrastructure CSI capability: RWO Block storage
    Hosted cluster CSI capability: RWO Block storage or Filesystem
    VM live migration support: Not supported
    Notes: Lack of live migration support affects the ability to update the underlying infrastructure cluster that hosts the KubeVirt VMs.
  • Infrastructure CSI capability: RWO FileSystem
    Hosted cluster CSI capability: RWO Block or Filesystem
    VM live migration support: Not supported
    Notes: Lack of live migration support affects the ability to update the underlying infrastructure cluster that hosts the KubeVirt VMs. Use of the infrastructure Filesystem volume mode results in degraded hosted Block mode performance.

5.3.2.2. Mapping a single KubeVirt CSI volume snapshot class

You can expose your infrastructure volume snapshot class to the hosted cluster by using KubeVirt CSI.

Procedure

  • To map your volume snapshot class to the hosted cluster, use the --infra-volumesnapshot-class-mapping argument when creating a hosted cluster. Run the following command:

    $ hcp create cluster kubevirt \
      --name <hosted_cluster_name> \ 1
      --node-pool-replicas <worker_node_count> \ 2
      --pull-secret <path_to_pull_secret> \ 3
      --memory <memory> \ 4
      --cores <cpu> \ 5
      --infra-storage-class-mapping=<infrastructure_storage_class>/<hosted_storage_class> \ 6
      --infra-volumesnapshot-class-mapping=<infrastructure_volume_snapshot_class>/<hosted_volume_snapshot_class> 7
    1
    Specify the name of your hosted cluster, for instance, example.
    2
    Specify the worker count, for example, 2.
    3
    Specify the path to your pull secret, for example, /user/name/pullsecret.
    4
    Specify a value for memory, for example, 8Gi.
    5
    Specify a value for CPU, for example, 2.
    6
    Replace <infrastructure_storage_class> with the storage class present in the infrastructure cluster. Replace <hosted_storage_class> with the storage class present in the hosted cluster.
    7
    Replace <infrastructure_volume_snapshot_class> with the volume snapshot class present in the infrastructure cluster. Replace <hosted_volume_snapshot_class> with the volume snapshot class present in the hosted cluster.
    Note

    If you do not use the --infra-storage-class-mapping and --infra-volumesnapshot-class-mapping arguments, a hosted cluster is created with the default storage class and the volume snapshot class. Therefore, you must set the default storage class and the volume snapshot class in the infrastructure cluster.

5.3.2.3. Mapping multiple KubeVirt CSI volume snapshot classes

You can map multiple volume snapshot classes to the hosted cluster by assigning them to a specific group. The infrastructure storage class and the volume snapshot class are compatible with each other only if they belong to the same group.

Procedure

  • To map multiple volume snapshot classes to the hosted cluster, use the group option when creating a hosted cluster. Run the following command:

    $ hcp create cluster kubevirt \
      --name <hosted_cluster_name> \ 1
      --node-pool-replicas <worker_node_count> \ 2
      --pull-secret <path_to_pull_secret> \ 3
      --memory <memory> \ 4
      --cores <cpu> \ 5
      --infra-storage-class-mapping=<infrastructure_storage_class>/<hosted_storage_class>,group=<group_name> \ 6
      --infra-storage-class-mapping=<infrastructure_storage_class>/<hosted_storage_class>,group=<group_name> \
      --infra-storage-class-mapping=<infrastructure_storage_class>/<hosted_storage_class>,group=<group_name> \
      --infra-volumesnapshot-class-mapping=<infrastructure_volume_snapshot_class>/<hosted_volume_snapshot_class>,group=<group_name> \ 7
      --infra-volumesnapshot-class-mapping=<infrastructure_volume_snapshot_class>/<hosted_volume_snapshot_class>,group=<group_name>
    1
    Specify the name of your hosted cluster, for instance, example.
    2
    Specify the worker count, for example, 2.
    3
    Specify the path to your pull secret, for example, /user/name/pullsecret.
    4
    Specify a value for memory, for example, 8Gi.
    5
    Specify a value for CPU, for example, 2.
    6
    Replace <infrastructure_storage_class> with the storage class present in the infrastructure cluster. Replace <hosted_storage_class> with the storage class present in the hosted cluster. Replace <group_name> with the group name. For example, infra-storage-class-mygroup/hosted-storage-class-mygroup,group=mygroup and infra-storage-class-mymap/hosted-storage-class-mymap,group=mymap.
    7
    Replace <infrastructure_volume_snapshot_class> with the volume snapshot class present in the infrastructure cluster. Replace <hosted_volume_snapshot_class> with the volume snapshot class present in the hosted cluster. For example, infra-vol-snap-mygroup/hosted-vol-snap-mygroup,group=mygroup and infra-vol-snap-mymap/hosted-vol-snap-mymap,group=mymap.

5.3.2.4. Configuring KubeVirt VM root volume

At cluster creation time, you can configure the storage class that is used to host the KubeVirt VM root volumes by using the --root-volume-storage-class argument.

Procedure

  • To set a custom storage class and volume size for KubeVirt VMs, run the following command:

    $ hcp create cluster kubevirt \
      --name <hosted_cluster_name> \ 1
      --node-pool-replicas <worker_node_count> \ 2
      --pull-secret <path_to_pull_secret> \ 3
      --memory <memory> \ 4
      --cores <cpu> \ 5
      --root-volume-storage-class <root_volume_storage_class> \ 6
      --root-volume-size <volume_size> 7
    1
    Specify the name of your hosted cluster, for instance, example.
    2
    Specify the worker count, for example, 2.
    3
    Specify the path to your pull secret, for example, /user/name/pullsecret.
    4
    Specify a value for memory, for example, 8Gi.
    5
    Specify a value for CPU, for example, 2.
    6
    Specify a name of the storage class to host the KubeVirt VM root volumes, for example, ocs-storagecluster-ceph-rbd.
    7
    Specify the volume size, for example, 64.

    As a result, the hosted cluster is created with the VM root volumes hosted on PVCs.
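
    For example, a command that uses the sample values from the callouts, including the ocs-storagecluster-ceph-rbd storage class example, might look like this:

    $ hcp create cluster kubevirt \
      --name example \
      --node-pool-replicas 2 \
      --pull-secret /user/name/pullsecret \
      --memory 8Gi \
      --cores 2 \
      --root-volume-storage-class ocs-storagecluster-ceph-rbd \
      --root-volume-size 64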

5.3.2.5. Enabling KubeVirt VM image caching

You can use KubeVirt VM image caching to optimize both cluster startup time and storage usage. KubeVirt VM image caching supports the use of a storage class that is capable of smart cloning and the ReadWriteMany access mode. For more information about smart cloning, see Cloning a data volume using smart-cloning.

Image caching works as follows:

  1. The VM image is imported to a PVC that is associated with the hosted cluster.
  2. A unique clone of that PVC is created for every KubeVirt VM that is added as a worker node to the cluster.

Image caching reduces VM startup time by requiring only a single image import. It can further reduce overall cluster storage usage when the storage class supports copy-on-write cloning.

Procedure

  • To enable image caching, during cluster creation, use the --root-volume-cache-strategy=PVC argument by running the following command:

    $ hcp create cluster kubevirt \
      --name <hosted_cluster_name> \ 1
      --node-pool-replicas <worker_node_count> \ 2
      --pull-secret <path_to_pull_secret> \ 3
      --memory <memory> \ 4
      --cores <cpu> \ 5
      --root-volume-cache-strategy=PVC 6
    1
    Specify the name of your hosted cluster, for instance, example.
    2
    Specify the worker count, for example, 2.
    3
    Specify the path to your pull secret, for example, /user/name/pullsecret.
    4
    Specify a value for memory, for example, 8Gi.
    5
    Specify a value for CPU, for example, 2.
    6
    Specify a strategy for image caching, for example, PVC.

5.3.2.6. KubeVirt CSI storage security and isolation

KubeVirt Container Storage Interface (CSI) extends the storage capabilities of the underlying infrastructure cluster to hosted clusters. The CSI driver ensures secure and isolated access to the infrastructure storage classes and hosted clusters by using the following security constraints:

  • The storage of a hosted cluster is isolated from the other hosted clusters.
  • Worker nodes in a hosted cluster do not have direct API access to the infrastructure cluster. The hosted cluster can provision storage on the infrastructure cluster only through the controlled KubeVirt CSI interface.
  • The hosted cluster does not have access to the KubeVirt CSI cluster controller. As a result, the hosted cluster cannot access arbitrary storage volumes on the infrastructure cluster that are not associated with the hosted cluster. The KubeVirt CSI cluster controller runs in a pod in the hosted control plane namespace.
  • Role-based access control (RBAC) of the KubeVirt CSI cluster controller limits the persistent volume claim (PVC) access to only the hosted control plane namespace. Therefore, KubeVirt CSI components cannot access storage from the other namespaces.

5.3.2.7. Configuring etcd storage

At cluster creation time, you can configure the storage class that is used to host etcd data by using the --etcd-storage-class argument.

Procedure

  • To configure a storage class for etcd, run the following command:

    $ hcp create cluster kubevirt \
      --name <hosted_cluster_name> \ 1
      --node-pool-replicas <worker_node_count> \ 2
      --pull-secret <path_to_pull_secret> \ 3
      --memory <memory> \ 4
      --cores <cpu> \ 5
      --etcd-storage-class=<etcd_storage_class_name> 6
    1
    Specify the name of your hosted cluster, for instance, example.
    2
    Specify the worker count, for example, 2.
    3
    Specify the path to your pull secret, for example, /user/name/pullsecret.
    4
    Specify a value for memory, for example, 8Gi.
    5
    Specify a value for CPU, for example, 2.
    6
    Specify the etcd storage class name, for example, lvm-storageclass. If you do not provide an --etcd-storage-class argument, the default storage class is used.
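
    To confirm that etcd uses the storage class that you specified, you can list the persistent volume claims in the hosted control plane namespace. This check assumes that the etcd PVC names contain the string etcd:

    $ oc get pvc -n <hosted_control_plane_namespace> | grep etcd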

5.3.3. Attaching NVIDIA GPU devices by using the hcp CLI

You can attach one or more NVIDIA graphics processing unit (GPU) devices to node pools by using the hcp command-line interface (CLI) in a hosted cluster on OpenShift Virtualization.

Important

Attaching NVIDIA GPU devices to node pools is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Procedure

  • Attach the GPU device to node pools during cluster creation by running the following command:

    $ hcp create cluster kubevirt \
      --name <hosted_cluster_name> \ 1
      --node-pool-replicas <worker_node_count> \ 2
      --pull-secret <path_to_pull_secret> \ 3
      --memory <memory> \ 4
      --cores <cpu> \ 5
      --host-device-name="<gpu_device_name>,count:<value>" 6
    1
    Specify the name of your hosted cluster, for instance, example.
    2
    Specify the worker count, for example, 3.
    3
    Specify the path to your pull secret, for example, /user/name/pullsecret.
    4
    Specify a value for memory, for example, 16Gi.
    5
    Specify a value for CPU, for example, 2.
    6
    Specify the GPU device name and the count, for example, --host-device-name="nvidia-a100,count:2". The --host-device-name argument takes the name of the GPU device from the infrastructure node and an optional count that represents the number of GPU devices you want to attach to each virtual machine (VM) in node pools. The default count is 1. For example, if you attach 2 GPU devices to 3 node pool replicas, all 3 VMs in the node pool are attached to the 2 GPU devices.
    Tip

    You can use the --host-device-name argument multiple times to attach multiple devices of different types.
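
    For example, the following command attaches two device types to each VM in the node pool. The nvidia-a100 name comes from the earlier example; nvidia-h100 is a hypothetical second device name that must match a device that is exposed on your infrastructure nodes:

    $ hcp create cluster kubevirt \
      --name example \
      --node-pool-replicas 3 \
      --pull-secret /user/name/pullsecret \
      --memory 16Gi \
      --cores 2 \
      --host-device-name="nvidia-a100,count:2" \
      --host-device-name="nvidia-h100,count:1"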

5.3.4. Attaching NVIDIA GPU devices by using the NodePool resource

You can attach one or more NVIDIA graphics processing unit (GPU) devices to node pools by configuring the nodepool.spec.platform.kubevirt.hostDevices field in the NodePool resource.

Important

Attaching NVIDIA GPU devices to node pools is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Procedure

  • Attach one or more GPU devices to node pools:

    • To attach a single GPU device, configure the NodePool resource by using the following example configuration:

      apiVersion: hypershift.openshift.io/v1beta1
      kind: NodePool
      metadata:
        name: <hosted_cluster_name> 1
        namespace: <hosted_cluster_namespace> 2
      spec:
        arch: amd64
        clusterName: <hosted_cluster_name>
        management:
          autoRepair: false
          upgradeType: Replace
        nodeDrainTimeout: 0s
        nodeVolumeDetachTimeout: 0s
        platform:
          kubevirt:
            attachDefaultNetwork: true
            compute:
              cores: <cpu> 3
              memory: <memory> 4
            hostDevices: 5
            - count: <count> 6
              deviceName: <gpu_device_name> 7
            networkInterfaceMultiqueue: Enable
            rootVolume:
              persistent:
                size: 32Gi
              type: Persistent
          type: KubeVirt
        replicas: <worker_node_count> 8
      1
      Specify the name of your hosted cluster, for instance, example.
      2
      Specify the name of the hosted cluster namespace, for example, clusters.
      3
      Specify a value for CPU, for example, 2.
      4
      Specify a value for memory, for example, 16Gi.
      5
      The hostDevices field defines a list of different types of GPU devices that you can attach to node pools.
      6
      Specify the number of GPU devices you want to attach to each virtual machine (VM) in node pools. For example, if you attach 2 GPU devices to 3 node pool replicas, all 3 VMs in the node pool are attached to the 2 GPU devices. The default count is 1.
      7
      Specify the GPU device name, for example, nvidia-a100.
      8
      Specify the worker count, for example, 3.
    • To attach multiple GPU devices, configure the NodePool resource by using the following example configuration:

      apiVersion: hypershift.openshift.io/v1beta1
      kind: NodePool
      metadata:
        name: <hosted_cluster_name>
        namespace: <hosted_cluster_namespace>
      spec:
        arch: amd64
        clusterName: <hosted_cluster_name>
        management:
          autoRepair: false
          upgradeType: Replace
        nodeDrainTimeout: 0s
        nodeVolumeDetachTimeout: 0s
        platform:
          kubevirt:
            attachDefaultNetwork: true
            compute:
              cores: <cpu>
              memory: <memory>
            hostDevices:
            - count: <count>
              deviceName: <gpu_device_name>
            - count: <count>
              deviceName: <gpu_device_name>
            - count: <count>
              deviceName: <gpu_device_name>
            - count: <count>
              deviceName: <gpu_device_name>
            networkInterfaceMultiqueue: Enable
            rootVolume:
              persistent:
                size: 32Gi
              type: Persistent
          type: KubeVirt
        replicas: <worker_node_count>
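
      After you configure the NodePool resource in a file, for example a hypothetical nodepool-gpu.yaml file, apply it on the management cluster by entering the following command:

      $ oc apply -f nodepool-gpu.yaml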

5.4. Managing hosted control planes on non-bare metal agent machines

After you deploy hosted control planes on non-bare metal agent machines, you can manage a hosted cluster by completing the following tasks.

5.4.1. Accessing the hosted cluster

You can access the hosted cluster by either getting the kubeconfig file and kubeadmin credential directly from resources, or by using the hcp command-line interface (CLI) to generate a kubeconfig file.

Prerequisites

To access the hosted cluster by getting the kubeconfig file and credentials directly from resources, you must be familiar with the access secrets for hosted clusters. The hosted cluster (hosting) namespace contains hosted cluster resources and the access secrets. The hosted control plane namespace is where the hosted control plane runs.

The secret name formats are as follows:

  • kubeconfig secret: <hosted_cluster_namespace>-<name>-admin-kubeconfig. For example, clusters-hypershift-demo-admin-kubeconfig.
  • kubeadmin password secret: <hosted_cluster_namespace>-<name>-kubeadmin-password. For example, clusters-hypershift-demo-kubeadmin-password.

The kubeconfig secret contains a Base64-encoded kubeconfig field, which you can decode and save into a file to use with the following command:

$ oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes

The kubeadmin password secret is also Base64-encoded. You can decode it and use the password to log in to the API server or console of the hosted cluster.
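
For example, a minimal sketch that decodes both secrets, assuming the secret name formats listed above and assuming that the kubeadmin password is stored under the password data key:

$ oc get secret -n <hosted_cluster_namespace> <hosted_cluster_namespace>-<hosted_cluster_name>-admin-kubeconfig -o jsonpath='{.data.kubeconfig}' | base64 -d > <hosted_cluster_name>.kubeconfig

$ oc get secret -n <hosted_cluster_namespace> <hosted_cluster_namespace>-<hosted_cluster_name>-kubeadmin-password -o jsonpath='{.data.password}' | base64 -d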

Procedure

  • To access the hosted cluster by using the hcp CLI to generate the kubeconfig file, take the following steps:

    1. Generate the kubeconfig file by entering the following command:

      $ hcp create kubeconfig --namespace <hosted_cluster_namespace> --name <hosted_cluster_name> > <hosted_cluster_name>.kubeconfig
    2. After you save the kubeconfig file, you can access the hosted cluster by entering the following example command:

      $ oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes

5.4.2. Scaling the NodePool object for a hosted cluster

You can scale up the NodePool object by adding nodes to your hosted cluster. When you scale a node pool, consider the following information:

  • When you scale up a node pool by a replica, a machine is created. For every machine, the Cluster API provider finds and installs an Agent that meets the requirements that are specified in the node pool specification. You can monitor the installation of an Agent by checking its status and conditions.
  • When you scale down a node pool, Agents are unbound from the corresponding cluster. Before you can reuse the Agents, you must restart them by using the Discovery image.

Procedure

  1. Scale the NodePool object to two nodes:

    $ oc -n <hosted_cluster_namespace> scale nodepool <nodepool_name> --replicas 2

    The Cluster API agent provider randomly picks two agents that are then assigned to the hosted cluster. Those agents go through different states and finally join the hosted cluster as OpenShift Container Platform nodes. The agents pass through states in the following order:

    • binding
    • discovering
    • insufficient
    • installing
    • installing-in-progress
    • added-to-existing-cluster
  2. Enter the following command:

    $ oc -n <hosted_control_plane_namespace> get agent

    Example output

    NAME                                   CLUSTER         APPROVED   ROLE          STAGE
    4dac1ab2-7dd5-4894-a220-6a3473b67ee6   hypercluster1   true       auto-assign
    d9198891-39f4-4930-a679-65fb142b108b                   true       auto-assign
    da503cf1-a347-44f2-875c-4960ddb04091   hypercluster1   true       auto-assign

  3. Enter the following command:

    $ oc -n <hosted_control_plane_namespace> get agent -o jsonpath='{range .items[*]}BMH: {@.metadata.labels.agent-install\.openshift\.io/bmh} Agent: {@.metadata.name} State: {@.status.debugInfo.state}{"\n"}{end}'

    Example output

    BMH: ocp-worker-2 Agent: 4dac1ab2-7dd5-4894-a220-6a3473b67ee6 State: binding
    BMH: ocp-worker-0 Agent: d9198891-39f4-4930-a679-65fb142b108b State: known-unbound
    BMH: ocp-worker-1 Agent: da503cf1-a347-44f2-875c-4960ddb04091 State: insufficient

  4. Obtain the kubeconfig for your new hosted cluster by entering the extract command:

    $ oc extract -n <hosted_cluster_namespace> secret/<hosted_cluster_name>-admin-kubeconfig --to=- > kubeconfig-<hosted_cluster_name>
  5. After the agents reach the added-to-existing-cluster state, verify that you can see the OpenShift Container Platform nodes in the hosted cluster by entering the following command:

    $ oc --kubeconfig kubeconfig-<hosted_cluster_name> get nodes

    Example output

    NAME           STATUS   ROLES    AGE     VERSION
    ocp-worker-1   Ready    worker   5m41s   v1.24.0+3882f8f
    ocp-worker-2   Ready    worker   6m3s    v1.24.0+3882f8f

    Cluster Operators start to reconcile by adding workloads to the nodes.

  6. Enter the following command to verify that two machines were created when you scaled up the NodePool object:

    $ oc -n <hosted_control_plane_namespace> get machines

    Example output

    NAME                            CLUSTER               NODENAME       PROVIDERID                                     PHASE     AGE   VERSION
    hypercluster1-c96b6f675-m5vch   hypercluster1-b2qhl   ocp-worker-1   agent://da503cf1-a347-44f2-875c-4960ddb04091   Running   15m   4.x.z
    hypercluster1-c96b6f675-tl42p   hypercluster1-b2qhl   ocp-worker-2   agent://4dac1ab2-7dd5-4894-a220-6a3473b67ee6   Running   15m   4.x.z

    The clusterversion reconcile process eventually reaches a point where only Ingress and Console cluster operators are missing.

  7. Enter the following command:

    $ oc --kubeconfig kubeconfig-<hosted_cluster_name> get clusterversion,co

    Example output

    NAME                                         VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
    clusterversion.config.openshift.io/version             False       True          40m     Unable to apply 4.x.z: the cluster operator console has not yet successfully rolled out
    
    NAME                                                                             VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
    clusteroperator.config.openshift.io/console                                      4.x.z     False       False         False      11m     RouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.hypercluster1.domain.com): Get "https://console-openshift-console.apps.hypercluster1.domain.com": dial tcp 10.19.3.29:443: connect: connection refused
    clusteroperator.config.openshift.io/csi-snapshot-controller                      4.x.z     True        False         False      10m
    clusteroperator.config.openshift.io/dns                                          4.x.z     True        False         False      9m16s

5.4.2.1. Adding node pools

You can create node pools for a hosted cluster by specifying a name, number of replicas, and any additional information, such as an agent label selector.

Procedure

  1. To create a node pool, enter the following information:

    $ hcp create nodepool agent \
      --cluster-name <hosted_cluster_name> \ 1
      --name <nodepool_name> \ 2
      --node-count <worker_node_count> \ 3
      --agentLabelSelector size=medium 4
    1
    Replace <hosted_cluster_name> with your hosted cluster name.
    2
    Replace <nodepool_name> with the name of your node pool, for example, <hosted_cluster_name>-extra-cpu.
    3
    Replace <worker_node_count> with the worker node count, for example, 2.
    4
    The --agentLabelSelector flag is optional. The node pool uses agents with the size=medium label.
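
    If your agents do not have that label yet, you can add it on the management cluster before you create the node pool. The size=medium key and value mirror the example from the previous callout:

    $ oc -n <hosted_control_plane_namespace> label agent <agent_name> size=medium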
  2. Check the status of the node pool by listing nodepool resources in the clusters namespace:

    $ oc get nodepools --namespace clusters
  3. Extract the admin-kubeconfig secret by entering the following command:

    $ oc extract -n <hosted_control_plane_namespace> secret/admin-kubeconfig --to=./hostedcluster-secrets --confirm

    Example output

    hostedcluster-secrets/kubeconfig

  4. After some time, you can check the status of the node pool by entering the following command:

    $ oc --kubeconfig ./hostedcluster-secrets get nodes

Verification

  • Verify that the number of available node pools matches the number of expected node pools by entering this command:

    $ oc get nodepools --namespace clusters

5.4.2.2. Enabling node auto-scaling for the hosted cluster

When you need more capacity in your hosted cluster and spare agents are available, you can enable auto-scaling to install new worker nodes.

Procedure

  1. To enable auto-scaling, enter the following command:

    $ oc -n <hosted_cluster_namespace> patch nodepool <hosted_cluster_name> --type=json -p '[{"op": "remove", "path": "/spec/replicas"},{"op":"add", "path": "/spec/autoScaling", "value": { "max": 5, "min": 2 }}]'
    Note

    In the example, the minimum number of nodes is 2, and the maximum is 5. The maximum number of nodes that you can add might be bound by your platform. For example, if you use the Agent platform, the maximum number of nodes is bound by the number of available agents.
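
    You can verify that the node pool is now configured for auto-scaling by inspecting its spec, for example:

    $ oc get nodepool -n <hosted_cluster_namespace> <hosted_cluster_name> -o yaml | grep -A 2 autoScaling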

  2. Create a workload that requires a new node.

    1. Create a YAML file that contains the workload configuration, by using the following example:

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        creationTimestamp: null
        labels:
          app: reversewords
        name: reversewords
        namespace: default
      spec:
        replicas: 40
        selector:
          matchLabels:
            app: reversewords
        strategy: {}
        template:
          metadata:
            creationTimestamp: null
            labels:
              app: reversewords
          spec:
            containers:
            - image: quay.io/mavazque/reversewords:latest
              name: reversewords
              resources:
                requests:
                  memory: 2Gi
      status: {}
    2. Save the file as workload-config.yaml.
    3. Apply the YAML by entering the following command:

      $ oc apply -f workload-config.yaml
  3. Extract the admin-kubeconfig secret by entering the following command:

    $ oc extract -n <hosted_cluster_namespace> secret/<hosted_cluster_name>-admin-kubeconfig --to=./hostedcluster-secrets --confirm

    Example output

    hostedcluster-secrets/kubeconfig

  4. You can check if new nodes are in the Ready status by entering the following command:

    $ oc --kubeconfig ./hostedcluster-secrets get nodes
  5. To remove the node, delete the workload by entering the following command:

    $ oc --kubeconfig ./hostedcluster-secrets -n <namespace> delete deployment <deployment_name>
  6. Wait for several minutes to pass without requiring the additional capacity. On the Agent platform, the agent is decommissioned and can be reused. You can confirm that the node was removed by entering the following command:

    $ oc --kubeconfig ./hostedcluster-secrets get nodes
Note

For IBM Z agents, compute nodes are detached from the cluster only for IBM Z with KVM agents. For z/VM and LPAR, you must delete the compute nodes manually.

Agents can be reused only for IBM Z with KVM. For z/VM and LPAR, re-create the agents to use them as compute nodes.

5.4.2.3. Disabling node auto-scaling for the hosted cluster

To disable node auto-scaling, complete the following procedure.

Procedure

  • Enter the following command to disable node auto-scaling for the hosted cluster:

    $ oc -n <hosted_cluster_namespace> patch nodepool <hosted_cluster_name> --type=json -p '[{"op":"remove", "path": "/spec/autoScaling"},{"op": "add", "path": "/spec/replicas", "value": <specify_value_to_scale_replicas>}]'

    The command removes the spec.autoScaling field from the NodePool resource, adds the spec.replicas field, and sets spec.replicas to the integer value that you specify.
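
    For example, to scale the node pool to a fixed size of three replicas after you disable auto-scaling, where 3 is only a sample value:

    $ oc -n <hosted_cluster_namespace> patch nodepool <hosted_cluster_name> --type=json -p '[{"op":"remove", "path": "/spec/autoScaling"},{"op": "add", "path": "/spec/replicas", "value": 3}]'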

5.4.3. Handling ingress in a hosted cluster on bare metal

Every OpenShift Container Platform cluster has a default application Ingress Controller that typically has an external DNS record associated with it. For example, if you create a hosted cluster named example with the base domain krnl.es, you can expect the wildcard domain *.apps.example.krnl.es to be routable.

Procedure

To set up a load balancer and wildcard DNS record for the *.apps domain, perform the following actions on your guest cluster:

  1. Deploy MetalLB by creating a YAML file that contains the configuration for the MetalLB Operator:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: metallb
      labels:
        openshift.io/cluster-monitoring: "true"
      annotations:
        workload.openshift.io/allowed: management
    ---
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: metallb-operator-operatorgroup
      namespace: metallb
    ---
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: metallb-operator
      namespace: metallb
    spec:
      channel: "stable"
      name: metallb-operator
      source: redhat-operators
      sourceNamespace: openshift-marketplace
  2. Save the file as metallb-operator-config.yaml.
  3. Enter the following command to apply the configuration:

    $ oc apply -f metallb-operator-config.yaml
  4. After the Operator is running, create the MetalLB instance:

    1. Create a YAML file that contains the configuration for the MetalLB instance:

      apiVersion: metallb.io/v1beta1
      kind: MetalLB
      metadata:
        name: metallb
        namespace: metallb
    2. Save the file as metallb-instance-config.yaml.
    3. Create the MetalLB instance by entering this command:

      $ oc apply -f metallb-instance-config.yaml
  5. Create an IPAddressPool resource with a single IP address. This IP address must be on the same subnet as the network that the cluster nodes use.

    1. Create a file, such as ipaddresspool.yaml, with content like the following example:

      apiVersion: metallb.io/v1beta1
      kind: IPAddressPool
      metadata:
        namespace: metallb
        name: <ip_address_pool_name> 1
      spec:
        addresses:
          - <ingress_ip>-<ingress_ip> 2
        autoAssign: false
      1
      Specify the IPAddressPool resource name.
      2
      Specify the IP address for your environment. For example, 192.168.122.23.
    2. Apply the configuration for the IP address pool by entering the following command:

      $ oc apply -f ipaddresspool.yaml
  6. Create an L2 advertisement.

    1. Create a file, such as l2advertisement.yaml, with content like the following example:

      apiVersion: metallb.io/v1beta1
      kind: L2Advertisement
      metadata:
        name: <l2_advertisement_name> 1
        namespace: metallb
      spec:
        ipAddressPools:
         - <ip_address_pool_name> 2
      1
      Specify the L2Advertisement resource name.
      2
      Specify the IPAddressPool resource name.
    2. Apply the configuration by entering the following command:

      $ oc apply -f l2advertisement.yaml
  7. After you create a service of the LoadBalancer type, MetalLB adds an external IP address for the service.

    1. Configure a new load balancer service that routes ingress traffic to the ingress deployment by creating a YAML file named metallb-loadbalancer-service.yaml:

      kind: Service
      apiVersion: v1
      metadata:
        annotations:
          metallb.universe.tf/address-pool: ingress-public-ip
        name: metallb-ingress
        namespace: openshift-ingress
      spec:
        ports:
          - name: http
            protocol: TCP
            port: 80
            targetPort: 80
          - name: https
            protocol: TCP
            port: 443
            targetPort: 443
        selector:
          ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default
        type: LoadBalancer
    2. Save the metallb-loadbalancer-service.yaml file.
    3. Enter the following command to apply the YAML configuration:

      $ oc apply -f metallb-loadbalancer-service.yaml
    4. Enter the following command to reach the OpenShift Container Platform console:

      $ curl -kI https://console-openshift-console.apps.example.krnl.es

      Example output

      HTTP/1.1 200 OK

    5. Check the clusterversion and clusteroperator values to verify that everything is running. Enter the following command:

      $ oc --kubeconfig <hosted_cluster_name>.kubeconfig get clusterversion,co

      Example output

      NAME                                         VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
      clusterversion.config.openshift.io/version   4.x.y      True        False        3m32s   Cluster version is 4.x.y
      
      NAME                                                                             VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
      clusteroperator.config.openshift.io/console                                      4.x.y     True        False         False      3m50s
      clusteroperator.config.openshift.io/ingress                                      4.x.y     True        False         False      53m

      Replace <4.x.y> with the supported OpenShift Container Platform version that you want to use, for example, 4.17.0-multi.
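
      The wildcard DNS record for the *.apps domain must resolve to the IP address that MetalLB assigns to the load balancer service. The following zone file entry is a minimal sketch that assumes the example domain krnl.es and the example address 192.168.122.23 from the IPAddressPool step; adapt it to your DNS server:

      *.apps.example.krnl.es.    IN    A    192.168.122.23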

5.4.4. Enabling machine health checks on bare metal

You can enable machine health checks on bare metal to repair and replace unhealthy managed cluster nodes automatically. You must have additional agent machines that are ready to install in the managed cluster.

Consider the following limitations before enabling machine health checks:

  • You cannot modify the MachineHealthCheck object.
  • Machine health checks replace nodes only when at least two nodes stay in the False or Unknown status for more than 8 minutes.

After you enable machine health checks for the managed cluster nodes, the MachineHealthCheck object is created in your hosted cluster.

Procedure

To enable machine health checks in your hosted cluster, modify the NodePool resource. Complete the following steps:

  1. Verify that the spec.nodeDrainTimeout value in your NodePool resource is greater than 0s. Replace <hosted_cluster_namespace> with the name of your hosted cluster namespace and <nodepool_name> with the node pool name. Run the following command:

    $ oc get nodepool -n <hosted_cluster_namespace> <nodepool_name> -o yaml | grep nodeDrainTimeout

    Example output

    nodeDrainTimeout: 30s

  2. If the spec.nodeDrainTimeout value is not greater than 0s, modify the value by running the following command:

    $ oc patch nodepool -n <hosted_cluster_namespace> <nodepool_name> -p '{"spec":{"nodeDrainTimeout": "30m"}}' --type=merge
  3. Enable machine health checks by setting the spec.management.autoRepair field to true in the NodePool resource. Run the following command:

    $ oc patch nodepool -n <hosted_cluster_namespace> <nodepool_name> -p '{"spec": {"management": {"autoRepair":true}}}' --type=merge
  4. Verify that the NodePool resource is updated with the autoRepair: true value by running the following command:

    $ oc get nodepool -n <hosted_cluster_namespace> <nodepool_name> -o yaml | grep autoRepair

5.4.5. Disabling machine health checks on bare metal

To disable machine health checks for the managed cluster nodes, modify the NodePool resource.

Procedure

  1. Disable machine health checks by setting the spec.management.autoRepair field to false in the NodePool resource. Run the following command:

    $ oc patch nodepool -n <hosted_cluster_namespace> <nodepool_name> -p '{"spec": {"management": {"autoRepair":false}}}' --type=merge
  2. Verify that the NodePool resource is updated with the autoRepair: false value by running the following command:

    $ oc get nodepool -n <hosted_cluster_namespace> <nodepool_name> -o yaml | grep autoRepair

5.5. Managing hosted control planes on IBM Power

After you deploy hosted control planes on IBM Power, you can manage a hosted cluster by completing the following tasks.

5.5.1. Creating an InfraEnv resource for hosted control planes on IBM Power

An InfraEnv resource is an environment where hosts that start with the live ISO can join as agents. In this case, the agents are created in the same namespace as your hosted control plane.

You can create an InfraEnv resource for hosted control planes on 64-bit x86 bare metal for IBM Power compute nodes.

Procedure

  1. Create a YAML file to configure an InfraEnv resource. See the following example:

    apiVersion: agent-install.openshift.io/v1beta1
    kind: InfraEnv
    metadata:
      name: <hosted_cluster_name> 1
      namespace: <hosted_control_plane_namespace> 2
    spec:
      cpuArchitecture: ppc64le
      pullSecretRef:
        name: pull-secret
      sshAuthorizedKey: <path_to_ssh_public_key> 3
    1
    Replace <hosted_cluster_name> with the name of your hosted cluster.
    2
    Replace <hosted_control_plane_namespace> with the name of the hosted control plane namespace, for example, clusters-hosted.
    3
    Replace <path_to_ssh_public_key> with the path to your SSH public key. The default file path is ~/.ssh/id_rsa.pub.
  2. Save the file as infraenv-config.yaml.
  3. Apply the configuration by entering the following command:

    $ oc apply -f infraenv-config.yaml
  4. To fetch the URL to download the live ISO, which allows IBM Power machines to join as agents, enter the following command:

    $ oc -n <hosted_control_plane_namespace> get InfraEnv <hosted_cluster_name> -o json
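
    To print only the download URL instead of the full JSON output, you can filter on the status.isoDownloadURL field, which is described in the next section:

    $ oc -n <hosted_control_plane_namespace> get infraenv <hosted_cluster_name> -o jsonpath='{.status.isoDownloadURL}'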

5.5.2. Adding IBM Power agents to the InfraEnv resource

You can add agents by manually configuring the machine to start with the live ISO.

Procedure

  1. Download the live ISO and use it to start a bare metal or a virtual machine (VM) host. You can find the URL for the live ISO in the status.isoDownloadURL field, in the InfraEnv resource. At startup, the host communicates with the Assisted Service and registers as an agent in the same namespace as the InfraEnv resource.
  2. To list the agents and some of their properties, enter the following command:

    $ oc -n <hosted_control_plane_namespace> get agents

    Example output

    NAME                                   CLUSTER   APPROVED   ROLE          STAGE
    86f7ac75-4fc4-4b36-8130-40fa12602218                        auto-assign
    e57a637f-745b-496e-971d-1abbf03341ba                        auto-assign

  3. After each agent is created, you can optionally set the installation_disk_id and hostname for an agent:

    1. To set the installation_disk_id field for an agent, enter the following command:

      $ oc -n <hosted_control_plane_namespace> patch agent <agent_name> -p '{"spec":{"installation_disk_id":"<installation_disk_id>","approved":true}}' --type merge
    2. To set the hostname field for an agent, enter the following command:

      $ oc -n <hosted_control_plane_namespace> patch agent <agent_name> -p '{"spec":{"hostname":"<hostname>","approved":true}}' --type merge
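
    For example, using the first agent name from the earlier output, a hypothetical installation disk path, and a sample hostname:

      $ oc -n <hosted_control_plane_namespace> patch agent 86f7ac75-4fc4-4b36-8130-40fa12602218 -p '{"spec":{"installation_disk_id":"/dev/sda","approved":true}}' --type merge

      $ oc -n <hosted_control_plane_namespace> patch agent 86f7ac75-4fc4-4b36-8130-40fa12602218 -p '{"spec":{"hostname":"worker-zvm-0.hostedn.example.com","approved":true}}' --type merge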

Verification

  • To verify that the agents are approved for use, enter the following command:

    $ oc -n <hosted_control_plane_namespace> get agents

    Example output

    NAME                                   CLUSTER   APPROVED   ROLE          STAGE
    86f7ac75-4fc4-4b36-8130-40fa12602218             true       auto-assign
    e57a637f-745b-496e-971d-1abbf03341ba             true       auto-assign

5.5.3. Scaling the NodePool object for a hosted cluster on IBM Power

The NodePool object is created when you create a hosted cluster. By scaling the NodePool object, you can add more compute nodes to hosted control planes.

Procedure

  1. Run the following command to scale the NodePool object to two nodes:

    $ oc -n <hosted_cluster_namespace> scale nodepool <nodepool_name> --replicas 2

    The Cluster API agent provider randomly picks two agents that are then assigned to the hosted cluster. Those agents go through different states and finally join the hosted cluster as OpenShift Container Platform nodes. The agents pass through the transition phases in the following order:

    • binding
    • discovering
    • insufficient
    • installing
    • installing-in-progress
    • added-to-existing-cluster
  2. Run the following command to see the status of a specific scaled agent:

    $ oc -n <hosted_control_plane_namespace> get agent -o jsonpath='{range .items[*]}BMH: {@.metadata.labels.agent-install\.openshift\.io/bmh} Agent: {@.metadata.name} State: {@.status.debugInfo.state}{"\n"}{end}'

    Example output

    BMH: Agent: 50c23cda-cedc-9bbd-bcf1-9b3a5c75804d State: known-unbound
    BMH: Agent: 5e498cd3-542c-e54f-0c58-ed43e28b568a State: insufficient

  3. Run the following command to see the transition phases:

    $ oc -n <hosted_control_plane_namespace> get agent

    Example output

    NAME                                   CLUSTER            APPROVED       ROLE          STAGE
    50c23cda-cedc-9bbd-bcf1-9b3a5c75804d   hosted-forwarder   true           auto-assign
    5e498cd3-542c-e54f-0c58-ed43e28b568a                      true           auto-assign
    da503cf1-a347-44f2-875c-4960ddb04091   hosted-forwarder   true           auto-assign

  4. Run the following command to generate the kubeconfig file to access the hosted cluster:

    $ hcp create kubeconfig --namespace <hosted_cluster_namespace> --name <hosted_cluster_name> > <hosted_cluster_name>.kubeconfig
  5. After the agents reach the added-to-existing-cluster state, verify that you can see the OpenShift Container Platform nodes by entering the following command:

    $ oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes

    Example output

    NAME                             STATUS   ROLES    AGE      VERSION
    worker-zvm-0.hostedn.example.com Ready    worker   5m41s    v1.24.0+3882f8f
    worker-zvm-1.hostedn.example.com Ready    worker   6m3s     v1.24.0+3882f8f

  6. Enter the following command to verify that two machines were created when you scaled up the NodePool object:

    $ oc -n <hosted_control_plane_namespace> get machine.cluster.x-k8s.io

    Example output

    NAME                                CLUSTER                  NODENAME                           PROVIDERID                                     PHASE     AGE   VERSION
    hosted-forwarder-79558597ff-5tbqp   hosted-forwarder-crqq5   worker-zvm-0.hostedn.example.com   agent://50c23cda-cedc-9bbd-bcf1-9b3a5c75804d   Running   41h   4.15.0
    hosted-forwarder-79558597ff-lfjfk   hosted-forwarder-crqq5   worker-zvm-1.hostedn.example.com   agent://5e498cd3-542c-e54f-0c58-ed43e28b568a   Running   41h   4.15.0

  7. Run the following command to check the cluster version:

    $ oc --kubeconfig <hosted_cluster_name>.kubeconfig get clusterversion

    Example output

    NAME                                         VERSION       AVAILABLE   PROGRESSING   SINCE   STATUS
    clusterversion.config.openshift.io/version   4.15.0        True        False         40h     Cluster version is 4.15.0

  8. Run the following command to check the Cluster Operator status:

    $ oc --kubeconfig <hosted_cluster_name>.kubeconfig get clusteroperators

    For each component of your cluster, the output shows the following Cluster Operator statuses:

    • NAME
    • VERSION
    • AVAILABLE
    • PROGRESSING
    • DEGRADED
    • SINCE
    • MESSAGE