Chapter 8. Creating infrastructure machine sets
You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API.
Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation.
To view the platform type for your cluster, run the following command:
$ oc get infrastructure cluster -o jsonpath='{.status.platform}'
You can use infrastructure machine sets to create machines that host only infrastructure components, such as the default router, the integrated container image registry, and the components for cluster metrics and monitoring. These infrastructure machines are not counted toward the total number of subscriptions that are required to run the environment.
In a production deployment, it is recommended that you deploy at least three machine sets to hold infrastructure components. Red Hat OpenShift Service Mesh deploys Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. This configuration requires three different machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability.
8.1. OpenShift Container Platform infrastructure components
Each self-managed Red Hat OpenShift subscription includes entitlements for OpenShift Container Platform and other OpenShift-related components. These entitlements are included for running OpenShift Container Platform control plane and infrastructure workloads and do not need to be accounted for during sizing.
To qualify as an infrastructure node and use the included entitlement, only components that are supporting the cluster, and not part of an end-user application, can run on those instances. Examples include the following components:
- Kubernetes and OpenShift Container Platform control plane services
- The default router
- The integrated container image registry
- The HAProxy-based Ingress Controller
- The cluster metrics collection, or monitoring service, including components for monitoring user-defined projects
- Cluster aggregated logging
- Red Hat Quay
- Red Hat OpenShift Data Foundation
- Red Hat Advanced Cluster Management for Kubernetes
- Red Hat Advanced Cluster Security for Kubernetes
- Red Hat OpenShift GitOps
- Red Hat OpenShift Pipelines
- Red Hat OpenShift Service Mesh
Any node that runs any other container, pod, or component is a worker node that your subscription must cover.
For information about infrastructure nodes and which components can run on infrastructure nodes, see the "Red Hat OpenShift control plane and infrastructure nodes" section in the OpenShift sizing and subscription guide for enterprise Kubernetes document.
To create an infrastructure node, you can use a machine set, label the node, or use a machine config pool.
8.2. Creating infrastructure machine sets for production environments
In a production deployment, it is recommended that you deploy at least three compute machine sets to hold infrastructure components. Red Hat OpenShift Service Mesh deploys Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. A configuration like this requires three different compute machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability.
8.2.1. Creating infrastructure machine sets for different clouds
Use the sample compute machine set for your cloud.
8.2.1.1. Sample YAML for a compute machine set custom resource on Alibaba Cloud
This sample YAML defines a compute machine set that runs in a specified Alibaba Cloud zone in a region and creates nodes that are labeled with node-role.kubernetes.io/infra: "".
In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id>
    machine.openshift.io/cluster-api-machine-role: <infra>
    machine.openshift.io/cluster-api-machine-type: <infra>
  name: <infrastructure_id>-<infra>-<zone>
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone>
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machine-role: <infra>
        machine.openshift.io/cluster-api-machine-type: <infra>
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone>
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/infra: ""
      providerSpec:
        value:
          apiVersion: machine.openshift.io/v1
          credentialsSecret:
            name: alibabacloud-credentials
          imageId: <image_id>
          instanceType: <instance_type>
          kind: AlibabaCloudMachineProviderConfig
          ramRoleName: <infrastructure_id>-role-worker
          regionId: <region>
          resourceGroup:
            id: <resource_group_id>
            type: ID
          securityGroups:
          - tags:
            - Key: Name
              Value: <infrastructure_id>-sg-<role>
            type: Tags
          systemDisk:
            category: cloud_essd
            size: <disk_size>
          tag:
          - Key: kubernetes.io/cluster/<infrastructure_id>
            Value: owned
          userDataSecret:
            name: <user_data_secret>
          vSwitch:
            tags:
            - Key: Name
              Value: <infrastructure_id>-vswitch-<zone>
            type: Tags
          vpcId: ""
          zoneId: <zone>
      taints:
      - key: node-role.kubernetes.io/infra
        effect: NoSchedule
- 1 5 7: Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI (oc) installed, you can obtain the infrastructure ID by running the following command:
  $ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
- 2 3 8 9: Specify the <infra> node label.
- 4 6 10: Specify the infrastructure ID, <infra> node label, and zone.
- 11: Specify the image to use. Use an image from an existing default compute machine set for the cluster.
- 12: Specify the instance type you want to use for the compute machine set.
- 13: Specify the name of the RAM role to use for the compute machine set. Use the value that the installer populates in the default compute machine set.
- 14: Specify the region to place machines on.
- 15: Specify the resource group and type for the cluster. You can use the value that the installer populates in the default compute machine set, or specify a different one.
- 16 18 20: Specify the tags to use for the compute machine set. Minimally, you must include the tags shown in this example, with appropriate values for your cluster. You can include additional tags, including the tags that the installer populates in the default compute machine set it creates, as needed.
- 17: Specify the type and size of the root disk. Use the category value that the installer populates in the default compute machine set it creates. If required, specify a different value in gigabytes for size.
- 19: Specify the name of the secret in the user data YAML file that is in the openshift-machine-api namespace. Use the value that the installer populates in the default compute machine set.
- 21: Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify.
- 22: Specify a taint to prevent user workloads from being scheduled on infra nodes.
  Note: After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled. You must either delete or add a toleration on misscheduled DNS pods.
8.2.1.1.1. Machine set parameters for Alibaba Cloud usage statistics
The default compute machine sets that the installer creates for Alibaba Cloud clusters include nonessential tag values that Alibaba Cloud uses internally to track usage statistics. These tags are populated in the securityGroups, tag, and vSwitch parameters of the spec.template.spec.providerSpec.value list.
When creating compute machine sets to deploy additional machines, you must include the required Kubernetes tags. The usage statistics tags are applied by default, even if they are not specified in the compute machine sets you create. You can also include additional tags as needed.
The following YAML snippets indicate which tags in the default compute machine sets are optional and which are required.
Tags in spec.template.spec.providerSpec.value.securityGroups
spec:
  template:
    spec:
      providerSpec:
        value:
          securityGroups:
          - tags:
            - Key: kubernetes.io/cluster/<infrastructure_id>  # required
              Value: owned
            - Key: GISV                                       # optional: usage statistics
              Value: ocp
            - Key: sigs.k8s.io/cloud-provider-alibaba/origin  # optional: usage statistics
              Value: ocp
            - Key: Name                                       # required
              Value: <infrastructure_id>-sg-<role>
            type: Tags
Tags in spec.template.spec.providerSpec.value.tag
spec:
  template:
    spec:
      providerSpec:
        value:
          tag:
          - Key: kubernetes.io/cluster/<infrastructure_id>  # required
            Value: owned
          - Key: GISV                                       # optional: usage statistics
            Value: ocp
          - Key: sigs.k8s.io/cloud-provider-alibaba/origin  # optional: usage statistics
            Value: ocp
Tags in spec.template.spec.providerSpec.value.vSwitch
spec:
  template:
    spec:
      providerSpec:
        value:
          vSwitch:
            tags:
            - Key: kubernetes.io/cluster/<infrastructure_id>  # required
              Value: owned
            - Key: GISV                                       # optional: usage statistics
              Value: ocp
            - Key: sigs.k8s.io/cloud-provider-alibaba/origin  # optional: usage statistics
              Value: ocp
            - Key: Name                                       # required
              Value: <infrastructure_id>-vswitch-<zone>
            type: Tags
8.2.1.2. Sample YAML for a compute machine set custom resource on AWS
This sample YAML defines a compute machine set that runs in the us-east-1a Amazon Web Services (AWS) zone and creates nodes that are labeled with node-role.kubernetes.io/infra: "".
In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and infra is the node label to add.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id>
  name: <infrastructure_id>-infra-<zone>
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone>
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machine-role: infra
        machine.openshift.io/cluster-api-machine-type: infra
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone>
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/infra: ""
      providerSpec:
        value:
          ami:
            id: ami-046fe691f52a953f9
          apiVersion: machine.openshift.io/v1beta1
          blockDevices:
          - ebs:
              iops: 0
              volumeSize: 120
              volumeType: gp2
          credentialsSecret:
            name: aws-cloud-credentials
          deviceIndex: 0
          iamInstanceProfile:
            id: <infrastructure_id>-worker-profile
          instanceType: m6i.large
          kind: AWSMachineProviderConfig
          placement:
            availabilityZone: <zone>
            region: <region>
          securityGroups:
          - filters:
            - name: tag:Name
              values:
              - <infrastructure_id>-worker-sg
          subnet:
            filters:
            - name: tag:Name
              values:
              - <infrastructure_id>-private-<zone>
          tags:
          - name: kubernetes.io/cluster/<infrastructure_id>
            value: owned
          - name: <custom_tag_name>
            value: <custom_tag_value>
          userDataSecret:
            name: worker-user-data
      taints:
      - key: node-role.kubernetes.io/infra
        effect: NoSchedule
- 1 3 5 11 14 16: Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
  $ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
- 2 4 8: Specify the infrastructure ID, infra role node label, and zone.
- 6 7 9: Specify the infra role node label.
- 10: Specify a valid Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) for your AWS zone for your OpenShift Container Platform nodes. If you want to use an AWS Marketplace image, you must complete the OpenShift Container Platform subscription from the AWS Marketplace to obtain an AMI ID for your region. You can obtain the AMI ID that an existing compute machine set uses by running the following command:
  $ oc -n openshift-machine-api \
      -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{"\n"}' \
      get machineset/<infrastructure_id>-<role>-<zone>
- 17 18: Optional: Specify custom tag data for your cluster. For example, you might add an admin contact email address by specifying a name:value pair of Email:admin-email@example.com.
  Note: Custom tags can also be specified during installation in the install-config.yml file. If the install-config.yml file and the machine set include a tag with the same name data, the value for the tag from the machine set takes priority over the value for the tag in the install-config.yml file.
- 12: Specify the zone, for example, us-east-1a.
- 13: Specify the region, for example, us-east-1.
- 15: Specify the infrastructure ID and zone.
- 19: Specify a taint to prevent user workloads from being scheduled on infra nodes.
  Note: After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled. You must either delete or add a toleration on misscheduled DNS pods.
Machine sets running on AWS support non-guaranteed Spot Instances. You can save on costs by using Spot Instances at a lower price compared to On-Demand Instances on AWS. Configure Spot Instances by adding spotMarketOptions to the MachineSet YAML file.
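For reference, a minimal sketch of the placement; an empty spotMarketOptions object requests Spot Instances with the default maximum price:

providerSpec:
  value:
    spotMarketOptions: {}
    # ... other provider fields remain unchanged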
8.2.1.3. Sample YAML for a compute machine set custom resource on Azure
This sample YAML defines a compute machine set that runs in the 1 Microsoft Azure zone in a region and creates nodes that are labeled with node-role.kubernetes.io/infra: "".
In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and infra is the node label to add.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id>
    machine.openshift.io/cluster-api-machine-role: infra
    machine.openshift.io/cluster-api-machine-type: infra
  name: <infrastructure_id>-infra-<region>
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region>
  template:
    metadata:
      creationTimestamp: null
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machine-role: infra
        machine.openshift.io/cluster-api-machine-type: infra
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region>
    spec:
      metadata:
        creationTimestamp: null
        labels:
          machine.openshift.io/cluster-api-machineset: <machineset_name>
          node-role.kubernetes.io/infra: ""
      providerSpec:
        value:
          apiVersion: azureproviderconfig.openshift.io/v1beta1
          credentialsSecret:
            name: azure-cloud-credentials
            namespace: openshift-machine-api
          image:
            offer: ""
            publisher: ""
            resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/galleries/gallery_<infrastructure_id>/images/<infrastructure_id>-gen2/versions/latest
            sku: ""
            version: ""
          internalLoadBalancer: ""
          kind: AzureMachineProviderSpec
          location: <region>
          managedIdentity: <infrastructure_id>-identity
          metadata:
            creationTimestamp: null
          natRule: null
          networkResourceGroup: ""
          osDisk:
            diskSizeGB: 128
            managedDisk:
              storageAccountType: Premium_LRS
            osType: Linux
          publicIP: false
          publicLoadBalancer: ""
          resourceGroup: <infrastructure_id>-rg
          sshPrivateKey: ""
          sshPublicKey: ""
          tags:
            <custom_tag_name_1>: <custom_tag_value_1>
            <custom_tag_name_2>: <custom_tag_value_2>
          subnet: <infrastructure_id>-<role>-subnet
          userDataSecret:
            name: worker-user-data
          vmSize: Standard_D4s_v3
          vnet: <infrastructure_id>-vnet
          zone: "1"
      taints:
      - key: node-role.kubernetes.io/infra
        effect: NoSchedule
- 1: Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
  $ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
  You can obtain the subnet by running the following command:
  $ oc -n openshift-machine-api \
      -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{"\n"}' \
      get machineset/<infrastructure_id>-worker-centralus1
  You can obtain the vnet by running the following command:
  $ oc -n openshift-machine-api \
      -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{"\n"}' \
      get machineset/<infrastructure_id>-worker-centralus1
- 2: Specify the infra node label.
- 3: Specify the infrastructure ID, infra node label, and region.
- 4: Specify the image details for your compute machine set. If you want to use an Azure Marketplace image, see "Using the Azure Marketplace offering".
- 5: Specify an image that is compatible with your instance type. The Hyper-V generation V2 images created by the installation program have a -gen2 suffix, while V1 images have the same name without the suffix.
- 6: Specify the region to place machines on.
- 7: Optional: Specify custom tags in your machine set. Provide the tag name in the <custom_tag_name> field and the corresponding tag value in the <custom_tag_value> field.
- 8: Specify the zone within your region to place machines on. Ensure that your region supports the zone that you specify.
  Important: If your region supports availability zones, you must specify the zone. Specifying the zone avoids volume node affinity failure when a pod requires a persistent volume attachment. To do this, you can create a compute machine set for each zone in the same region.
- 9: Specify a taint to prevent user workloads from being scheduled on infra nodes.
  Note: After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled. You must either delete or add a toleration on misscheduled DNS pods.
Machine sets running on Azure support non-guaranteed Spot VMs. You can save on costs by using Spot VMs at a lower price compared to standard VMs on Azure. You can configure Spot VMs by adding spotVMOptions to the MachineSet YAML file.
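For reference, a minimal sketch of the placement; an empty spotVMOptions object requests Spot VMs with the default maximum price:

providerSpec:
  value:
    spotVMOptions: {}
    # ... other provider fields remain unchanged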
8.2.1.4. Sample YAML for a compute machine set custom resource on Azure Stack Hub
This sample YAML defines a compute machine set that runs in the 1 zone of a region and creates nodes that are labeled with node-role.kubernetes.io/infra: "".
In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id>
    machine.openshift.io/cluster-api-machine-role: <infra>
    machine.openshift.io/cluster-api-machine-type: <infra>
  name: <infrastructure_id>-infra-<region>
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region>
  template:
    metadata:
      creationTimestamp: null
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machine-role: <infra>
        machine.openshift.io/cluster-api-machine-type: <infra>
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region>
    spec:
      metadata:
        creationTimestamp: null
        labels:
          node-role.kubernetes.io/infra: ""
      taints:
      - key: node-role.kubernetes.io/infra
        effect: NoSchedule
      providerSpec:
        value:
          apiVersion: machine.openshift.io/v1beta1
          availabilitySet: <availability_set>
          credentialsSecret:
            name: azure-cloud-credentials
            namespace: openshift-machine-api
          image:
            offer: ""
            publisher: ""
            resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id>
            sku: ""
            version: ""
          internalLoadBalancer: ""
          kind: AzureMachineProviderSpec
          location: <region>
          managedIdentity: <infrastructure_id>-identity
          metadata:
            creationTimestamp: null
          natRule: null
          networkResourceGroup: ""
          osDisk:
            diskSizeGB: 128
            managedDisk:
              storageAccountType: Premium_LRS
            osType: Linux
          publicIP: false
          publicLoadBalancer: ""
          resourceGroup: <infrastructure_id>-rg
          sshPrivateKey: ""
          sshPublicKey: ""
          subnet: <infrastructure_id>-<role>-subnet
          userDataSecret:
            name: worker-user-data
          vmSize: Standard_DS4_v2
          vnet: <infrastructure_id>-vnet
          zone: "1"
- 1 5 7 14 16 17 18 21: Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
  $ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
  You can obtain the subnet by running the following command:
  $ oc -n openshift-machine-api \
      -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{"\n"}' \
      get machineset/<infrastructure_id>-worker-centralus1
  You can obtain the vnet by running the following command:
  $ oc -n openshift-machine-api \
      -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{"\n"}' \
      get machineset/<infrastructure_id>-worker-centralus1
- 2 3 8 9 11 19 20: Specify the <infra> node label.
- 4 6 10: Specify the infrastructure ID, <infra> node label, and region.
- 12: Specify a taint to prevent user workloads from being scheduled on infra nodes.
  Note: After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled. You must either delete or add a toleration on misscheduled DNS pods.
- 13: Specify the availability set for the cluster.
- 15: Specify the region to place machines on.
- 22: Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify.
Machine sets running on Azure Stack Hub do not support non-guaranteed Spot VMs.
8.2.1.5. Sample YAML for a compute machine set custom resource on IBM Cloud
This sample YAML defines a compute machine set that runs in a specified IBM Cloud® zone in a region and creates nodes that are labeled with node-role.kubernetes.io/infra: "".
In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id>
    machine.openshift.io/cluster-api-machine-role: <infra>
    machine.openshift.io/cluster-api-machine-type: <infra>
  name: <infrastructure_id>-<infra>-<region>
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<region>
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machine-role: <infra>
        machine.openshift.io/cluster-api-machine-type: <infra>
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<region>
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/infra: ""
      providerSpec:
        value:
          apiVersion: ibmcloudproviderconfig.openshift.io/v1beta1
          credentialsSecret:
            name: ibmcloud-credentials
          image: <infrastructure_id>-rhcos
          kind: IBMCloudMachineProviderSpec
          primaryNetworkInterface:
            securityGroups:
            - <infrastructure_id>-sg-cluster-wide
            - <infrastructure_id>-sg-openshift-net
            subnet: <infrastructure_id>-subnet-compute-<zone>
          profile: <instance_profile>
          region: <region>
          resourceGroup: <resource_group>
          userDataSecret:
            name: <role>-user-data
          vpc: <vpc_name>
          zone: <zone>
      taints:
      - key: node-role.kubernetes.io/infra
        effect: NoSchedule
- 1 5 7: The infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
  $ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
- 2 3 8 9 16: The <infra> node label.
- 4 6 10: The infrastructure ID, <infra> node label, and region.
- 11: The custom Red Hat Enterprise Linux CoreOS (RHCOS) image that was used for cluster installation.
- 12: The infrastructure ID and zone within your region to place machines on. Be sure that your region supports the zone that you specify.
- 13: Specify the IBM Cloud® instance profile.
- 14: Specify the region to place machines on.
- 15: The resource group that machine resources are placed in. This is either an existing resource group specified at installation time, or an installer-created resource group named based on the infrastructure ID.
- 17: The VPC name.
- 18: Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify.
- 19: The taint to prevent user workloads from being scheduled on infra nodes.
  Note: After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled. You must either delete or add a toleration on misscheduled DNS pods.
8.2.1.6. Sample YAML for a compute machine set custom resource on Google Cloud
This sample YAML defines a compute machine set that runs in Google Cloud and creates nodes that are labeled with node-role.kubernetes.io/infra: "", where infra is the node label to add.
8.2.1.6.1. Values obtained by using the OpenShift CLI
In the following example, you can obtain some of the values for your cluster by using the OpenShift CLI.

Infrastructure ID: The <infrastructure_id> string is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster

Image path: The <path_to_image> string is the path to the image that was used to create the disk. If you have the OpenShift CLI installed, you can obtain the path to the image by running the following command:
$ oc -n openshift-machine-api \
    -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{"\n"}' \
    get machineset/<infrastructure_id>-worker-a
Sample Google Cloud MachineSet values
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id>
  name: <infrastructure_id>-w-a
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a
  template:
    metadata:
      creationTimestamp: null
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machine-role: <infra>
        machine.openshift.io/cluster-api-machine-type: <infra>
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/infra: ""
      providerSpec:
        value:
          apiVersion: gcpprovider.openshift.io/v1beta1
          canIPForward: false
          credentialsSecret:
            name: gcp-cloud-credentials
          deletionProtection: false
          disks:
          - autoDelete: true
            boot: true
            image: <path_to_image>
            labels: null
            sizeGb: 128
            type: pd-ssd
          gcpMetadata:
          - key: <custom_metadata_key>
            value: <custom_metadata_value>
          kind: GCPMachineProviderSpec
          machineType: n1-standard-4
          metadata:
            creationTimestamp: null
          networkInterfaces:
          - network: <infrastructure_id>-network
            subnetwork: <infrastructure_id>-worker-subnet
          projectID: <project_name>
          region: us-central1
          serviceAccounts:
          - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com
            scopes:
            - https://www.googleapis.com/auth/cloud-platform
          tags:
          - <infrastructure_id>-worker
          userDataSecret:
            name: worker-user-data
          zone: us-central1-a
      taints:
      - key: node-role.kubernetes.io/infra
        effect: NoSchedule
- 1: For <infrastructure_id>, specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster.
- 2: For <infra>, specify the <infra> node label.
- 3: Specify the path to the image that is used in current compute machine sets.
  To use a Google Cloud Marketplace image, specify the offer to use:
  - OpenShift Container Platform: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-413-x86-64-202305021736
  - OpenShift Platform Plus: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-opp-413-x86-64-202305021736
  - OpenShift Kubernetes Engine: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-oke-413-x86-64-202305021736
- 4: Optional: Specify custom metadata in the form of a key:value pair. For example use cases, see the Google Cloud documentation for setting custom metadata.
- 5: For <project_name>, specify the name of the Google Cloud project that you use for your cluster.
- 6: Specifies a single service account. Multiple service accounts are not supported.
- 7: Specify a taint to prevent user workloads from being scheduled on infra nodes.
  Note: After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled. You must either delete or add a toleration on misscheduled DNS pods.
Machine sets running on Google Cloud support non-guaranteed preemptible VM instances. You can save on costs by using preemptible VM instances at a lower price compared to normal instances on Google Cloud. You can configure preemptible VM instances by adding preemptible to the MachineSet YAML file.
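For reference, a minimal sketch of the placement within the provider specification:

providerSpec:
  value:
    preemptible: true
    # ... other provider fields remain unchanged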
8.2.1.7. Sample YAML for a compute machine set custom resource on Nutanix
This sample YAML defines a Nutanix compute machine set that creates nodes that are labeled with node-role.kubernetes.io/infra: "".
In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add.
8.2.1.7.1. Values obtained by using the OpenShift CLI
In the following example, you can obtain some of the values for your cluster by using the OpenShift CLI (oc).

Infrastructure ID: The <infrastructure_id> string is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id>
    machine.openshift.io/cluster-api-machine-role: <infra>
    machine.openshift.io/cluster-api-machine-type: <infra>
  name: <infrastructure_id>-<infra>-<zone>
  namespace: openshift-machine-api
  annotations:
    machine.openshift.io/memoryMb: "16384"
    machine.openshift.io/vCPU: "4"
spec:
  replicas: 3
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone>
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machine-role: <infra>
        machine.openshift.io/cluster-api-machine-type: <infra>
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone>
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/infra: ""
      providerSpec:
        value:
          apiVersion: machine.openshift.io/v1
          bootType: ""
          categories:
          - key: <category_name>
            value: <category_value>
          cluster:
            type: uuid
            uuid: <cluster_uuid>
          credentialsSecret:
            name: nutanix-credentials
          image:
            name: <infrastructure_id>-rhcos
            type: name
          kind: NutanixMachineProviderConfig
          memorySize: 16Gi
          project:
            type: name
            name: <project_name>
          subnets:
          - type: uuid
            uuid: <subnet_uuid>
          systemDiskSize: 120Gi
          userDataSecret:
            name: <user_data_secret>
          vcpuSockets: 4
          vcpusPerSocket: 1
      taints:
      - key: node-role.kubernetes.io/infra
        effect: NoSchedule
- 1: For <infrastructure_id>, specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster.
- 2: Specify the <infra> node label.
- 3: Specify the infrastructure ID, <infra> node label, and zone.
- 4: Annotations for the cluster autoscaler.
- 5: Specifies the boot type that the compute machines use. For more information about boot types, see Understanding UEFI, Secure Boot, and TPM in the Virtualized Environment. Valid values are Legacy, SecureBoot, or UEFI. The default is Legacy.
  Note: You must use the Legacy boot type in OpenShift Container Platform 4.14.
- 6: Specify one or more Nutanix Prism categories to apply to compute machines. This stanza requires key and value parameters for a category key-value pair that exists in Prism Central. For more information about categories, see Category management.
- 7: Specify a Nutanix Prism Element cluster configuration. In this example, the cluster type is uuid, so there is a uuid stanza.
- 8: Specify the image to use. Use an image from an existing default compute machine set for the cluster.
- 9: Specify the amount of memory for the cluster in Gi.
- 10: Specify the Nutanix project that you use for your cluster. In this example, the project type is name, so there is a name stanza.
- 11: Specify the size of the system disk in Gi.
- 12: Specify the name of the secret in the user data YAML file that is in the openshift-machine-api namespace. Use the value that the installation program populates in the default compute machine set.
- 13: Specify the number of vCPU sockets.
- 14: Specify the number of vCPUs per socket.
- 15: Specify a taint to prevent user workloads from being scheduled on infra nodes.
  Note: After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled. You must either delete or add a toleration on misscheduled DNS pods.
8.2.1.8. Sample YAML for a compute machine set custom resource on RHOSP
This sample YAML defines a compute machine set that runs on Red Hat OpenStack Platform (RHOSP) and creates nodes that are labeled with node-role.kubernetes.io/infra: "".
In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id>
    machine.openshift.io/cluster-api-machine-role: <infra>
    machine.openshift.io/cluster-api-machine-type: <infra>
  name: <infrastructure_id>-infra
  namespace: openshift-machine-api
spec:
  replicas: <number_of_replicas>
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machine-role: <infra>
        machine.openshift.io/cluster-api-machine-type: <infra>
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra
    spec:
      metadata:
        creationTimestamp: null
        labels:
          node-role.kubernetes.io/infra: ""
      taints:
      - key: node-role.kubernetes.io/infra
        effect: NoSchedule
      providerSpec:
        value:
          apiVersion: openstackproviderconfig.openshift.io/v1alpha1
          cloudName: openstack
          cloudsSecret:
            name: openstack-cloud-credentials
            namespace: openshift-machine-api
          flavor: <nova_flavor>
          image: <glance_image_name_or_location>
          serverGroupID: <optional_UUID_of_server_group>
          kind: OpenstackProviderSpec
          networks:
          - filter: {}
            subnets:
            - filter:
                name: <subnet_name>
                tags: openshiftClusterID=<infrastructure_id>
          primarySubnet: <rhosp_subnet_UUID>
          securityGroups:
          - filter: {}
            name: <infrastructure_id>-worker
          serverMetadata:
            Name: <infrastructure_id>-worker
            openshiftClusterID: <infrastructure_id>
          tags:
          - openshiftClusterID=<infrastructure_id>
          trunk: true
          userDataSecret:
            name: worker-user-data
          availabilityZone: <optional_openstack_availability_zone>
- 1 5 7 14 16 17 18 19: Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
  $ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
- 2 3 8 9 20: Specify the <infra> node label.
- 4 6 10: Specify the infrastructure ID and <infra> node label.
- 11: Specify a taint to prevent user workloads from being scheduled on infra nodes.
  Note: After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled. You must either delete or add a toleration on misscheduled DNS pods.
- 12: To set a server group policy for the MachineSet, enter the value that is returned from creating a server group. For most deployments, anti-affinity or soft-anti-affinity policies are recommended.
- 13: Required for deployments to multiple networks. If deploying to multiple networks, this list must include the network that is used as the primarySubnet value.
- 15: Specify the RHOSP subnet that you want the endpoints of nodes to be published on. Usually, this is the same subnet that is used as the value of machinesSubnet in the install-config.yaml file.
8.2.1.9. Sample YAML for a compute machine set custom resource on vSphere
This sample YAML defines a compute machine set that runs on VMware vSphere and creates nodes that are labeled with node-role.kubernetes.io/infra: "".
In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and infra is the node label to add.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  creationTimestamp: null
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id>
  name: <infrastructure_id>-infra
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra
  template:
    metadata:
      creationTimestamp: null
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machine-role: infra
        machine.openshift.io/cluster-api-machine-type: infra
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra
    spec:
      metadata:
        creationTimestamp: null
        labels:
          node-role.kubernetes.io/infra: ""
      providerSpec:
        value:
          apiVersion: vsphereprovider.openshift.io/v1beta1
          credentialsSecret:
            name: vsphere-cloud-credentials
          diskGiB: 120
          kind: VSphereMachineProviderSpec
          memoryMiB: 8192
          metadata:
            creationTimestamp: null
          network:
            devices:
            - networkName: "<vm_network_name>"
          numCPUs: 4
          numCoresPerSocket: 1
          snapshot: ""
          template: <vm_template_name>
          userDataSecret:
            name: worker-user-data
          workspace:
            datacenter: <vcenter_datacenter_name>
            datastore: <vcenter_datastore_name>
            folder: <vcenter_vm_folder_path>
            resourcepool: <vsphere_resource_pool>
            server: <vcenter_server_ip>
      taints:
      - key: node-role.kubernetes.io/infra
        effect: NoSchedule
- 1: Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI (oc) installed, you can obtain the infrastructure ID by running the following command:
  $ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
- 2: Specify the infrastructure ID and infra node label.
- 3: Specify the infra node label.
- 4: Specify the vSphere VM network to deploy the compute machine set to. This VM network must be where other compute machines reside in the cluster.
- 5: Specify the vSphere VM template to use, such as user-5ddjd-rhcos.
- 6: Specify the vCenter Datacenter to deploy the compute machine set on.
- 7: Specify the vCenter Datastore to deploy the compute machine set on.
- 8: Specify the path to the vSphere VM folder in vCenter, such as /dc1/vm/user-inst-5ddjd.
- 9: Specify the vSphere resource pool for your VMs.
- 10: Specify the vCenter server IP or fully qualified domain name.
- 11: Specify a taint to prevent user workloads from being scheduled on infra nodes.
  Note: After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled. You must either delete or add a toleration on misscheduled DNS pods.
8.2.2. Creating a compute machine set
In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice.
Prerequisites
- Deploy an OpenShift Container Platform cluster.
- Install the OpenShift CLI (oc).
- Log in to oc as a user with cluster-admin permission.
Procedure
1. Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml.
   Ensure that you set the <clusterID> and <role> parameter values.
2. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster.
   a. To list the compute machine sets in your cluster, run the following command:
      $ oc get machinesets -n openshift-machine-api
      Example output
      NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
      agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1d   0         0                             55m
      agl030519-vplxk-worker-us-east-1e   0         0                             55m
      agl030519-vplxk-worker-us-east-1f   0         0                             55m
   b. To view values of a specific compute machine set custom resource (CR), run the following command:
      $ oc get machineset <machineset_name> \
          -n openshift-machine-api -o yaml
      Example output
      apiVersion: machine.openshift.io/v1beta1
      kind: MachineSet
      metadata:
        labels:
          machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        name: <infrastructure_id>-<role>
        namespace: openshift-machine-api
      spec:
        replicas: 1
        selector:
          matchLabels:
            machine.openshift.io/cluster-api-cluster: <infrastructure_id>
            machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
        template:
          metadata:
            labels:
              machine.openshift.io/cluster-api-cluster: <infrastructure_id>
              machine.openshift.io/cluster-api-machine-role: <role>
              machine.openshift.io/cluster-api-machine-type: <role>
              machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
          spec:
            providerSpec:
            ...
      - 1: The cluster infrastructure ID.
      - 2: A default node label.
        Note: For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines.
      - 3: The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider.
3. Create a MachineSet CR by running the following command:
   $ oc create -f <file_name>.yaml
Verification
- View the list of compute machine sets by running the following command:
  $ oc get machineset -n openshift-machine-api
  Example output
  NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
  agl030519-vplxk-infra-us-east-1a    1         1         1       1           11m
  agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
  agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
  agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
  agl030519-vplxk-worker-us-east-1d   0         0                             55m
  agl030519-vplxk-worker-us-east-1e   0         0                             55m
  agl030519-vplxk-worker-us-east-1f   0         0                             55m
  When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again.
8.2.3. Creating an infrastructure node
See Creating infrastructure machine sets for installer-provisioned infrastructure environments or for any cluster where the control plane nodes are managed by the machine API.
Requirements of the cluster dictate that infrastructure (infra) nodes be provisioned. The installation program provisions only control plane and worker nodes. Worker nodes can be designated as infrastructure nodes through labeling. You can then use taints and tolerations to move appropriate workloads to the infrastructure nodes. For more information, see "Moving resources to infrastructure machine sets".
You can optionally create a default cluster-wide node selector. The default node selector is applied to pods created in all namespaces and creates an intersection with any existing node selectors on a pod, which additionally constrains the pod’s selector.
If the default node selector key conflicts with the key of a pod’s label, then the default node selector is not applied.
However, do not set a default node selector that might cause a pod to become unschedulable. For example, setting the default node selector to a specific node role, such as node-role.kubernetes.io/infra="", when a pod's label is set to a different node role, such as node-role.kubernetes.io/master="", can cause the pod to become unschedulable. For this reason, use caution when setting the default node selector to specific node roles.
You can alternatively use a project node selector to avoid cluster-wide node selector key conflicts.
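As an illustration, a project node selector can be set through an annotation on the namespace. A minimal sketch, assuming a hypothetical project named my-infra-project:

$ oc annotate namespace my-infra-project \
    openshift.io/node-selector="node-role.kubernetes.io/infra="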
Procedure
1. Add a label to the worker nodes that you want to act as infrastructure nodes:
   $ oc label node <node-name> node-role.kubernetes.io/infra=""
2. Check to see if applicable nodes now have the infra role:
   $ oc get nodes
3. Optional: Create a default cluster-wide node selector:
   a. Edit the Scheduler object:
      $ oc edit scheduler cluster
   b. Add the defaultNodeSelector field with the appropriate node selector:
      apiVersion: config.openshift.io/v1
      kind: Scheduler
      metadata:
        name: cluster
      spec:
        defaultNodeSelector: node-role.kubernetes.io/infra=""
      # ...
      This example node selector deploys pods on infrastructure nodes by default.
   c. Save the file to apply the changes.
You can now move infrastructure resources to the new infrastructure nodes. Also, remove any workloads that you do not want, or that do not belong, on the new infrastructure node. See the list of workloads supported for use on infrastructure nodes in "OpenShift Container Platform infrastructure components".
8.2.4. Creating a machine config pool for infrastructure machines
If you need infrastructure machines to have dedicated configurations, you must create an infra pool.
Procedure
1. Add a label to the node you want to assign as the infra node with a specific label:
   $ oc label node <node_name> <label>
   For example:
   $ oc label node ci-ln-n8mqwr2-f76d1-xscn2-worker-c-6fmtx node-role.kubernetes.io/infra=
2. Create a machine config pool that contains both the worker role and your custom role as machine config selector:
   $ cat infra.mcp.yaml
   Example output
   apiVersion: machineconfiguration.openshift.io/v1
   kind: MachineConfigPool
   metadata:
     name: infra
   spec:
     machineConfigSelector:
       matchExpressions:
         - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]}
     nodeSelector:
       matchLabels:
         node-role.kubernetes.io/infra: ""
   - 1: Add the worker role and your custom role as machine config selector.
   - 2: Add the label you added to the node as a node selector.
   Note: Custom machine config pools inherit machine configs from the worker pool. Custom pools use any machine config targeted for the worker pool, but add the ability to also deploy changes that are targeted at only the custom pool. Because a custom pool inherits resources from the worker pool, any change to the worker pool also affects the custom pool.
3. After you have the YAML file, you can create the machine config pool:
   $ oc create -f infra.mcp.yaml
4. Check the machine configs to ensure that the infrastructure configuration rendered successfully:
   $ oc get machineconfig
   Example output
   NAME                                                        GENERATEDBYCONTROLLER                      IGNITIONVERSION   CREATED
   00-master                                                   365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             31d
   00-worker                                                   365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             31d
   01-master-container-runtime                                 365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             31d
   01-master-kubelet                                           365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             31d
   01-worker-container-runtime                                 365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             31d
   01-worker-kubelet                                           365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             31d
   99-master-1ae2a1e0-a115-11e9-8f14-005056899d54-registries   365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             31d
   99-master-ssh                                                                                          3.2.0             31d
   99-worker-1ae64748-a115-11e9-8f14-005056899d54-registries   365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             31d
   99-worker-ssh                                                                                          3.2.0             31d
   rendered-infra-4e48906dca84ee702959c71a53ee80e7             365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             23m
   rendered-master-072d4b2da7f88162636902b074e9e28e            5b6fb8349a29735e48446d435962dec4547d3090   3.2.0             31d
   rendered-master-3e88ec72aed3886dec061df60d16d1af            02c07496ba0417b3e12b78fb32baf6293d314f79   3.2.0             31d
   rendered-master-419bee7de96134963a15fdf9dd473b25            365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             17d
   rendered-master-53f5c91c7661708adce18739cc0f40fb            365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             13d
   rendered-master-a6a357ec18e5bce7f5ac426fc7c5ffcd            365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             7d3h
   rendered-master-dc7f874ec77fc4b969674204332da037            5b6fb8349a29735e48446d435962dec4547d3090   3.2.0             31d
   rendered-worker-1a75960c52ad18ff5dfa6674eb7e533d            5b6fb8349a29735e48446d435962dec4547d3090   3.2.0             31d
   rendered-worker-2640531be11ba43c61d72e82dc634ce6            5b6fb8349a29735e48446d435962dec4547d3090   3.2.0             31d
   rendered-worker-4e48906dca84ee702959c71a53ee80e7            365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             7d3h
   rendered-worker-4f110718fe88e5f349987854a1147755            365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             17d
   rendered-worker-afc758e194d6188677eb837842d3b379            02c07496ba0417b3e12b78fb32baf6293d314f79   3.2.0             31d
   rendered-worker-daa08cc1e8f5fcdeba24de60cd955cc3            365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             13d
   You should see a new machine config, with the rendered-infra-* prefix.
5. Optional: To deploy changes to a custom pool, create a machine config that uses the custom pool name as the label, such as infra. Note that this is not required and only shown for instructional purposes. In this manner, you can apply any custom configurations specific to only your infra nodes.
   Note: After you create the new machine config pool, the MCO generates a new rendered config for that pool, and associated nodes of that pool reboot to apply the new configuration.
   a. Create a machine config:
      $ cat infra.mc.yaml
      Example output
      apiVersion: machineconfiguration.openshift.io/v1
      kind: MachineConfig
      metadata:
        name: 51-infra
        labels:
          machineconfiguration.openshift.io/role: infra
      spec:
        config:
          ignition:
            version: 3.2.0
          storage:
            files:
            - path: /etc/infratest
              mode: 0644
              contents:
                source: data:,infra
      - 1: Add the label you added to the node as a nodeSelector.
   b. Apply the machine config to the infra-labeled nodes:
      $ oc create -f infra.mc.yaml
6. Confirm that your new machine config pool is available:
   $ oc get mcp
   Example output
   NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
   infra    rendered-infra-60e35c2e99f42d976e084fa94da4d0fc    True      False      False      1              1                   1                     0                      4m20s
   master   rendered-master-9360fdb895d4c131c7c4bebbae099c90   True      False      False      3              3                   3                     0                      91m
   worker   rendered-worker-60e35c2e99f42d976e084fa94da4d0fc   True      False      False      2              2                   2                     0                      91m
   In this example, a worker node was changed to an infra node.
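To double-check which nodes carry the infra role after the change, you can list nodes by label:

$ oc get nodes -l node-role.kubernetes.io/infra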
8.3. Assigning machine set resources to infrastructure nodes
After creating an infrastructure machine set, the worker and infra roles are applied to new infra nodes. Nodes with the infra role applied are not counted toward the total number of subscriptions that are required to run the environment, even when the worker role is also applied.
However, because an infra node is also assigned the worker role, user workloads can get inadvertently assigned to it. To avoid this, you can apply a taint to the infra node and tolerations for the pods that you want to control.
8.3.1. Binding infrastructure node workloads using taints and tolerations
If you have an infrastructure node that has the infra and worker roles assigned, you must configure the node so that user workloads are not assigned to it.
Important: It is recommended that you preserve the dual infra,worker label that is created for infra nodes and use taints and tolerations to manage nodes that user workloads are scheduled on. If you remove the worker label from the node, you must create a custom pool to manage it. A node with a label other than master or worker is not recognized by the MCO without a custom pool. Maintaining the worker label allows the node to be managed by the default worker machine config pool, if no custom pools that select the custom label exist. The infra label communicates to the cluster that it does not count toward the total number of subscriptions.
Prerequisites
- Configure additional MachineSet objects in your OpenShift Container Platform cluster.
Procedure
1. Add a taint to the infrastructure node to prevent scheduling user workloads on it:
   a. Determine if the node has the taint:
      $ oc describe nodes <node_name>
      Sample output
      oc describe node ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l
      Name:               ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l
      Roles:              worker
      ...
      Taints:             node-role.kubernetes.io/infra=reserved:NoSchedule
      ...
      This example shows that the node has a taint. You can proceed with adding a toleration to your pod in the next step.
   b. If you have not configured a taint to prevent scheduling user workloads on it:
      $ oc adm taint nodes <node_name> <key>=<value>:<effect>
      For example:
      $ oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoSchedule
      Tip: You can alternatively add the taint by editing the Node object specification:
      apiVersion: v1
      kind: Node
      metadata:
        name: node1
      # ...
      spec:
        taints:
        - key: node-role.kubernetes.io/infra
          value: reserved
          effect: NoSchedule
      # ...
      These examples place a taint on node1 that has the node-role.kubernetes.io/infra key and the NoSchedule taint effect. Nodes with the NoSchedule effect schedule only pods that tolerate the taint, but allow existing pods to remain scheduled on the node.
      If you added a NoSchedule taint to the infrastructure node, any pods that are controlled by a daemon set on that node are marked as misscheduled. You must either delete the pods or add a toleration to the pods, as described in the Red Hat Knowledgebase solution for misscheduled DNS pods. Note that you cannot add a toleration to a daemon set object that is managed by an operator.
      Note: If a descheduler is used, pods violating node taints could be evicted from the cluster.
2. Add tolerations to the pods that you want to schedule on the infrastructure node, such as the router, registry, and monitoring workloads. Referencing the previous examples, add the following tolerations to the Pod object specification:
   apiVersion: v1
   kind: Pod
   metadata:
     annotations:
   # ...
   spec:
   # ...
     tolerations:
     - key: node-role.kubernetes.io/infra
       value: reserved
       effect: NoSchedule
       operator: Equal
   This toleration matches the taint created by the oc adm taint command. A pod with this toleration can be scheduled onto the infrastructure node.
   Note: Moving pods for an Operator installed via OLM to an infrastructure node is not always possible. The capability to move Operator pods depends on the configuration of each Operator.
3. Schedule the pod to the infrastructure node by using a scheduler. See the documentation for "Controlling pod placement using the scheduler" for details.
4. Remove any workloads that you do not want, or that do not belong, on the new infrastructure node. See the list of workloads supported for use on infrastructure nodes in "OpenShift Container Platform infrastructure components".
8.4. Moving resources to infrastructure machine sets
Some of the infrastructure resources are deployed in your cluster by default. You can move them to the infrastructure machine sets that you created by adding the infrastructure node selector, as shown:
apiVersion: imageregistry.operator.openshift.io/v1
kind: Config
metadata:
  name: cluster
# ...
spec:
  nodePlacement:
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/infra: ""
    tolerations:
    - effect: NoSchedule
      key: node-role.kubernetes.io/infra
      value: reserved
    - effect: NoExecute
      key: node-role.kubernetes.io/infra
      value: reserved
- 1: Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration.
Applying a specific node selector to all infrastructure components causes OpenShift Container Platform to schedule those workloads on nodes with that label.
8.4.1. Moving the router
You can deploy the router pod to a different compute machine set. By default, the pod is deployed to a worker node.
Prerequisites
- Configure additional compute machine sets in your OpenShift Container Platform cluster.
Procedure
1. View the IngressController custom resource for the router Operator:
   $ oc get ingresscontroller default -n openshift-ingress-operator -o yaml
   The command output resembles the following text:
   apiVersion: operator.openshift.io/v1
   kind: IngressController
   metadata:
     creationTimestamp: 2019-04-18T12:35:39Z
     finalizers:
     - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller
     generation: 1
     name: default
     namespace: openshift-ingress-operator
     resourceVersion: "11341"
     selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default
     uid: 79509e05-61d6-11e9-bc55-02ce4781844a
   spec: {}
   status:
     availableReplicas: 2
     conditions:
     - lastTransitionTime: 2019-04-18T12:36:15Z
       status: "True"
       type: Available
     domain: apps.<cluster>.example.com
     endpointPublishingStrategy:
       type: LoadBalancerService
     selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default
2. Edit the ingresscontroller resource and change the nodeSelector to use the infra label:
   $ oc edit ingresscontroller default -n openshift-ingress-operator
   apiVersion: operator.openshift.io/v1
   kind: IngressController
   metadata:
     creationTimestamp: "2025-03-26T21:15:43Z"
     finalizers:
     - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller
     generation: 1
     name: default
   # ...
   spec:
     nodePlacement:
       nodeSelector:
         matchLabels:
           node-role.kubernetes.io/infra: ""
       tolerations:
       - effect: NoSchedule
         key: node-role.kubernetes.io/infra
         value: reserved
   # ...
   - 1: Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector parameter in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration.
3. Confirm that the router pod is running on the infra node.
   a. View the list of router pods and note the node name of the running pod:
      $ oc get pod -n openshift-ingress -o wide
      Example output
      NAME                              READY   STATUS        RESTARTS   AGE   IP           NODE                           NOMINATED NODE   READINESS GATES
      router-default-86798b4b5d-bdlvd   1/1     Running       0          28s   10.130.2.4   ip-10-0-217-226.ec2.internal   <none>           <none>
      router-default-955d875f4-255g8    0/1     Terminating   0          19h   10.129.2.4   ip-10-0-148-172.ec2.internal   <none>           <none>
      In this example, the running pod is on the ip-10-0-217-226.ec2.internal node.
   b. View the node status of the running pod:
      $ oc get node <node_name>
      - 1: Specify the <node_name> that you obtained from the pod list.
      Example output
      NAME                           STATUS   ROLES          AGE   VERSION
      ip-10-0-217-226.ec2.internal   Ready    infra,worker   17h   v1.27.3
      Because the role list includes infra, the pod is running on the correct node.
8.4.2. Moving the default registry
You configure the registry Operator to deploy its pods to different nodes.
Prerequisites
- Configure additional compute machine sets in your OpenShift Container Platform cluster.
Procedure
1. View the config/instance object:
   $ oc get configs.imageregistry.operator.openshift.io/cluster -o yaml
   Example output
   apiVersion: imageregistry.operator.openshift.io/v1
   kind: Config
   metadata:
     creationTimestamp: 2019-02-05T13:52:05Z
     finalizers:
     - imageregistry.operator.openshift.io/finalizer
     generation: 1
     name: cluster
     resourceVersion: "56174"
     selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster
     uid: 36fd3724-294d-11e9-a524-12ffeee2931b
   spec:
     httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623
     logging: 2
     managementState: Managed
     proxy: {}
     replicas: 1
     requests:
       read: {}
       write: {}
     storage:
       s3:
         bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c
         region: us-east-1
   status:
   ...
2. Edit the config/instance object:
   $ oc edit configs.imageregistry.operator.openshift.io/cluster
   apiVersion: imageregistry.operator.openshift.io/v1
   kind: Config
   metadata:
     name: cluster
   # ...
   spec:
     logLevel: Normal
     managementState: Managed
     nodeSelector:
       node-role.kubernetes.io/infra: ""
     tolerations:
     - effect: NoSchedule
       key: node-role.kubernetes.io/infra
       value: reserved
   - 1: Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector parameter in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration.
3. Verify the registry pod has been moved to the infrastructure node.
   a. Run the following command to identify the node where the registry pod is located:
      $ oc get pods -o wide -n openshift-image-registry
   b. Confirm the node has the label you specified:
      $ oc describe node <node_name>
      Review the command output and confirm that node-role.kubernetes.io/infra is in the LABELS list.
8.4.3. Moving the monitoring solution
The monitoring stack includes multiple components, including Prometheus, Thanos Querier, and Alertmanager. The Cluster Monitoring Operator manages this stack. To redeploy the monitoring stack to infrastructure nodes, you can create and apply a custom config map.
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role.
- You have created the cluster-monitoring-config ConfigMap object.
- You have installed the OpenShift CLI (oc).
Procedure
1. Edit the cluster-monitoring-config config map and change the nodeSelector to use the infra label:
   $ oc edit configmap cluster-monitoring-config -n openshift-monitoring
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: cluster-monitoring-config
     namespace: openshift-monitoring
   data:
     config.yaml: |+
       alertmanagerMain:
         nodeSelector:
           node-role.kubernetes.io/infra: ""
         tolerations:
         - key: node-role.kubernetes.io/infra
           value: reserved
           effect: NoSchedule
       prometheusK8s:
         nodeSelector:
           node-role.kubernetes.io/infra: ""
         tolerations:
         - key: node-role.kubernetes.io/infra
           value: reserved
           effect: NoSchedule
       prometheusOperator:
         nodeSelector:
           node-role.kubernetes.io/infra: ""
         tolerations:
         - key: node-role.kubernetes.io/infra
           value: reserved
           effect: NoSchedule
       k8sPrometheusAdapter:
         nodeSelector:
           node-role.kubernetes.io/infra: ""
         tolerations:
         - key: node-role.kubernetes.io/infra
           value: reserved
           effect: NoSchedule
       kubeStateMetrics:
         nodeSelector:
           node-role.kubernetes.io/infra: ""
         tolerations:
         - key: node-role.kubernetes.io/infra
           value: reserved
           effect: NoSchedule
       telemeterClient:
         nodeSelector:
           node-role.kubernetes.io/infra: ""
         tolerations:
         - key: node-role.kubernetes.io/infra
           value: reserved
           effect: NoSchedule
       openshiftStateMetrics:
         nodeSelector:
           node-role.kubernetes.io/infra: ""
         tolerations:
         - key: node-role.kubernetes.io/infra
           value: reserved
           effect: NoSchedule
       thanosQuerier:
         nodeSelector:
           node-role.kubernetes.io/infra: ""
         tolerations:
         - key: node-role.kubernetes.io/infra
           value: reserved
           effect: NoSchedule
       monitoringPlugin:
         nodeSelector:
           node-role.kubernetes.io/infra: ""
         tolerations:
         - key: node-role.kubernetes.io/infra
           value: reserved
           effect: NoSchedule
   - 1: Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector parameter in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration.
2. Watch the monitoring pods move to the new machines:
   $ watch 'oc get pod -n openshift-monitoring -o wide'
3. If a component has not moved to the infra node, delete the pod with this component:
   $ oc delete pod -n openshift-monitoring <pod>
   The component from the deleted pod is re-created on the infra node.