Chapter 6. Creating infrastructure MachineSets
You can create a MachineSet to host only infrastructure components. You apply specific Kubernetes labels to these Machines and then update the infrastructure components to run on only those Machines. These infrastructure nodes are not counted toward the total number of subscriptions that are required to run the environment.
Unlike earlier versions of OpenShift Container Platform, you cannot move the infrastructure components to the master Machines. To move the components, you must create a new MachineSet.
6.1. OpenShift Container Platform infrastructure components
The following OpenShift Container Platform components are infrastructure components:
- Kubernetes and OpenShift Container Platform control plane services that run on masters
- The default router
- The container image registry
- The cluster metrics collection, or monitoring service
- Cluster aggregated logging
- Service brokers
Any node that runs any other container, pod, or component is a worker node that your subscription must cover.
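After you create infrastructure nodes, you can list them by their role label to confirm which nodes carry only infrastructure components. This is an optional check, and it assumes the node-role.kubernetes.io/infra label that is used later in this chapter:

$ oc get nodes -l node-role.kubernetes.io/infra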
6.2. Creating infrastructure MachineSets for production environments
In a production deployment, deploy at least three MachineSets to hold infrastructure components. Both the logging aggregation solution and the service mesh deploy Elasticsearch, and Elasticsearch requires three instances that are installed on different nodes. For high availability, deploy these nodes to different availability zones. Because you need a different MachineSet for each availability zone, create at least three MachineSets.
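For example, if you follow the naming convention from the sample Custom Resource in the next section, three infrastructure MachineSets that span three availability zones of the us-east-1 region would be named as follows; the <clusterID> placeholder stands for your own cluster ID:

<clusterID>-infra-us-east-1a
<clusterID>-infra-us-east-1b
<clusterID>-infra-us-east-1c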
6.2.1. Sample YAML for a MachineSet Custom Resource
This sample YAML defines a MachineSet that runs in the us-east-1a Amazon Web Services (AWS) availability zone and creates nodes that are labeled with node-role.kubernetes.io/<role>: "".
In this sample, <clusterID> is the cluster ID that you set when you provisioned the cluster and <role> is the node label to add.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <clusterID> 1
  name: <clusterID>-<role>-us-east-1a 2
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <clusterID> 3
      machine.openshift.io/cluster-api-machineset: <clusterID>-<role>-us-east-1a 4
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <clusterID> 5
        machine.openshift.io/cluster-api-machine-role: <role> 6
        machine.openshift.io/cluster-api-machine-type: <role> 7
        machine.openshift.io/cluster-api-machineset: <clusterID>-<role>-us-east-1a 8
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/<role>: "" 9
      providerSpec:
        value:
          ami:
            id: ami-046fe691f52a953f9 10
          apiVersion: awsproviderconfig.openshift.io/v1beta1
          blockDevices:
          - ebs:
              iops: 0
              volumeSize: 120
              volumeType: gp2
          credentialsSecret:
            name: aws-cloud-credentials
          deviceIndex: 0
          iamInstanceProfile:
            id: <clusterID>-worker-profile 11
          instanceType: m4.large
          kind: AWSMachineProviderConfig
          placement:
            availabilityZone: us-east-1a
            region: us-east-1
          securityGroups:
          - filters:
            - name: tag:Name
              values:
              - <clusterID>-worker-sg 12
          subnet:
            filters:
            - name: tag:Name
              values:
              - <clusterID>-private-us-east-1a 13
          tags:
          - name: kubernetes.io/cluster/<clusterID> 14
            value: owned
          userDataSecret:
            name: worker-user-data
- 1 3 5 11 12 13 14: Specify the cluster ID that you set when you provisioned the cluster.
- 2 4 8: Specify the cluster ID and node label.
- 6 7 9: Specify the node label to add.
- 10: Specify a valid Red Hat Enterprise Linux CoreOS (RHCOS) AMI for your Amazon Web Services (AWS) zone for your OpenShift Container Platform nodes.
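For illustration only, the metadata of an infrastructure MachineSet that uses the infra role and the agl030519-vplxk cluster ID shown in the command output later in this chapter would begin as follows; substitute your own cluster ID:

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: agl030519-vplxk
  name: agl030519-vplxk-infra-us-east-1a
  namespace: openshift-machine-api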
6.2.2. Creating a MachineSet
In addition to the MachineSets that the installation program creates, you can create your own MachineSets to dynamically manage the machine compute resources for the specific workloads of your choice.
Prerequisites
- Deploy an OpenShift Container Platform cluster.
- Install the OpenShift Command-line Interface (CLI), commonly known as oc.
- Log in to oc as a user with cluster-admin permission.
Procedure
Create a new YAML file that contains the MachineSet Custom Resource sample, as shown, and name the file <file_name>.yaml. Ensure that you set the <clusterID> and <role> parameter values.
If you are not sure which value to set for a specific field, you can check an existing MachineSet from your cluster:
$ oc get machinesets -n openshift-machine-api

NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1d   0         0                             55m
agl030519-vplxk-worker-us-east-1e   0         0                             55m
agl030519-vplxk-worker-us-east-1f   0         0                             55m
Check the values of a specific MachineSet:
$ oc get machineset <machineset_name> -n \
     openshift-machine-api -o yaml

....

template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: agl030519-vplxk 1
        machine.openshift.io/cluster-api-machine-role: worker 2
        machine.openshift.io/cluster-api-machine-type: worker
        machine.openshift.io/cluster-api-machineset: agl030519-vplxk-worker-us-east-1a
Create the new MachineSet:

$ oc create -f <file_name>.yaml
View the list of MachineSets:
$ oc get machineset -n openshift-machine-api

NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
agl030519-vplxk-infra-us-east-1a    1         1         1       1           11m
agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1d   0         0                             55m
agl030519-vplxk-worker-us-east-1e   0         0                             55m
agl030519-vplxk-worker-us-east-1f   0         0                             55m
When the new MachineSet is available, the DESIRED and CURRENT values match. If the MachineSet is not available, wait a few minutes and run the command again.
After the new MachineSet is available, check the status of the machine and the node that it references:
$ oc describe machine <name> -n openshift-machine-api
For example:
$ oc describe machine agl030519-vplxk-infra-us-east-1a -n openshift-machine-api

status:
  addresses:
  - address: 10.0.133.18
    type: InternalIP
  - address: ""
    type: ExternalDNS
  - address: ip-10-0-133-18.ec2.internal
    type: InternalDNS
  lastUpdated: "2019-05-03T10:38:17Z"
  nodeRef:
    kind: Node
    name: ip-10-0-133-18.ec2.internal
    uid: 71fb8d75-6d8f-11e9-9ff3-0e3f103c7cd8
  providerStatus:
    apiVersion: awsproviderconfig.openshift.io/v1beta1
    conditions:
    - lastProbeTime: "2019-05-03T10:34:31Z"
      lastTransitionTime: "2019-05-03T10:34:31Z"
      message: machine successfully created
      reason: MachineCreationSucceeded
      status: "True"
      type: MachineCreation
    instanceId: i-09ca0701454124294
    instanceState: running
    kind: AWSMachineProviderStatus
View the new node and confirm that the new node has the label that you specified:

$ oc get node <node_name> --show-labels

Review the command output and confirm that node-role.kubernetes.io/<your_label> is in the LABELS list.
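If the LABELS list is long, you can optionally filter the output to the role labels. This example pipes the same command through grep and is not part of the documented procedure:

$ oc get node <node_name> --show-labels | grep node-role.kubernetes.io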
Changes to a MachineSet are not applied to existing machines that the MachineSet owns. For example, labels edited or added to an existing MachineSet are not propagated to existing machines and Nodes associated with the MachineSet.
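Because new machines created by a MachineSet do pick up its current labels, one general way to apply changed labels is to scale the MachineSet so that additional machines are created with the updated configuration. This is an illustrative command, not a step in this procedure, and the replica count is a placeholder:

$ oc scale machineset <machineset_name> -n openshift-machine-api --replicas=<count>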
Next steps
If you need MachineSets in other availability zones, repeat this process to create more MachineSets.
6.3. Moving resources to infrastructure MachineSets
Some of the infrastructure resources are deployed in your cluster by default. You can move them to the infrastructure MachineSets that you created.
6.3.1. Moving the router
You can deploy the router Pod to a different MachineSet. By default, the Pod is deployed to a worker node.
Prerequisites
- Configure additional MachineSets in your OpenShift Container Platform cluster.
Procedure
View the IngressController Custom Resource for the router Operator:

$ oc get ingresscontroller default -n openshift-ingress-operator -o yaml
The command output resembles the following text:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  creationTimestamp: 2019-04-18T12:35:39Z
  finalizers:
  - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller
  generation: 1
  name: default
  namespace: openshift-ingress-operator
  resourceVersion: "11341"
  selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default
  uid: 79509e05-61d6-11e9-bc55-02ce4781844a
spec: {}
status:
  availableReplicas: 2
  conditions:
  - lastTransitionTime: 2019-04-18T12:36:15Z
    status: "True"
    type: Available
  domain: apps.<cluster>.example.com
  endpointPublishingStrategy:
    type: LoadBalancerService
  selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default
Edit the ingresscontroller resource and change the nodeSelector to use the infra label:

$ oc edit ingresscontroller default -n openshift-ingress-operator -o yaml
Add the nodeSelector stanza that references the infra label to the spec section, as shown:

spec:
  nodePlacement:
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/infra: ""
Confirm that the router pod is running on the infra node.
View the list of router pods and note the node name of the running pod:

$ oc get pod -n openshift-ingress -o wide

NAME                              READY   STATUS        RESTARTS   AGE   IP           NODE                           NOMINATED NODE   READINESS GATES
router-default-86798b4b5d-bdlvd   1/1     Running       0          28s   10.130.2.4   ip-10-0-217-226.ec2.internal   <none>           <none>
router-default-955d875f4-255g8    0/1     Terminating   0          19h   10.129.2.4   ip-10-0-148-172.ec2.internal   <none>           <none>
In this example, the running pod is on the ip-10-0-217-226.ec2.internal node.
View the node status of the running pod:

$ oc get node <node_name> 1

NAME                           STATUS   ROLES          AGE   VERSION
ip-10-0-217-226.ec2.internal   Ready    infra,worker   17h   v1.11.0+406fc897d8
- 1: Specify the <node_name> that you obtained from the pod list.
Because the role list includes infra, the pod is running on the correct node.
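Optionally, you can also confirm that the nodePlacement setting was saved in the IngressController resource. This check uses grep and is not part of the documented procedure:

$ oc get ingresscontroller default -n openshift-ingress-operator -o yaml | grep -A 4 nodePlacement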
6.3.2. Moving the default registry
You can configure the registry Operator to deploy its pods to different nodes.
Prerequisites
- Configure additional MachineSets in your OpenShift Container Platform cluster.
Procedure
View the config/cluster object:

$ oc get config/cluster -o yaml
The output resembles the following text:
apiVersion: imageregistry.operator.openshift.io/v1
kind: Config
metadata:
  creationTimestamp: 2019-02-05T13:52:05Z
  finalizers:
  - imageregistry.operator.openshift.io/finalizer
  generation: 1
  name: cluster
  resourceVersion: "56174"
  selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster
  uid: 36fd3724-294d-11e9-a524-12ffeee2931b
spec:
  httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623
  logging: 2
  managementState: Managed
  proxy: {}
  replicas: 1
  requests:
    read: {}
    write: {}
  storage:
    s3:
      bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c
      region: us-east-1
status:
...
Edit the config/cluster object:

$ oc edit config/cluster
Add the following lines of text to the spec section of the object:

  nodeSelector:
    node-role.kubernetes.io/infra: ""
After you save and exit, you can watch the registry pod move to the infrastructure node.
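To watch the move, you can list the registry pods with their node placement. This example assumes the default openshift-image-registry namespace:

$ oc get pod -n openshift-image-registry -o wide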
6.3.3. Moving the monitoring solution
By default, the Prometheus Cluster Monitoring stack, which contains Prometheus, Grafana, and AlertManager, is deployed to provide cluster monitoring. It is managed by the Cluster Monitoring Operator. To move its components to different machines, you create and apply a custom ConfigMap.
Procedure
Save the following ConfigMap definition as the cluster-monitoring-configmap.yaml file:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |+
    alertmanagerMain:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    prometheusK8s:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    prometheusOperator:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    grafana:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    k8sPrometheusAdapter:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    kubeStateMetrics:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    telemeterClient:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
Applying this ConfigMap forces the components of the monitoring stack to redeploy to infrastructure nodes.
Apply the new ConfigMap:
$ oc create -f cluster-monitoring-configmap.yaml
Watch the monitoring Pods move to the new machines:
$ watch 'oc get pod -n openshift-monitoring -o wide'
6.3.4. Moving the cluster logging resources
You can configure the Cluster Logging Operator to deploy the pods for any or all of the Cluster Logging components (Elasticsearch, Kibana, and Curator) to different nodes. You cannot move the Cluster Logging Operator pod from its installed location.
For example, you can move the Elasticsearch pods to a separate node because of their high CPU, memory, and disk requirements.
You should set your MachineSet to use at least 6 replicas.
Prerequisites
- Cluster logging and Elasticsearch must be installed. These features are not installed by default.
Procedure
Edit the Cluster Logging Custom Resource in the openshift-logging project:

$ oc edit ClusterLogging instance
apiVersion: logging.openshift.io/v1
kind: ClusterLogging

....

spec:
  collection:
    logs:
      fluentd:
        resources: null
      rsyslog:
        resources: null
      type: fluentd
  curation:
    curator:
      nodeSelector: 1
        node-role.kubernetes.io/infra: ''
      resources: null
      schedule: 30 3 * * *
      type: curator
  logStore:
    elasticsearch:
      nodeCount: 3
      nodeSelector: 2
        node-role.kubernetes.io/infra: ''
      redundancyPolicy: SingleRedundancy
      resources:
        limits:
          cpu: 500m
          memory: 16Gi
        requests:
          cpu: 500m
          memory: 16Gi
      storage: {}
    type: elasticsearch
  managementState: Managed
  visualization:
    kibana:
      nodeSelector: 3
        node-role.kubernetes.io/infra: '' 4
      proxy:
        resources: null
      replicas: 1
      resources: null
    type: kibana

....
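After you save the Custom Resource, you can verify that the Elasticsearch, Kibana, and Curator pods are scheduled on nodes that carry the infra label. This verification step is an optional check that assumes the openshift-logging project shown above:

$ oc get pod -n openshift-logging -o wide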