6.4. Moving resources to infrastructure machine sets
Some of the infrastructure resources are deployed in your cluster by default. You can move them to the infrastructure machine sets that you created.
6.4.1. Moving the router
You can deploy the router pod to a different machine set. By default, the pod is deployed to a worker node.
Prerequisites
- Configure additional machine sets in your OpenShift Container Platform cluster.
Procedure
View the IngressController custom resource for the router Operator:

$ oc get ingresscontroller default -n openshift-ingress-operator -o yaml

The command output resembles the following text:
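The full resource is long; the following is an abbreviated sketch of the fields relevant to this procedure, assuming a default installation. The status values and cluster domain are placeholders:

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec: {}
status:
  availableReplicas: 2
  domain: apps.<cluster_domain>
  selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default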
Edit the ingresscontroller resource and change the nodeSelector to use the infra label:

$ oc edit ingresscontroller default -n openshift-ingress-operator
Add the nodeSelector stanza that references the infra label to the spec section, as shown:

spec:
  nodePlacement:
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/infra: ""
Confirm that the router pod is running on the infra node.

View the list of router pods and note the node name of the running pod:

$ oc get pod -n openshift-ingress -o wide

Example output
NAME                              READY   STATUS        RESTARTS   AGE   IP           NODE                           NOMINATED NODE   READINESS GATES
router-default-86798b4b5d-bdlvd   1/1     Running       0          28s   10.130.2.4   ip-10-0-217-226.ec2.internal   <none>           <none>
router-default-955d875f4-255g8    0/1     Terminating   0          19h   10.129.2.4   ip-10-0-148-172.ec2.internal   <none>           <none>
In this example, the running pod is on the ip-10-0-217-226.ec2.internal node.

View the node status of the running pod:

$ oc get node <node_name>

Specify the <node_name> that you obtained from the pod list.
Example output
NAME                           STATUS   ROLES          AGE   VERSION
ip-10-0-217-226.ec2.internal   Ready    infra,worker   17h   v1.18.3

Because the role list includes infra, the pod is running on the correct node.
6.4.2. Moving the default registry
You configure the registry Operator to deploy its pods to different nodes.
Prerequisites
- Configure additional machine sets in your OpenShift Container Platform cluster.
Procedure
View the config/instance object:

$ oc get configs.imageregistry.operator.openshift.io/cluster -o yaml

Example output
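The output is abbreviated here to a representative sketch. The storage stanza depends on your platform and is an assumption (shown for an AWS cluster); the remaining spec values are typical defaults:

apiVersion: imageregistry.operator.openshift.io/v1
kind: Config
metadata:
  name: cluster
spec:
  managementState: Managed
  proxy: {}
  replicas: 1
  requests:
    read: {}
    write: {}
  storage:
    s3:
      bucket: <bucket_name>
      region: us-east-1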
Edit the config/instance object:

$ oc edit configs.imageregistry.operator.openshift.io/cluster
Add the following lines of text to the spec section of the object:

nodeSelector:
  node-role.kubernetes.io/infra: ""
Verify the registry pod has been moved to the infrastructure node.

Run the following command to identify the node where the registry pod is located:

$ oc get pods -o wide -n openshift-image-registry
Confirm the node has the label you specified:

$ oc describe node <node_name>

Review the command output and confirm that node-role.kubernetes.io/infra is in the LABELS list.
6.4.3. Moving the monitoring solution
By default, the Prometheus Cluster Monitoring stack, which contains Prometheus, Grafana, and AlertManager, is deployed to provide cluster monitoring. It is managed by the Cluster Monitoring Operator. To move its components to different machines, you create and apply a custom config map.
Procedure
Save the following ConfigMap definition as the cluster-monitoring-configmap.yaml file. Running this config map forces the components of the monitoring stack to redeploy to infrastructure nodes.
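The component keys below follow the standard cluster-monitoring-config structure; the exact set of supported keys can vary between releases, so treat this as a representative sketch rather than a definitive list:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    alertmanagerMain:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    prometheusK8s:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    prometheusOperator:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    grafana:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    k8sPrometheusAdapter:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    kubeStateMetrics:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    telemeterClient:
      nodeSelector:
        node-role.kubernetes.io/infra: ""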
Apply the new config map:

$ oc create -f cluster-monitoring-configmap.yaml
Watch the monitoring pods move to the new machines:

$ watch 'oc get pod -n openshift-monitoring -o wide'
If a component has not moved to the infra node, delete the pod with this component:

$ oc delete pod -n openshift-monitoring <pod>

The component from the deleted pod is re-created on the infra node.
Additional resources
- See the monitoring documentation for the general instructions on moving OpenShift Container Platform components.
6.4.4. Moving the cluster logging resources
You can configure the Cluster Logging Operator to deploy the pods for any or all of the Cluster Logging components (Elasticsearch, Kibana, and Curator) to different nodes. You cannot move the Cluster Logging Operator pod from its installed location.
For example, you can move the Elasticsearch pods to a separate node because of high CPU, memory, and disk requirements.
Prerequisites
- Cluster logging and Elasticsearch must be installed. These features are not installed by default.
Procedure
Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

$ oc edit ClusterLogging instance
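A representative CR, assuming the logging.openshift.io/v1 API and an instance named instance, with a node selector on each movable component; the nodeCount, redundancy, schedule, and replica values are illustrative:

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      redundancyPolicy: SingleRedundancy
  visualization:
    type: kibana
    kibana:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      replicas: 1
  curation:
    type: curator
    curator:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      schedule: "30 3 * * *"
  collection:
    logs:
      type: fluentd
      fluentd: {}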
Verification

To verify that a component has moved, you can use the oc get pod -o wide command. For example:
You want to move the Kibana pod from the ip-10-0-147-79.us-east-2.compute.internal node:

$ oc get pod kibana-5b8bdf44f9-ccpq9 -o wide

Example output
NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
kibana-5b8bdf44f9-ccpq9   2/2     Running   0          27s   10.129.2.18   ip-10-0-147-79.us-east-2.compute.internal   <none>           <none>
You want to move the Kibana pod to the ip-10-0-139-48.us-east-2.compute.internal node, a dedicated infrastructure node:

$ oc get nodes

Example output
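A representative listing that reuses the two node names from this example; the ages are illustrative:

NAME                                        STATUS   ROLES          AGE   VERSION
ip-10-0-139-48.us-east-2.compute.internal   Ready    infra,worker   4h    v1.18.3
ip-10-0-147-79.us-east-2.compute.internal   Ready    worker         17h   v1.18.3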
Note that the node has a node-role.kubernetes.io/infra: '' label:

$ oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml

Example output
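An abbreviated sketch of the node metadata; labels other than the role labels are typical platform defaults and are assumptions:

kind: Node
apiVersion: v1
metadata:
  name: ip-10-0-139-48.us-east-2.compute.internal
  labels:
    kubernetes.io/hostname: ip-10-0-139-48
    node-role.kubernetes.io/infra: ""
    node-role.kubernetes.io/worker: ""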
To move the Kibana pod, edit the ClusterLogging CR to add a node selector that matches the label in the node specification, as shown in the sketch that follows.
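Only the visualization section of the CR is shown here; the rest of the spec is unchanged:

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
...
spec:
  ...
  visualization:
    type: kibana
    kibana:
      # Node selector matching the node-role.kubernetes.io/infra label on the target node
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      replicas: 1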
After you save the CR, the current Kibana pod is terminated and a new pod is deployed:

$ oc get pods

Example output
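Only the Kibana pods are shown, reusing the pod names from this example; the ages are illustrative:

NAME                      READY   STATUS        RESTARTS   AGE
kibana-5b8bdf44f9-ccpq9   2/2     Terminating   0          4m
kibana-7d85dcffc8-bfpfp   2/2     Running       0          33s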
The new pod is on the ip-10-0-139-48.us-east-2.compute.internal node:

$ oc get pod kibana-7d85dcffc8-bfpfp -o wide

Example output
NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
kibana-7d85dcffc8-bfpfp   2/2     Running   0          43s   10.131.0.22   ip-10-0-139-48.us-east-2.compute.internal   <none>           <none>
After a few moments, the original Kibana pod is removed:

$ oc get pods

Example output
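Again, only the Kibana pod is shown; the age is illustrative:

NAME                      READY   STATUS    RESTARTS   AGE
kibana-7d85dcffc8-bfpfp   2/2     Running   0          2m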