Chapter 10. Moving the cluster logging resources with node selectors
You use node selectors to deploy the Elasticsearch, Kibana, and Curator pods to different nodes.
10.1. Moving the cluster logging resources
You can configure the Cluster Logging Operator to deploy the pods for any or all of the Cluster Logging components, Elasticsearch, Kibana, and Curator, to different nodes. You cannot move the Cluster Logging Operator pod from its installed location.
For example, you can move the Elasticsearch pods to a separate node because of their high CPU, memory, and disk requirements. If you do, you should set your MachineSet to use at least 6 replicas.
Prerequisites
- Cluster logging and Elasticsearch must be installed. These features are not installed by default.
Procedure
Edit the Cluster Logging Custom Resource (CR) in the openshift-logging project:

$ oc edit ClusterLogging instance
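For example, to schedule the Elasticsearch, Kibana, and Curator pods onto nodes that carry an infrastructure label, you can add a nodeSelector under each component in the CR spec. The following CR is an abbreviated sketch: the node-role.kubernetes.io/infra: '' label, the node count, and the Curator schedule are illustrative values that you should adjust to match your cluster.

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3                          # illustrative node count
      redundancyPolicy: SingleRedundancy
      nodeSelector:
        node-role.kubernetes.io/infra: ''   # schedule Elasticsearch pods onto labeled nodes
      storage: {}
  visualization:
    type: kibana
    kibana:
      replicas: 1
      nodeSelector:
        node-role.kubernetes.io/infra: ''   # schedule the Kibana pod onto labeled nodes
  curation:
    type: curator
    curator:
      schedule: '30 3 * * *'                # illustrative schedule
      nodeSelector:
        node-role.kubernetes.io/infra: ''   # schedule the Curator pod onto labeled nodes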
Verification steps
To verify that a component has moved, you can use the oc get pod -o wide command.
For example:
You want to move the Kibana pod from the ip-10-0-147-79.us-east-2.compute.internal node:

$ oc get pod kibana-5b8bdf44f9-ccpq9 -o wide

NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
kibana-5b8bdf44f9-ccpq9   2/2     Running   0          27s   10.129.2.18   ip-10-0-147-79.us-east-2.compute.internal   <none>           <none>
You want to move the Kibana pod to the ip-10-0-139-48.us-east-2.compute.internal node, a dedicated infrastructure node.
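You can confirm the target node and its labels before editing the CR. The command below is a sketch; the status, roles, and other columns are illustrative and abbreviated:

$ oc get node ip-10-0-139-48.us-east-2.compute.internal --show-labels

NAME                                        STATUS   ROLES          AGE   VERSION   LABELS
ip-10-0-139-48.us-east-2.compute.internal   Ready    infra,worker   ...   ...       ...,node-role.kubernetes.io/infra=,node-role.kubernetes.io/worker=,...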
Note that the node has a node-role.kubernetes.io/infra: '' label. To move the Kibana pod, edit the Cluster Logging CR to add a node selector:
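The relevant portion of the edited CR might look like the following sketch. Only the visualization stanza is shown here, and the nodeSelector value is assumed to match the label on the infrastructure node above:

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  visualization:
    type: kibana
    kibana:
      replicas: 1
      nodeSelector:
        node-role.kubernetes.io/infra: ''   # pins the Kibana pod to the labeled infra node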
After you save the CR, the current Kibana pod is terminated and a new pod is deployed:
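You can watch the rollout with oc get pods. The listing below is a sketch that reuses the pod names from this example; the ages are illustrative and unrelated pods are omitted:

$ oc get pods -n openshift-logging

NAME                      READY   STATUS        RESTARTS   AGE
kibana-5b8bdf44f9-ccpq9   2/2     Terminating   0          4m
kibana-7d85dcffc8-bfpfp   2/2     Running       0          33s
...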
The new pod is on the ip-10-0-139-48.us-east-2.compute.internal node:

$ oc get pod kibana-7d85dcffc8-bfpfp -o wide

NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
kibana-7d85dcffc8-bfpfp   2/2     Running   0          43s   10.131.0.22   ip-10-0-139-48.us-east-2.compute.internal   <none>           <none>
After a few moments, the original Kibana pod is removed.
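A final listing should show only the new pod. Again, this is a sketch with unrelated pods omitted:

$ oc get pods -n openshift-logging

NAME                      READY   STATUS    RESTARTS   AGE
kibana-7d85dcffc8-bfpfp   2/2     Running   0          43s
...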