Chapter 5. Troubleshooting logging
5.1. Viewing logging status
You can view the status of the Red Hat OpenShift Logging Operator and other logging components.
5.1.1. Viewing the status of the Red Hat OpenShift Logging Operator
You can view the status of the Red Hat OpenShift Logging Operator.
Prerequisites
- The Red Hat OpenShift Logging Operator and OpenShift Elasticsearch Operator are installed.
Procedure
Change to the openshift-logging project by running the following command:

$ oc project openshift-logging

Get the ClusterLogging instance status by running the following command:

$ oc get clusterlogging instance -o yaml

Example output
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
# ...
status:
  collection:
    logs:
      fluentdStatus:
        daemonSet: fluentd
        nodes:
          collector-2rhqp: ip-10-0-169-13.ec2.internal
          collector-6fgjh: ip-10-0-165-244.ec2.internal
          collector-6l2ff: ip-10-0-128-218.ec2.internal
          collector-54nx5: ip-10-0-139-30.ec2.internal
          collector-flpnn: ip-10-0-147-228.ec2.internal
          collector-n2frh: ip-10-0-157-45.ec2.internal
        pods:
          failed: []
          notReady: []
          ready:
          - collector-2rhqp
          - collector-54nx5
          - collector-6fgjh
          - collector-6l2ff
          - collector-flpnn
          - collector-n2frh
  logstore:
    elasticsearchStatus:
    - ShardAllocationEnabled: all
      cluster:
        activePrimaryShards: 5
        activeShards: 5
        initializingShards: 0
        numDataNodes: 1
        numNodes: 1
        pendingTasks: 0
        relocatingShards: 0
        status: green
        unassignedShards: 0
      clusterName: elasticsearch
      nodeConditions:
        elasticsearch-cdm-mkkdys93-1:
      nodeCount: 1
      pods:
        client:
          failed:
          notReady:
          ready:
          - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c
        data:
          failed:
          notReady:
          ready:
          - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c
        master:
          failed:
          notReady:
          ready:
          - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c
  visualization:
    kibanaStatus:
    - deployment: kibana
      pods:
        failed: []
        notReady: []
        ready:
        - kibana-7fb4fd4cc9-f2nls
      replicaSets:
      - kibana-7fb4fd4cc9
      replicas: 1
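If you need only a specific field from this status, you can filter the output with a JSONPath expression instead of reading the full YAML. The following is a minimal sketch; the field path follows the example output above and prints the Elasticsearch cluster health, such as green:

$ oc get clusterlogging instance -o jsonpath='{.status.logstore.elasticsearchStatus[0].cluster.status}'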
5.1.1.1. Example condition messages
The following are examples of some condition messages from the Status.Nodes section of the ClusterLogging instance.
A status message similar to the following indicates that a node has exceeded the configured low watermark, and no shard will be allocated to this node.
Example output
nodes:
- conditions:
- lastTransitionTime: 2019-03-15T15:57:22Z
message: Disk storage usage for node is 27.5gb (36.74%). Shards will be not
be allocated on this node.
reason: Disk Watermark Low
status: "True"
type: NodeStorage
deploymentName: example-elasticsearch-clientdatamaster-0-1
upgradeStatus: {}
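To confirm the disk usage behind a watermark condition, you can query the Elasticsearch cat API from inside an Elasticsearch pod. This is a sketch rather than part of the documented procedure: the pod name is taken from the earlier example output, and it assumes the es_util helper available in the OpenShift Elasticsearch image:

# Show disk usage and shard counts per Elasticsearch node
$ oc exec -n openshift-logging -c elasticsearch \
    elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c -- es_util --query='_cat/allocation?v'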
A status message similar to the following indicates that a node has exceeded the configured high watermark, and shards will be relocated to other nodes.
Example output
nodes:
- conditions:
- lastTransitionTime: 2019-03-15T16:04:45Z
message: Disk storage usage for node is 27.5gb (36.74%). Shards will be relocated
from this node.
reason: Disk Watermark High
status: "True"
type: NodeStorage
deploymentName: cluster-logging-operator
upgradeStatus: {}
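When the high watermark is exceeded, you can watch the shards being moved off the node. A sketch under the same assumptions as the previous example; shards listed in the RELOCATING state are in the middle of a move:

$ oc exec -n openshift-logging -c elasticsearch \
    elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c -- es_util --query='_cat/shards?v'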
A status message similar to the following indicates that the Elasticsearch node selector in the CR does not match any nodes in the cluster:
Example output
Elasticsearch Status:
Shard Allocation Enabled: shard allocation unknown
Cluster:
Active Primary Shards: 0
Active Shards: 0
Initializing Shards: 0
Num Data Nodes: 0
Num Nodes: 0
Pending Tasks: 0
Relocating Shards: 0
Status: cluster health unknown
Unassigned Shards: 0
Cluster Name: elasticsearch
Node Conditions:
elasticsearch-cdm-mkkdys93-1:
Last Transition Time: 2019-06-26T03:37:32Z
Message: 0/5 nodes are available: 5 node(s) didn't match node selector.
Reason: Unschedulable
Status: True
Type: Unschedulable
elasticsearch-cdm-mkkdys93-2:
Node Count: 2
Pods:
Client:
Failed:
Not Ready:
elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49
elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl
Ready:
Data:
Failed:
Not Ready:
elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49
elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl
Ready:
Master:
Failed:
Not Ready:
elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49
elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl
Ready:
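To see why the selector does not match, compare the node selector configured in the ClusterLogging CR against the labels on your nodes. A minimal sketch; the JSONPath assumes the standard spec.logStore.elasticsearch.nodeSelector field of the ClusterLogging API:

# Print the node selector configured for the Elasticsearch log store
$ oc get clusterlogging instance -o jsonpath='{.spec.logStore.elasticsearch.nodeSelector}'

# List cluster nodes with their labels to check for a match
$ oc get nodes --show-labels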
A status message similar to the following indicates that the requested PVC could not bind to a PV:
Example output
Node Conditions:
elasticsearch-cdm-mkkdys93-1:
Last Transition Time: 2019-06-26T03:37:32Z
Message: pod has unbound immediate PersistentVolumeClaims (repeated 5 times)
Reason: Unschedulable
Status: True
Type: Unschedulable
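To investigate an unbound claim, list the PVCs in the openshift-logging project and describe the pending one. The claim name below is illustrative only; substitute a name from the first command's output:

$ oc get pvc -n openshift-logging

# The Events section of the describe output typically explains why binding failed
$ oc describe pvc elasticsearch-elasticsearch-cdm-mkkdys93-1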
A status message similar to the following indicates that the Fluentd pods cannot be scheduled because the node selector did not match any nodes:
Example output
Status:
Collection:
Logs:
Fluentd Status:
Daemon Set: fluentd
Nodes:
Pods:
Failed:
Not Ready:
Ready:
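To check the selector that prevents scheduling, you can inspect the collector daemon set directly. A sketch assuming the fluentd daemon set name shown in the status above:

# Print the node selector on the Fluentd pod template
$ oc get daemonset fluentd -n openshift-logging -o jsonpath='{.spec.template.spec.nodeSelector}'

# The Events section lists the scheduling failures in detail
$ oc describe daemonset fluentd -n openshift-logging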
5.1.2. Viewing the status of logging components
You can view the status of a number of logging components.
Prerequisites
- The Red Hat OpenShift Logging Operator and OpenShift Elasticsearch Operator are installed.
Procedure
Change to the openshift-logging project:

$ oc project openshift-logging

View the status of the logging environment:

$ oc describe deployment cluster-logging-operator

Example output
Name:                   cluster-logging-operator

....

Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable

....

Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  62m   deployment-controller  Scaled up replica set cluster-logging-operator-574b8987df to 1
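If the deployment reports a failing condition, the operator logs are usually the next place to look, for example:

$ oc logs deployment/cluster-logging-operator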
View the status of the logging replica sets:

Get the name of a replica set:

$ oc get replicaset

Example output
NAME                                      DESIRED   CURRENT   READY   AGE
cluster-logging-operator-574b8987df       1         1         1       159m
elasticsearch-cdm-uhr537yu-1-6869694fb    1         1         1       157m
elasticsearch-cdm-uhr537yu-2-857b6d676f   1         1         1       156m
elasticsearch-cdm-uhr537yu-3-5b6fdd8cfd   1         1         1       155m
kibana-5bd5544f87                         1         1         1       157m
Get the status of the replica set:

$ oc describe replicaset cluster-logging-operator-574b8987df

Example output
Name:           cluster-logging-operator-574b8987df

....

Replicas:       1 current / 1 desired
Pods Status:    1 Running / 0 Waiting / 0 Succeeded / 0 Failed

....

Events:
  Type    Reason            Age   From                   Message
  ----    ------            ----  ----                   -------
  Normal  SuccessfulCreate  66m   replicaset-controller  Created pod: cluster-logging-operator-574b8987df-qjhqv
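From here you can drill down to the individual pod; the pod name comes from the Created pod event in the example output above:

$ oc describe pod cluster-logging-operator-574b8987df-qjhqv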