8.2. Debugging metering
Debugging metering is much easier when you interact directly with the various components. The sections below detail how you can connect to and query Presto and Hive, as well as view the dashboards of the Hive and HDFS components.
All of the commands in this section assume you have installed metering through OperatorHub in the openshift-metering namespace.
8.2.1. Get reporting operator logs
Use the following command to follow the logs of the reporting-operator:
$ oc -n openshift-metering logs -f "$(oc -n openshift-metering get pods -l app=reporting-operator -o name | cut -c 5-)" -c reporting-operator
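If the full stream is too noisy, you can filter it on the client side. For example, the following variant (a simple sketch that assumes error messages contain the word "error") surfaces only error lines from the current logs:

$ oc -n openshift-metering logs "$(oc -n openshift-metering get pods -l app=reporting-operator -o name | cut -c 5-)" -c reporting-operator | grep -i error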
8.2.2. Query Presto using presto-cli
The following command opens an interactive presto-cli session where you can query Presto. This session runs in the same container as Presto and launches an additional Java instance, which can cause the pod to hit its memory limits. If this occurs, you should increase the memory request and limits of the Presto pod.
By default, Presto is configured to communicate using TLS. You must use the following command to run Presto queries:
$ oc -n openshift-metering exec -it "$(oc -n openshift-metering get pods -l app=presto,presto=coordinator -o name | cut -d/ -f2)" \
  -- /usr/local/bin/presto-cli --server https://presto:8080 --catalog hive --schema default --user root --keystore-path /opt/presto/tls/keystore.pem
Once you run this command, a prompt appears where you can run queries. Use the show tables from metering; query to view the list of tables:
presto:default> show tables from metering;
Example output
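From the same prompt, you can inspect any of the tables that show tables returns. The table name below is a placeholder; substitute one of the names from your own output:

presto:default> describe metering.<table_name>;
presto:default> select * from metering.<table_name> limit 10;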
8.2.3. Query Hive using beeline
The following command opens an interactive beeline session where you can query Hive. This session runs in the same container as Hive and launches an additional Java instance, which can cause the pod to hit its memory limits. If this occurs, you should increase the memory request and limits of the Hive pod.
$ oc -n openshift-metering exec -it $(oc -n openshift-metering get pods -l app=hive,hive=server -o name | cut -d/ -f2) \
  -c hiveserver2 -- beeline -u 'jdbc:hive2://127.0.0.1:10000/default;auth=noSasl'
Once you run this command, a prompt appears where you can run queries. Use the show tables from metering; query to view the list of tables:
0: jdbc:hive2://127.0.0.1:10000/default> show tables from metering;
Example output
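As with Presto, you can inspect individual tables from the same prompt. The table name is again a placeholder for one of the names in your output:

0: jdbc:hive2://127.0.0.1:10000/default> describe formatted metering.<table_name>;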
8.2.4. Port-forward to the Hive web UI
Run the following command to port-forward to the Hive web UI:
$ oc -n openshift-metering port-forward hive-server-0 10002
You can now open http://127.0.0.1:10002 in your browser window to view the Hive web interface.
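If you want to confirm that the forwarded port is serving traffic before switching to a browser, a plain HTTP request against it should return the UI's HTML. This is a quick sanity check, not part of the documented procedure:

$ curl -s http://127.0.0.1:10002/ | head -n 5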
8.2.5. Port-forward to HDFS
Run the following command to port-forward to the HDFS namenode:
$ oc -n openshift-metering port-forward hdfs-namenode-0 9870
You can now open http://127.0.0.1:9870 in your browser window to view the HDFS web interface.
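The same forwarded port also exposes the namenode's standard Hadoop HTTP endpoints, which can be useful when you only need metrics rather than the full UI. This assumes the stock Hadoop web server, which serves JMX data at /jmx:

$ curl -s http://127.0.0.1:9870/jmx | head -n 20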
Run the following command to port-forward to the first HDFS datanode:
$ oc -n openshift-metering port-forward hdfs-datanode-0 9864
To check other datanodes, replace hdfs-datanode-0 with the pod you want to view information on.
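To see which datanode pods exist in your installation, you can list them by name (a simple filter that assumes the default hdfs-datanode-N naming):

$ oc -n openshift-metering get pods -o name | grep hdfs-datanode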
8.2.6. Metering Ansible Operator
Metering uses the Ansible Operator to watch and reconcile resources in a cluster environment. When debugging a failed metering installation, it can be helpful to view the Ansible logs or the status of your MeteringConfig custom resource.
8.2.6.1. Accessing Ansible logs
In the default installation, the Metering Operator is deployed as a pod. In this case, you can check the logs of the Ansible container within this pod:
$ oc -n openshift-metering logs $(oc -n openshift-metering get pods -l app=metering-operator -o name | cut -d/ -f2) -c ansible
Alternatively, you can view the logs of the Operator container (replace -c ansible with -c operator) for condensed output.
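For example, the equivalent command for the Operator container is:

$ oc -n openshift-metering logs $(oc -n openshift-metering get pods -l app=metering-operator -o name | cut -d/ -f2) -c operator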
8.2.6.2. Checking the MeteringConfig Status
It can be helpful to view the .status field of your MeteringConfig custom resource to debug any recent failures. The following command shows status messages with type Invalid:
$ oc -n openshift-metering get meteringconfig operator-metering -o=jsonpath='{.status.conditions[?(@.type=="Invalid")].message}'
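If that returns nothing, it can still be worth dumping every condition in the status block; the same jsonpath mechanism works without the type filter:

$ oc -n openshift-metering get meteringconfig operator-metering -o=jsonpath='{.status.conditions}'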
8.2.6.3. Checking MeteringConfig Events
Check events that the Metering Operator is generating. This can be helpful during installation or upgrade to debug any resource failures. Sort events by the last timestamp:
$ oc -n openshift-metering get events --field-selector involvedObject.kind=MeteringConfig --sort-by='.lastTimestamp'
Example output with latest changes in the MeteringConfig resources
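To follow new events as the Operator continues to reconcile, you can run the same selector in watch mode instead of sorting a one-off listing:

$ oc -n openshift-metering get events --field-selector involvedObject.kind=MeteringConfig --watch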