Metering
Configuring and using Metering in OpenShift Container Platform
Abstract
Chapter 1. About Metering
Metering is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.
For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.
1.1. Metering overview
Metering is a general purpose data analysis tool that enables you to write reports to process data from different data sources. As a cluster administrator, you can use metering to analyze what is happening in your cluster. You can either write your own, or use predefined SQL queries to define how you want to process data from the different data sources you have available.
Metering focuses primarily on in-cluster metric data using Prometheus as a default data source, enabling users of metering to do reporting on pods, namespaces, and most other Kubernetes resources.
You can install metering on OpenShift Container Platform 4.x clusters and above.
1.1.1. Installing metering
You can install metering using the CLI and the web console on OpenShift Container Platform 4.x and above. To learn more, see installing metering.
1.1.2. Upgrading metering
You can upgrade metering by updating the Metering Operator subscription. Review the following tasks:
- The MeteringConfig custom resource specifies all the configuration details for your metering installation. When you first install the metering stack, a default MeteringConfig custom resource is generated. Use the examples in the documentation to modify this default file.
- A Report custom resource provides a method to manage periodic Extract Transform and Load (ETL) jobs using SQL queries. Reports are composed from other metering resources, such as ReportQuery resources that provide the actual SQL query to run, and ReportDataSource resources that define the data available to the ReportQuery and Report resources.
1.1.3. Using metering
You can use metering for writing reports and viewing report results. To learn more, see examples of using metering.
1.1.4. Troubleshooting metering
You can use the following sections to troubleshoot specific issues with metering.
- Not enough compute resources
- StorageClass resource not configured
- Secret not configured correctly
1.1.5. Debugging metering
You can use the following sections to debug specific issues with metering.
- Get reporting Operator logs
- Query Presto using presto-cli
- Query Hive using beeline
- Port-forward to the Hive web UI
- Port-forward to HDFS
- Metering Ansible Operator
1.1.6. Uninstalling metering
You can remove and clean metering resources from your OpenShift Container Platform cluster. To learn more, see uninstalling metering.
1.1.7. Metering resources
Metering has many resources which can be used to manage the deployment and installation of metering, as well as the reporting functionality metering provides.
Metering is managed using the following custom resource definitions (CRDs):
| MeteringConfig | Configures the metering stack for deployment. Contains customizations and configuration options to control each component that makes up the metering stack. |
| Report | Controls what query to use, when, and how often the query should be run, and where to store the results. |
| ReportQuery | Contains the SQL queries used to perform analysis on the data contained within ReportDataSource resources. |
| ReportDataSource | Controls the data available to ReportQuery and Report resources. |
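To illustrate how these CRDs fit together, the following is a minimal sketch of a Report custom resource. The report name, query name, and schedule are illustrative; the ReportQuery names available depend on your installation.

```yaml
apiVersion: metering.openshift.io/v1
kind: Report
metadata:
  name: namespace-cpu-request-daily   # illustrative name
  namespace: openshift-metering
spec:
  query: "namespace-cpu-request"      # a ReportQuery provided by metering
  schedule:
    period: "daily"                   # run the ETL job once per day
```

The Report references a ReportQuery by name, and that query in turn reads from the ReportDataSource resources that metering imports data into.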
Chapter 2. Installing metering
Metering is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.
For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.
Review the following sections before installing metering into your cluster.
To get started installing metering, first install the Metering Operator from OperatorHub. Next, configure your instance of metering by creating a MeteringConfig custom resource. Installing the Metering Operator creates a default MeteringConfig resource that you can modify using the examples in the documentation. After creating your MeteringConfig resource, install the metering stack. Last, verify your installation.
2.1. Prerequisites
Metering requires the following components:
- A StorageClass resource for dynamic volume provisioning. Metering supports a number of different storage solutions.
- 4GB memory and 4 CPU cores of available cluster capacity, and at least one node with 2 CPU cores and 2GB memory capacity available. The minimum resources needed for the largest single pod installed by metering are 2GB of memory and 2 CPU cores.
- Memory and CPU consumption may often be lower, but will spike when running reports, or when collecting data for larger clusters.
2.2. Installing the Metering Operator
You can install metering by deploying the Metering Operator. The Metering Operator creates and manages the components of the metering stack.
You cannot create a project starting with openshift- using the oc new-project command.

If the Metering Operator is installed using a namespace other than openshift-metering, the metering reports are only viewable using the CLI. It is strongly suggested throughout the installation steps to use the openshift-metering namespace.
2.2.1. Installing metering using the web console
You can use the OpenShift Container Platform web console to install the Metering Operator.
Procedure
- Create a namespace object YAML file for the Metering Operator with the oc create -f <file-name>.yaml command. You must use the CLI to create the namespace. For example, metering-namespace.yaml:

  apiVersion: v1
  kind: Namespace
  metadata:
    name: openshift-metering
    annotations:
      openshift.io/node-selector: ""
    labels:
      openshift.io/cluster-monitoring: "true"
- In the OpenShift Container Platform web console, click Operators → OperatorHub. Filter for metering to find the Metering Operator.
- Click the Metering card, review the package description, and then click Install.
- Select an Update Channel, Installation Mode, and Approval Strategy.
- Click Install.
Verify that the Metering Operator is installed by switching to the Operators → Installed Operators page. The Metering Operator has a Status of Succeeded when the installation is complete.
Note: It might take several minutes for the Metering Operator to appear.
- Click Metering on the Installed Operators page for Operator Details. From the Details page you can create different resources related to metering.
To complete the metering installation, create a MeteringConfig resource to configure metering and install the components of the metering stack.
2.2.2. Installing metering using the CLI
You can use the OpenShift Container Platform CLI to install the Metering Operator.
Procedure
- Create a Namespace object YAML file for the Metering Operator. You must use the CLI to create the namespace. For example, metering-namespace.yaml:

  apiVersion: v1
  kind: Namespace
  metadata:
    name: openshift-metering
    annotations:
      openshift.io/node-selector: ""
    labels:
      openshift.io/cluster-monitoring: "true"

- Create the Namespace object:

  $ oc create -f <file-name>.yaml

  For example:

  $ oc create -f openshift-metering.yaml

- Create the OperatorGroup object YAML file. For example, metering-og.yaml:

  apiVersion: operators.coreos.com/v1
  kind: OperatorGroup
  metadata:
    name: openshift-metering
    namespace: openshift-metering
  spec:
    targetNamespaces:
    - openshift-metering

- Create a Subscription object YAML file to subscribe the namespace to the Metering Operator. This object targets the most recently released version in the redhat-operators catalog source. For example, metering-sub.yaml:

  apiVersion: operators.coreos.com/v1alpha1
  kind: Subscription
  metadata:
    name: metering-ocp 1
    namespace: openshift-metering 2
  spec:
    channel: "4.8" 3
    source: "redhat-operators" 4
    sourceNamespace: "openshift-marketplace"
    name: "metering-ocp"
    installPlanApproval: "Automatic" 5

  1 The name is arbitrary.
  2 You must specify the openshift-metering namespace.
  3 Specify 4.8 as the channel.
  4 Specify the redhat-operators catalog source, which contains the metering-ocp package manifests. If your OpenShift Container Platform is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object you created when you configured the Operator Lifecycle Manager (OLM).
  5 Specify "Automatic" install plan approval.
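The manifests above are then applied with oc create. A minimal sketch that writes the Subscription manifest from this section to disk and sanity-checks it before applying; the file name metering-sub.yaml matches the example above, and the final oc command (commented out) requires cluster access:

```shell
# Write the Subscription manifest from the example above to disk.
cat > metering-sub.yaml <<'EOF'
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: metering-ocp
  namespace: openshift-metering
spec:
  channel: "4.8"
  source: "redhat-operators"
  sourceNamespace: "openshift-marketplace"
  name: "metering-ocp"
  installPlanApproval: "Automatic"
EOF

# Sanity-check the target namespace before applying.
grep 'namespace:' metering-sub.yaml

# Apply against a cluster (requires oc and cluster access):
# oc create -f metering-sub.yaml
```

The same pattern applies to the Namespace and OperatorGroup manifests; create them in that order so the Subscription lands in an existing, Operator-group-scoped namespace.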
2.3. Installing the metering stack
After adding the Metering Operator to your cluster, you can install the components of metering by installing the metering stack.
2.4. Prerequisites
- Review the configuration options.
- Create a MeteringConfig resource. You can begin the following process to generate a default MeteringConfig resource, then use the examples in the documentation to modify this default file for your specific installation:
  - For configuration options, review About configuring metering.
  - At a minimum, you need to configure persistent storage and configure the Hive metastore.

Important: There can only be one MeteringConfig resource in the openshift-metering namespace. Any other configuration is not supported.
Procedure
- From the web console, ensure you are on the Operator Details page for the Metering Operator in the openshift-metering project. You can navigate to this page by clicking Operators → Installed Operators, then selecting the Metering Operator.
- Under Provided APIs, click Create Instance on the Metering Configuration card. This opens a YAML editor with the default MeteringConfig resource file where you can define your configuration.

  Note: For example configuration files and all supported configuration options, review the configuring metering documentation.

- Enter your MeteringConfig resource into the YAML editor and click Create.
The MeteringConfig resource begins to deploy the components of the metering stack.
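The same resource can also be created from the CLI instead of the web console. A minimal sketch, assuming the S3 storage configuration described in the configuring metering chapter; the bucket name and secret name are placeholders:

```yaml
apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
  namespace: openshift-metering
spec:
  storage:
    type: "hive"
    hive:
      type: "s3"
      s3:
        bucket: "mybucket/data/"      # placeholder bucket and path
        region: "us-west-1"
        secretName: "my-aws-secret"   # secret containing AWS credentials
        createBucket: false           # use an existing bucket
```

Save this as a file such as metering-config.yaml and create it with oc create -f metering-config.yaml.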
2.5. Verifying the metering installation
You can verify the metering installation by performing any of the following checks:
Check the Metering Operator ClusterServiceVersion (CSV) resource for the metering version. This can be done through either the web console or CLI.

Procedure (UI)
- Navigate to Operators → Installed Operators in the openshift-metering namespace.
- Click Metering Operator.
- Click Subscription for Subscription Details.
- Check the Installed Version.
Procedure (CLI)
Check the Metering Operator CSV in the openshift-metering namespace:

$ oc --namespace openshift-metering get csv

Example output

NAME                                           DISPLAY                            VERSION                 REPLACES   PHASE
elasticsearch-operator.4.8.0-202006231303.p0   OpenShift Elasticsearch Operator   4.8.0-202006231303.p0              Succeeded
metering-operator.v4.8.0                       Metering                           4.8.0                              Succeeded
Check that all required pods in the openshift-metering namespace are created. This can be done through either the web console or CLI.

Note: Many pods rely on other components to function before they themselves can be considered ready. Some pods may restart if other pods take too long to start. This is to be expected during the Metering Operator installation.
Procedure (UI)
- Navigate to Workloads → Pods in the metering namespace and verify that pods are being created. This can take several minutes after installing the metering stack.
Procedure (CLI)
Check that all required pods in the openshift-metering namespace are created:

$ oc -n openshift-metering get pods

Example output

NAME                                  READY   STATUS    RESTARTS   AGE
hive-metastore-0                      2/2     Running   0          3m28s
hive-server-0                         3/3     Running   0          3m28s
metering-operator-68dd64cfb6-2k7d9    2/2     Running   0          5m17s
presto-coordinator-0                  2/2     Running   0          3m9s
reporting-operator-5588964bf8-x2tkn   2/2     Running   0          2m40s
Verify that the ReportDataSource resources are beginning to import data, indicated by a valid timestamp in the EARLIEST METRIC column. This might take several minutes. Filter out the "-raw" ReportDataSource resources, which do not import data:

$ oc get reportdatasources -n openshift-metering | grep -v raw

Example output

NAME                                     EARLIEST METRIC        NEWEST METRIC          IMPORT START           IMPORT END             LAST IMPORT TIME       AGE
node-allocatable-cpu-cores               2019-08-05T16:52:00Z   2019-08-05T18:52:00Z   2019-08-05T16:52:00Z   2019-08-05T18:52:00Z   2019-08-05T18:54:45Z   9m50s
node-allocatable-memory-bytes            2019-08-05T16:51:00Z   2019-08-05T18:51:00Z   2019-08-05T16:51:00Z   2019-08-05T18:51:00Z   2019-08-05T18:54:45Z   9m50s
node-capacity-cpu-cores                  2019-08-05T16:51:00Z   2019-08-05T18:29:00Z   2019-08-05T16:51:00Z   2019-08-05T18:29:00Z   2019-08-05T18:54:39Z   9m50s
node-capacity-memory-bytes               2019-08-05T16:52:00Z   2019-08-05T18:41:00Z   2019-08-05T16:52:00Z   2019-08-05T18:41:00Z   2019-08-05T18:54:44Z   9m50s
persistentvolumeclaim-capacity-bytes     2019-08-05T16:51:00Z   2019-08-05T18:29:00Z   2019-08-05T16:51:00Z   2019-08-05T18:29:00Z   2019-08-05T18:54:43Z   9m50s
persistentvolumeclaim-phase              2019-08-05T16:51:00Z   2019-08-05T18:29:00Z   2019-08-05T16:51:00Z   2019-08-05T18:29:00Z   2019-08-05T18:54:28Z   9m50s
persistentvolumeclaim-request-bytes      2019-08-05T16:52:00Z   2019-08-05T18:30:00Z   2019-08-05T16:52:00Z   2019-08-05T18:30:00Z   2019-08-05T18:54:34Z   9m50s
persistentvolumeclaim-usage-bytes        2019-08-05T16:52:00Z   2019-08-05T18:30:00Z   2019-08-05T16:52:00Z   2019-08-05T18:30:00Z   2019-08-05T18:54:36Z   9m49s
pod-limit-cpu-cores                      2019-08-05T16:52:00Z   2019-08-05T18:30:00Z   2019-08-05T16:52:00Z   2019-08-05T18:30:00Z   2019-08-05T18:54:26Z   9m49s
pod-limit-memory-bytes                   2019-08-05T16:51:00Z   2019-08-05T18:40:00Z   2019-08-05T16:51:00Z   2019-08-05T18:40:00Z   2019-08-05T18:54:30Z   9m49s
pod-persistentvolumeclaim-request-info   2019-08-05T16:51:00Z   2019-08-05T18:40:00Z   2019-08-05T16:51:00Z   2019-08-05T18:40:00Z   2019-08-05T18:54:37Z   9m49s
pod-request-cpu-cores                    2019-08-05T16:51:00Z   2019-08-05T18:18:00Z   2019-08-05T16:51:00Z   2019-08-05T18:18:00Z   2019-08-05T18:54:24Z   9m49s
pod-request-memory-bytes                 2019-08-05T16:52:00Z   2019-08-05T18:08:00Z   2019-08-05T16:52:00Z   2019-08-05T18:08:00Z   2019-08-05T18:54:32Z   9m49s
pod-usage-cpu-cores                      2019-08-05T16:52:00Z   2019-08-05T17:57:00Z   2019-08-05T16:52:00Z   2019-08-05T17:57:00Z   2019-08-05T18:54:10Z   9m49s
pod-usage-memory-bytes                   2019-08-05T16:52:00Z   2019-08-05T18:08:00Z   2019-08-05T16:52:00Z   2019-08-05T18:08:00Z   2019-08-05T18:54:20Z   9m49s
After all pods are ready and you have verified that data is being imported, you can begin using metering to collect data and report on your cluster.
Chapter 3. Upgrading metering
Metering is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.
For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.
You can upgrade metering to 4.8 by updating the Metering Operator subscription.
3.1. Prerequisites
- The cluster is updated to 4.8.
The Metering Operator is installed from OperatorHub.
Note: You must upgrade the Metering Operator to 4.8 manually. Metering does not upgrade automatically if you selected the "Automatic" Approval Strategy in a previous installation.
- The MeteringConfig custom resource is configured.
- The metering stack is installed.
- Ensure that metering status is healthy by checking that all pods are ready.
Potential data loss can occur if you modify your metering storage configuration after installing or upgrading metering.
Procedure
- Click Operators → Installed Operators from the web console.
- Select the openshift-metering project.
- Click Metering Operator.
- Click Subscription → Channel.
- In the Change Subscription Update Channel window, select 4.8 and click Save.

  Note: Wait several seconds to allow the subscription to update before proceeding to the next step.
Click Operators → Installed Operators.
The Metering Operator is shown as 4.8. For example:
Metering 4.8.0-202107012112.p0 provided by Red Hat, Inc
Verification
You can verify the metering upgrade by performing any of the following checks:
Check the Metering Operator cluster service version (CSV) for the new metering version. This can be done through either the web console or CLI.
Procedure (UI)
- Navigate to Operators → Installed Operators in the metering namespace.
- Click Metering Operator.
- Click Subscription for Subscription Details.
- Check the Installed Version for the upgraded metering version. The Starting Version shows the metering version prior to upgrading.
Procedure (CLI)
Check the Metering Operator CSV:
$ oc get csv | grep metering

Example output for metering upgrade from 4.7 to 4.8

NAME                                      DISPLAY    VERSION                 REPLACES                                  PHASE
metering-operator.4.8.0-202107012112.p0   Metering   4.8.0-202107012112.p0   metering-operator.4.7.0-202007012112.p0   Succeeded
Check that all required pods in the openshift-metering namespace are created. This can be done through either the web console or CLI.

Note: Many pods rely on other components to function before they themselves can be considered ready. Some pods may restart if other pods take too long to start. This is to be expected during the Metering Operator upgrade.
Procedure (UI)
- Navigate to Workloads → Pods in the metering namespace and verify that pods are being created. This can take several minutes after upgrading the metering stack.
Procedure (CLI)
Check that all required pods in the openshift-metering namespace are created:

$ oc -n openshift-metering get pods

Example output

NAME                                  READY   STATUS    RESTARTS   AGE
hive-metastore-0                      2/2     Running   0          3m28s
hive-server-0                         3/3     Running   0          3m28s
metering-operator-68dd64cfb6-2k7d9    2/2     Running   0          5m17s
presto-coordinator-0                  2/2     Running   0          3m9s
reporting-operator-5588964bf8-x2tkn   2/2     Running   0          2m40s
Verify that the ReportDataSource resources are importing new data, indicated by a valid timestamp in the NEWEST METRIC column. This might take several minutes. Filter out the "-raw" ReportDataSource resources, which do not import data:

$ oc get reportdatasources -n openshift-metering | grep -v raw

Timestamps in the NEWEST METRIC column indicate that ReportDataSource resources are beginning to import new data.

Example output

NAME                                     EARLIEST METRIC        NEWEST METRIC          IMPORT START           IMPORT END             LAST IMPORT TIME       AGE
node-allocatable-cpu-cores               2021-07-01T21:10:00Z   2021-07-02T19:52:00Z   2021-07-01T19:11:00Z   2021-07-02T19:52:00Z   2021-07-02T19:56:44Z   23h
node-allocatable-memory-bytes            2021-07-01T21:10:00Z   2021-07-02T19:52:00Z   2021-07-01T19:11:00Z   2021-07-02T19:52:00Z   2021-07-02T19:52:07Z   23h
node-capacity-cpu-cores                  2021-07-01T21:10:00Z   2021-07-02T19:52:00Z   2021-07-01T19:11:00Z   2021-07-02T19:52:00Z   2021-07-02T19:56:52Z   23h
node-capacity-memory-bytes               2021-07-01T21:10:00Z   2021-07-02T19:57:00Z   2021-07-01T19:10:00Z   2021-07-02T19:57:00Z   2021-07-02T19:57:03Z   23h
persistentvolumeclaim-capacity-bytes     2021-07-01T21:09:00Z   2021-07-02T19:52:00Z   2021-07-01T19:11:00Z   2021-07-02T19:52:00Z   2021-07-02T19:56:46Z   23h
persistentvolumeclaim-phase              2021-07-01T21:10:00Z   2021-07-02T19:52:00Z   2021-07-01T19:11:00Z   2021-07-02T19:52:00Z   2021-07-02T19:52:36Z   23h
persistentvolumeclaim-request-bytes      2021-07-01T21:10:00Z   2021-07-02T19:57:00Z   2021-07-01T19:10:00Z   2021-07-02T19:57:00Z   2021-07-02T19:57:03Z   23h
persistentvolumeclaim-usage-bytes        2021-07-01T21:09:00Z   2021-07-02T19:52:00Z   2021-07-01T19:11:00Z   2021-07-02T19:52:00Z   2021-07-02T19:52:02Z   23h
pod-limit-cpu-cores                      2021-07-01T21:10:00Z   2021-07-02T19:57:00Z   2021-07-01T19:10:00Z   2021-07-02T19:57:00Z   2021-07-02T19:57:02Z   23h
pod-limit-memory-bytes                   2021-07-01T21:10:00Z   2021-07-02T19:58:00Z   2021-07-01T19:11:00Z   2021-07-02T19:58:00Z   2021-07-02T19:59:06Z   23h
pod-persistentvolumeclaim-request-info   2021-07-01T21:10:00Z   2021-07-02T19:52:00Z   2021-07-01T19:11:00Z   2021-07-02T19:52:00Z   2021-07-02T19:52:07Z   23h
pod-request-cpu-cores                    2021-07-01T21:10:00Z   2021-07-02T19:58:00Z   2021-07-01T19:11:00Z   2021-07-02T19:58:00Z   2021-07-02T19:58:57Z   23h
pod-request-memory-bytes                 2021-07-01T21:10:00Z   2021-07-02T19:52:00Z   2021-07-01T19:11:00Z   2021-07-02T19:52:00Z   2021-07-02T19:55:32Z   23h
pod-usage-cpu-cores                      2021-07-01T21:09:00Z   2021-07-02T19:52:00Z   2021-07-01T19:11:00Z   2021-07-02T19:52:00Z   2021-07-02T19:54:55Z   23h
pod-usage-memory-bytes                   2021-07-01T21:08:00Z   2021-07-02T19:52:00Z   2021-07-01T19:11:00Z   2021-07-02T19:52:00Z   2021-07-02T19:55:00Z   23h
report-ns-pvc-usage                                                                                                                                         5h36m
report-ns-pvc-usage-hourly
After all pods are ready and you have verified that new data is being imported, metering continues to collect data and report on your cluster. Review a previously scheduled report or create a run-once metering report to confirm the metering upgrade.
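A run-once report like the one mentioned above can be sketched as the following Report custom resource. The report name and query name are illustrative, and the reportingStart and reportingEnd values bound the period to analyze:

```yaml
apiVersion: metering.openshift.io/v1
kind: Report
metadata:
  name: upgrade-check                 # illustrative name
  namespace: openshift-metering
spec:
  query: "namespace-cpu-request"      # a ReportQuery provided by metering
  reportingStart: "2021-07-01T00:00:00Z"
  reportingEnd: "2021-07-02T00:00:00Z"
  runImmediately: true                # run once, immediately, for the period above
```

Because runImmediately is set and no schedule is given, the report runs a single time as soon as it is created.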
Chapter 4. Configuring metering
4.1. About configuring metering
Metering is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.
For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.
The MeteringConfig custom resource specifies all the configuration details for your metering installation. When you first install the metering stack, a default MeteringConfig custom resource is generated. Use the examples in the documentation to modify this default file. Keep the following key points in mind:
- At a minimum, you need to configure persistent storage and configure the Hive metastore.
- Most default configuration settings work, but larger deployments or highly customized deployments should review all configuration options carefully.
- Some configuration options cannot be modified after installation.

For configuration options that can be modified after installation, make the changes in your MeteringConfig custom resource and reapply the file.
4.2. Common configuration options
Metering is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.
For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.
4.2.1. Resource requests and limits
You can adjust the CPU, memory, or storage resource requests and limits for pods and volumes. The default-resource-limits.yaml file below provides an example of setting resource limits and requests for each metering component:
apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  reporting-operator:
    spec:
      resources:
        limits:
          cpu: 1
          memory: 500Mi
        requests:
          cpu: 500m
          memory: 100Mi
  presto:
    spec:
      coordinator:
        resources:
          limits:
            cpu: 4
            memory: 4Gi
          requests:
            cpu: 2
            memory: 2Gi
      worker:
        replicas: 0
        resources:
          limits:
            cpu: 8
            memory: 8Gi
          requests:
            cpu: 4
            memory: 2Gi
  hive:
    spec:
      metastore:
        resources:
          limits:
            cpu: 4
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 650Mi
        storage:
          class: null
          create: true
          size: 5Gi
      server:
        resources:
          limits:
            cpu: 1
            memory: 1Gi
          requests:
            cpu: 500m
            memory: 500Mi
4.2.2. Node selectors
You can run the metering components on specific sets of nodes. Set the nodeSelector on a metering component to control where that component is scheduled. The node-selectors.yaml file below provides an example of setting node selectors for each metering component:

Note: Add the openshift.io/node-selector: "" namespace annotation to the metering namespace YAML file before configuring specific node selectors for the operand pods. Specify "" as the annotation value.
apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  reporting-operator:
    spec:
      nodeSelector:
        "node-role.kubernetes.io/infra": ""
  presto:
    spec:
      coordinator:
        nodeSelector:
          "node-role.kubernetes.io/infra": ""
      worker:
        nodeSelector:
          "node-role.kubernetes.io/infra": ""
  hive:
    spec:
      metastore:
        nodeSelector:
          "node-role.kubernetes.io/infra": ""
      server:
        nodeSelector:
          "node-role.kubernetes.io/infra": ""
When the openshift.io/node-selector annotation is added to the metering namespace, it overrides the cluster-wide default node selector that is set in the spec.defaultNodeSelector field of the Scheduler object.
Verification
You can verify the metering node selectors by performing any of the following checks:
Verify that all pods for metering are correctly scheduled on the IP of the node that is configured in the MeteringConfig custom resource:

Check all pods in the openshift-metering namespace:

$ oc --namespace openshift-metering get pods -o wide

The output shows the NODE and corresponding IP for each pod running in the openshift-metering namespace.

Example output

NAME                                  READY   STATUS    RESTARTS   AGE     IP            NODE                                         NOMINATED NODE   READINESS GATES
hive-metastore-0                      1/2     Running   0          4m33s   10.129.2.26   ip-10-0-210-167.us-east-2.compute.internal   <none>           <none>
hive-server-0                         2/3     Running   0          4m21s   10.128.2.26   ip-10-0-150-175.us-east-2.compute.internal   <none>           <none>
metering-operator-964b4fb55-4p699     2/2     Running   0          7h30m   10.131.0.33   ip-10-0-189-6.us-east-2.compute.internal     <none>           <none>
nfs-server                            1/1     Running   0          7h30m   10.129.2.24   ip-10-0-210-167.us-east-2.compute.internal   <none>           <none>
presto-coordinator-0                  2/2     Running   0          4m8s    10.131.0.35   ip-10-0-189-6.us-east-2.compute.internal     <none>           <none>
reporting-operator-869b854c78-8g2x5   1/2     Running   0          7h27m   10.128.2.25   ip-10-0-150-175.us-east-2.compute.internal   <none>           <none>

Compare the nodes in the openshift-metering namespace to each node NAME in your cluster:

$ oc get nodes

Example output

NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-0-147-106.us-east-2.compute.internal   Ready    master   14h   v1.21.0+6025c28
ip-10-0-150-175.us-east-2.compute.internal   Ready    worker   14h   v1.21.0+6025c28
ip-10-0-175-23.us-east-2.compute.internal    Ready    master   14h   v1.21.0+6025c28
ip-10-0-189-6.us-east-2.compute.internal     Ready    worker   14h   v1.21.0+6025c28
ip-10-0-205-158.us-east-2.compute.internal   Ready    master   14h   v1.21.0+6025c28
ip-10-0-210-167.us-east-2.compute.internal   Ready    worker   14h   v1.21.0+6025c28
Verify that the node selector configuration in the MeteringConfig custom resource does not interfere with the cluster-wide node selector configuration such that no metering operand pods are scheduled.

Check the cluster-wide Scheduler object for the spec.defaultNodeSelector field, which shows where pods are scheduled by default:

$ oc get schedulers.config.openshift.io cluster -o yaml
4.3. Configuring persistent storage
Metering is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.
For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.
Metering requires persistent storage to persist data collected by the Metering Operator and to store the results of reports. A number of different storage providers and storage formats are supported. Select your storage provider and modify the example configuration files to configure persistent storage for your metering installation.
4.3.1. Storing data in Amazon S3
Metering can use an existing Amazon S3 bucket or create a bucket for storage.
Metering does not manage or delete any S3 bucket data. You must manually clean up S3 buckets that are used to store metering data.
Procedure
Edit the spec.storage section in the s3-storage.yaml file:

Example s3-storage.yaml file

apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  storage:
    type: "hive"
    hive:
      type: "s3"
      s3:
        bucket: "bucketname/path/" 1
        region: "us-west-1" 2
        secretName: "my-aws-secret" 3
        # Set to false if you want to provide an existing bucket, instead of
        # having metering create the bucket on your behalf.
        createBucket: true 4

1 Specify the name of the bucket where you would like to store your data. Optional: Specify the path within the bucket.
2 Specify the region of your bucket.
3 The name of a secret in the metering namespace containing the AWS credentials in the data.aws-access-key-id and data.aws-secret-access-key fields. See the example Secret object below for more details.
4 Set this field to false if you want to provide an existing S3 bucket, or if you do not want to provide IAM credentials that have CreateBucket permissions.

Use the following Secret object as a template:

Example AWS Secret object

apiVersion: v1
kind: Secret
metadata:
  name: my-aws-secret
data:
  aws-access-key-id: "dGVzdAo="
  aws-secret-access-key: "c2VjcmV0Cg=="

Note: The values of the aws-access-key-id and aws-secret-access-key must be base64 encoded.

Create the secret:

$ oc create secret -n openshift-metering generic my-aws-secret \
    --from-literal=aws-access-key-id=my-access-key \
    --from-literal=aws-secret-access-key=my-secret-key

Note: This command automatically base64 encodes your aws-access-key-id and aws-secret-access-key values.
The aws-access-key-id and aws-secret-access-key credentials must have read and write access to the bucket. For an example of an IAM policy granting the required permissions, see the aws/read-write.json file below.
Example aws/read-write.json file
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "1",
"Effect": "Allow",
"Action": [
"s3:AbortMultipartUpload",
"s3:DeleteObject",
"s3:GetObject",
"s3:HeadBucket",
"s3:ListBucket",
"s3:ListMultipartUploadParts",
"s3:PutObject"
],
"Resource": [
"arn:aws:s3:::operator-metering-data/*",
"arn:aws:s3:::operator-metering-data"
]
}
]
}
If spec.storage.hive.s3.createBucket is set to true or unset in your s3-storage.yaml file, then you should use the aws/read-write-create.json file below, which contains permissions for creating and deleting buckets.
Example aws/read-write-create.json file
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "1",
"Effect": "Allow",
"Action": [
"s3:AbortMultipartUpload",
"s3:DeleteObject",
"s3:GetObject",
"s3:HeadBucket",
"s3:ListBucket",
"s3:CreateBucket",
"s3:DeleteBucket",
"s3:ListMultipartUploadParts",
"s3:PutObject"
],
"Resource": [
"arn:aws:s3:::operator-metering-data/*",
"arn:aws:s3:::operator-metering-data"
]
}
]
}
4.3.2. Storing data in S3-compatible storage
You can use S3-compatible storage such as Noobaa.
Procedure
Edit the spec.storage section in the s3-compatible-storage.yaml file:

Example s3-compatible-storage.yaml file

apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  storage:
    type: "hive"
    hive:
      type: "s3Compatible"
      s3Compatible:
        bucket: "bucketname"
        endpoint: "http://example:port-number"
        secretName: "my-aws-secret"

Use the following Secret object as a template:

Example S3-compatible Secret object

apiVersion: v1
kind: Secret
metadata:
  name: my-aws-secret
data:
  aws-access-key-id: "dGVzdAo="
  aws-secret-access-key: "c2VjcmV0Cg=="
4.3.3. Storing data in Microsoft Azure
To store data in Azure blob storage, you must use an existing container.
Procedure
Edit the spec.storage section in the azure-blob-storage.yaml file:

Example azure-blob-storage.yaml file

apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  storage:
    type: "hive"
    hive:
      type: "azure"
      azure:
        container: "bucket1"
        secretName: "my-azure-secret"
        rootDirectory: "/testDir"

Use the following Secret object as a template:

Example Azure Secret object

apiVersion: v1
kind: Secret
metadata:
  name: my-azure-secret
data:
  azure-storage-account-name: "dGVzdAo="
  azure-secret-access-key: "c2VjcmV0Cg=="

Create the secret:

$ oc create secret -n openshift-metering generic my-azure-secret \
    --from-literal=azure-storage-account-name=my-storage-account-name \
    --from-literal=azure-secret-access-key=my-secret-key
4.3.4. Storing data in Google Cloud Storage
To store your data in Google Cloud Storage, you must use an existing bucket.
Procedure
Edit the spec.storage section in the gcs-storage.yaml file:

Example gcs-storage.yaml file

apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  storage:
    type: "hive"
    hive:
      type: "gcs"
      gcs:
        bucket: "metering-gcs/test1" 1
        secretName: "my-gcs-secret" 2

Use the following Secret object as a template:

Example Google Cloud Storage Secret object

apiVersion: v1
kind: Secret
metadata:
  name: my-gcs-secret
data:
  gcs-service-account.json: "c2VjcmV0Cg=="

Create the secret:

$ oc create secret -n openshift-metering generic my-gcs-secret \
  --from-file gcs-service-account.json=/path/to/my/service-account-key.json
4.4. Configuring the Hive metastore
Metering is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.
For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.
Hive metastore is responsible for storing all the metadata about the database tables created in Presto and Hive. By default, the metastore stores this information in a local embedded Derby database in a persistent volume attached to the pod.
Generally, the default configuration of the Hive metastore works for small clusters, but you might want to improve performance, or move storage requirements out of the cluster, by using a dedicated SQL database for storing the Hive metastore data.
4.4.1. Configuring persistent volumes
By default, Hive requires one persistent volume to operate: the hive-metastore-db-data persistent volume claim.

To install the Hive metastore, you must either enable dynamic volume provisioning in a storage class, manually pre-create a persistent volume of the correct size, or use a pre-existing MySQL or PostgreSQL database.
4.4.1.1. Configuring the storage class for the Hive metastore
To configure and specify a storage class for the hive-metastore-db-data persistent volume claim, specify the storage class in your MeteringConfig custom resource. An example storage section with the class field is included in the metastore-storage.yaml file below.
apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  hive:
    spec:
      metastore:
        storage:
          # Default is null, which means using the default storage class if it exists.
          # If you wish to use a different storage class, specify it here
          # class: "null" 1
          size: "5Gi"
- 1
- Uncomment this line and replace null with the name of the storage class to use. Leaving the value null will cause metering to use the default storage class for the cluster.
4.4.1.2. Configuring the volume size for the Hive metastore
Use the metastore-storage.yaml file below as a template to configure the volume size for the Hive metastore.
apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  hive:
    spec:
      metastore:
        storage:
          # Default is null, which means using the default storage class if it exists.
          # If you wish to use a different storage class, specify it here
          # class: "null"
          size: "5Gi" 1
- 1
- Replace the value for size with your desired capacity. The example file shows "5Gi".
4.4.2. Using MySQL or PostgreSQL for the Hive metastore
The default installation of metering configures Hive to use an embedded Java database called Derby. This is unsuited for larger environments and can be replaced with either a MySQL or PostgreSQL database. Use the following example configuration files if your deployment requires a MySQL or PostgreSQL database for Hive.
There are three configuration options you can use to control the database that is used by Hive metastore:
- url
- driver
- secretName
Create your MySQL or Postgres instance with a user name and password. Then create a secret by using the OpenShift CLI (oc) or a YAML file. The secretName for this secret must match the spec.hive.spec.config.db.secretName field in your MeteringConfig resource.
Procedure
Create a secret using the OpenShift CLI (oc) or by using a YAML file:

Create a secret by using the following command:

$ oc --namespace openshift-metering create secret generic <YOUR_SECRETNAME> \
  --from-literal=username=<YOUR_DATABASE_USERNAME> \
  --from-literal=password=<YOUR_DATABASE_PASSWORD>

Create a secret by using a YAML file. For example:

apiVersion: v1
kind: Secret
metadata:
  name: <YOUR_SECRETNAME> 1
data:
  username: <BASE64_ENCODED_DATABASE_USERNAME> 2
  password: <BASE64_ENCODED_DATABASE_PASSWORD> 3
Create a configuration file to use a MySQL or PostgreSQL database for Hive:
To use a MySQL database for Hive, use the example configuration file below. Metering supports configuring the internal Hive metastore to use MySQL server versions 5.6, 5.7, and 8.0.
spec:
  hive:
    spec:
      metastore:
        storage:
          create: false
      config:
        db:
          url: "jdbc:mysql://mysql.example.com:3306/hive_metastore" 1
          driver: "com.mysql.cj.jdbc.Driver"
          secretName: "REPLACEME" 2

Note: When configuring Metering to work with older MySQL server versions, such as 5.6 or 5.7, you might need to add the enabledTLSProtocols JDBC URL parameter when configuring the internal Hive metastore.

You can pass additional JDBC parameters using the spec.hive.config.url field. For more details, see the MySQL Connector/J 8.0 documentation.

To use a PostgreSQL database for Hive, use the example configuration file below:
spec:
  hive:
    spec:
      metastore:
        storage:
          create: false
      config:
        db:
          url: "jdbc:postgresql://postgresql.example.com:5432/hive_metastore"
          driver: "org.postgresql.Driver"
          username: "REPLACEME"
          password: "REPLACEME"

You can pass additional JDBC parameters using the spec.hive.config.url field. For more details, see the PostgreSQL JDBC driver documentation.
4.5. Configuring the Reporting Operator
The Reporting Operator is responsible for collecting data from Prometheus, storing the metrics in Presto, running report queries against Presto, and exposing their results via an HTTP API. Configuring the Reporting Operator is primarily done in your MeteringConfig custom resource.
4.5.1. Securing a Prometheus connection
When you install metering on OpenShift Container Platform, Prometheus is available at https://prometheus-k8s.openshift-monitoring.svc:9091/.
To secure the connection to Prometheus, the default metering installation uses the OpenShift Container Platform certificate authority (CA). If your Prometheus instance uses a different CA, you can inject the CA through a config map. You can also configure the Reporting Operator to use a specified bearer token to authenticate with Prometheus.
Procedure
Inject the CA that your Prometheus instance uses through a config map. For example:
spec:
  reporting-operator:
    spec:
      config:
        prometheus:
          certificateAuthority:
            useServiceAccountCA: false
            configMap:
              enabled: true
              create: true
              name: reporting-operator-certificate-authority-config
              filename: "internal-ca.crt"
              value: |
                -----BEGIN CERTIFICATE-----
                (snip)
                -----END CERTIFICATE-----

Alternatively, to use the system certificate authorities for publicly valid certificates, set both useServiceAccountCA and configMap.enabled to false.

Specify a bearer token to authenticate with Prometheus. For example:
spec:
  reporting-operator:
    spec:
      config:
        prometheus:
          metricsImporter:
            auth:
              useServiceAccountToken: false
              tokenSecret:
                enabled: true
                create: true
                value: "abc-123"
4.5.2. Exposing the reporting API
On OpenShift Container Platform the default metering installation automatically exposes a route, making the reporting API available. This provides the following features:
- Automatic DNS
- Automatic TLS based on the cluster CA
Also, the default installation makes it possible to use the OpenShift Container Platform service serving certificates feature to protect the reporting API with TLS. The OpenShift Container Platform OAuth proxy is deployed as a sidecar container for the Reporting Operator, which protects the reporting API with authentication.
4.5.2.1. Using OpenShift Container Platform Authentication
By default, the reporting API is secured with TLS and authentication. This is done by configuring the Reporting Operator to deploy a pod containing both the Reporting Operator’s container, and a sidecar container running OpenShift Container Platform auth-proxy.
To access the reporting API, the Metering Operator exposes a route. After that route has been installed, you can run the following command to get the route’s hostname.
$ METERING_ROUTE_HOSTNAME=$(oc -n openshift-metering get routes metering -o json | jq -r '.status.ingress[].host')
Next, set up authentication using either a service account token or basic authentication with a username and password.
4.5.2.1.1. Authenticate using a service account token
With this method, you use the token in the Reporting Operator’s service account, and pass that bearer token to the Authorization header in the following command:
$ TOKEN=$(oc -n openshift-metering serviceaccounts get-token reporting-operator)
curl -H "Authorization: Bearer $TOKEN" -k "https://$METERING_ROUTE_HOSTNAME/api/v1/reports/get?name=[Report Name]&namespace=openshift-metering&format=[Format]"
Be sure to replace the name=[Report Name] and format=[Format] parameters in the request URL. The format parameter can be csv, json, or tabular.
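If you script these requests, the query parameters should be URL-encoded. A small Python sketch (the hostname is a placeholder, not a real route) that builds the same request URL as the curl example above:

```python
from urllib.parse import urlencode

# Build the reporting API URL used in the curl example above.
# metering_route is a hypothetical hostname; substitute your route's host.
def report_url(metering_route: str, name: str, fmt: str) -> str:
    params = urlencode({"name": name, "namespace": "openshift-metering", "format": fmt})
    return f"https://{metering_route}/api/v1/reports/get?{params}"

print(report_url("metering.example.com", "namespace-cpu-request-2020", "csv"))
```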
4.5.2.1.2. Authenticate using a username and password
Metering supports configuring basic authentication using a username and password combination, which is specified in the contents of an htpasswd file. By default, a secret containing empty htpasswd data is created. You can, however, configure the reporting-operator.spec.authProxy.htpasswd.data field, and set reporting-operator.spec.authProxy.htpasswd.createSecret to true, to use this method instead. Once you have specified the above in your MeteringConfig resource, you can authenticate using the following command:
$ curl -u testuser:password123 -k "https://$METERING_ROUTE_HOSTNAME/api/v1/reports/get?name=[Report Name]&namespace=openshift-metering&format=[Format]"
Be sure to replace testuser:password123 with a valid username and password combination.
4.5.2.2. Manually Configuring Authentication
To manually configure OAuth, or to disable it in the Reporting Operator, you must set spec.tls.enabled: false in your MeteringConfig resource.
This also disables all TLS and authentication between the Reporting Operator, Presto, and Hive. You must configure these resources manually.
Authentication can be enabled by configuring the following options. Enabling authentication configures the Reporting Operator pod to run the OpenShift Container Platform auth-proxy as a sidecar container in the pod. This adjusts the ports so that the reporting API is not exposed directly, but instead is proxied via the auth-proxy sidecar container.
-
reporting-operator.spec.authProxy.enabled -
reporting-operator.spec.authProxy.cookie.createSecret -
reporting-operator.spec.authProxy.cookie.seed
You need to set reporting-operator.spec.authProxy.enabled and reporting-operator.spec.authProxy.cookie.createSecret to true, and set reporting-operator.spec.authProxy.cookie.seed to a 32-character random string.
You can generate a 32-character random string using the following command.
$ openssl rand -base64 32 | head -c32; echo
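If openssl is not available, any cryptographically random 32-character string will do. A Python sketch of an equivalent generator (illustrative only, not part of the metering tooling):

```python
import secrets
import string

# Generate a 32-character random seed suitable for
# reporting-operator.spec.authProxy.cookie.seed, similar to
# `openssl rand -base64 32 | head -c32`.
def cookie_seed(length: int = 32) -> str:
    alphabet = string.ascii_letters + string.digits + "+/"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(cookie_seed())
```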
4.5.2.2.1. Token authentication
When the following options are set to true, token authentication is enabled for the reporting REST API:
-
reporting-operator.spec.authProxy.subjectAccessReview.enabled -
reporting-operator.spec.authProxy.delegateURLs.enabled
When authentication is enabled, the Bearer token used to query the reporting API of the user or service account must be granted access using one of the following roles:
- report-exporter
- reporting-admin
- reporting-viewer
- metering-admin
- metering-viewer
The Metering Operator is capable of creating role bindings for you, granting these permissions by specifying a list of subjects in the spec.permissions section. For an example, see the following advanced-auth.yaml configuration.
apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  permissions:
    # Anyone in the "metering-admins" group can create, update, delete, etc. any
    # metering.openshift.io resources in the namespace.
    # This also grants permissions to get query report results from the reporting REST API.
    meteringAdmins:
    - kind: Group
      name: metering-admins
    # Same as above, except read-only access, and for the metering-viewers group.
    meteringViewers:
    - kind: Group
      name: metering-viewers
    # The default serviceaccount in the namespace "my-custom-ns" can
    # create, update, delete, etc. reports.
    # This also grants permissions to query the results from the reporting REST API.
    reportingAdmins:
    - kind: ServiceAccount
      name: default
      namespace: my-custom-ns
    # Anyone in the group reporting-readers can get, list, watch reports, and
    # query report results from the reporting REST API.
    reportingViewers:
    - kind: Group
      name: reporting-readers
    # Anyone in the group cluster-admins can query report results
    # from the reporting REST API. So can the user bob-from-accounting.
    reportExporters:
    - kind: Group
      name: cluster-admins
    - kind: User
      name: bob-from-accounting
  reporting-operator:
    spec:
      authProxy:
        # htpasswd.data can contain htpasswd file contents for allowing auth
        # using a static list of usernames and their password hashes.
        #
        # username is 'testuser', password is 'password123'
        # generated htpasswdData using: `htpasswd -nb -s testuser password123`
        # htpasswd:
        #   data: |
        #     testuser:{SHA}y/2sYAj5yrQIN4TL0YdPdmGNKpc=
        #
        # change REPLACEME to the output of your htpasswd command
        htpasswd:
          data: |
            REPLACEME
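The commented htpasswd entry above was generated with `htpasswd -nb -s testuser password123`. The {SHA} scheme is simply the base64 of a SHA-1 digest of the password, so if the htpasswd tool is unavailable, the entry can be reproduced with a short Python sketch:

```python
import base64
import hashlib

# Reproduce an htpasswd {SHA} entry (the scheme used by `htpasswd -s`):
# base64 of the SHA-1 digest of the password, prefixed with the username.
def htpasswd_sha_entry(user: str, password: str) -> str:
    digest = base64.b64encode(hashlib.sha1(password.encode()).digest()).decode()
    return f"{user}:{{SHA}}{digest}"

print(htpasswd_sha_entry("testuser", "password123"))
# testuser:{SHA}y/2sYAj5yrQIN4TL0YdPdmGNKpc=
```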
Alternatively, you can use any role which has rules granting get permissions to reports/export. This means get access to the export sub-resource of the Report resources in the namespace of the Reporting Operator, for example the admin and cluster-admin roles.
By default, the Reporting Operator and Metering Operator service accounts both have these permissions, and their tokens can be used for authentication.
4.5.2.2.2. Basic authentication with a username and password
For basic authentication, you can supply a username and password in the reporting-operator.spec.authProxy.htpasswd.data field. The username and password must be in the same format as those found in an htpasswd file. When set, you can use HTTP basic authentication with credentials that have a corresponding entry in the htpasswdData contents.
4.6. Configure AWS billing correlation
Metering can correlate cluster usage information with AWS detailed billing information, attaching a dollar amount to resource usage. For clusters running in EC2, you can enable this by modifying the example aws-billing.yaml file below.
apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  openshift-reporting:
    spec:
      awsBillingReportDataSource:
        enabled: true
        # Replace these with where your AWS billing reports are
        # stored in S3.
        bucket: "<your-aws-cost-report-bucket>" 1
        prefix: "<path/to/report>"
        region: "<your-buckets-region>"
  reporting-operator:
    spec:
      config:
        aws:
          secretName: "<your-aws-secret>" 2
  presto:
    spec:
      config:
        aws:
          secretName: "<your-aws-secret>" 3
  hive:
    spec:
      config:
        aws:
          secretName: "<your-aws-secret>" 4
To enable AWS billing correlation, first ensure the AWS Cost and Usage Reports are enabled. For more information, see Turning on the AWS Cost and Usage Report in the AWS documentation.
- 1
- Update the bucket, prefix, and region to the location of your AWS Detailed billing report.
- 2 3 4
- All secretName fields should be set to the name of a secret in the metering namespace containing AWS credentials in the data.aws-access-key-id and data.aws-secret-access-key fields. See the example secret file below for more details.
apiVersion: v1
kind: Secret
metadata:
  name: <your-aws-secret>
data:
  aws-access-key-id: "dGVzdAo="
  aws-secret-access-key: "c2VjcmV0Cg=="
To store data in S3, the aws-access-key-id and aws-secret-access-key credentials must have read and write access to the bucket. For an example of an IAM policy granting the required permissions, see the aws/read-write.json file below.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "1",
"Effect": "Allow",
"Action": [
"s3:AbortMultipartUpload",
"s3:DeleteObject",
"s3:GetObject",
"s3:HeadBucket",
"s3:ListBucket",
"s3:ListMultipartUploadParts",
"s3:PutObject"
],
"Resource": [
"arn:aws:s3:::operator-metering-data/*",
"arn:aws:s3:::operator-metering-data"
]
}
]
}
Enabling AWS billing correlation can be done either pre-installation or post-installation. Disabling it post-installation can cause errors in the Reporting Operator.
Chapter 5. Reports
5.1. About Reports
A Report custom resource provides a method to manage periodic Extract Transform and Load (ETL) jobs using SQL queries. Reports are composed from other metering resources, such as ReportQuery resources that provide the actual SQL query to run, and ReportDataSource resources that define the data available to the ReportQuery and Report resources.
Many use cases are addressed by the predefined ReportQuery and ReportDataSource resources that come installed with metering. Therefore, you do not need to define your own unless you have a use case that is not covered by these predefined resources.
5.1.1. Reports
The Report custom resource is used to manage the execution and status of reports. A single Report resource represents a job that manages a database table, and updates it with new information according to a schedule.

Reports with a spec.schedule field set are always running, and track what time periods they have collected data for. If the schedule is unset, the report runs once, for the time specified by reportingStart and reportingEnd. By default, reports wait for ReportDataSource resources to have fully imported any data covered in the reporting period.
5.1.1.1. Example report with a schedule
The following example Report resource contains every required field, and runs every hour, collecting the previous hour's data each time:
apiVersion: metering.openshift.io/v1
kind: Report
metadata:
  name: pod-cpu-request-hourly
spec:
  query: "pod-cpu-request"
  reportingStart: "2021-07-01T00:00:00Z"
  schedule:
    period: "hourly"
    hourly:
      minute: 0
      second: 0
5.1.1.2. Example report without a schedule (run-once)
The following example Report resource runs once, collecting data over the entire month of July:
apiVersion: metering.openshift.io/v1
kind: Report
metadata:
  name: pod-cpu-request-hourly
spec:
  query: "pod-cpu-request"
  reportingStart: "2021-07-01T00:00:00Z"
  reportingEnd: "2021-07-31T00:00:00Z"
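The reportingStart and reportingEnd values are RFC 3339 timestamps. A quick Python sketch (illustrative only) for sanity-checking a reporting period before creating the resource:

```python
from datetime import datetime

# Parse the RFC 3339 timestamps used in Report resources and confirm
# that the reporting period is non-empty.
def parse_rfc3339(ts: str) -> datetime:
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

start = parse_rfc3339("2021-07-01T00:00:00Z")
end = parse_rfc3339("2021-07-31T00:00:00Z")
assert start < end
print((end - start).days)  # 30
```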
5.1.1.3. query
The query field names the ReportQuery resource used to generate the report. The report query controls the schema of the report, as well as how the results are processed. query is a required field.
Use the following command to list available ReportQuery resources:
$ oc -n openshift-metering get reportqueries
Example output
NAME AGE
cluster-cpu-capacity 23m
cluster-cpu-capacity-raw 23m
cluster-cpu-usage 23m
cluster-cpu-usage-raw 23m
cluster-cpu-utilization 23m
cluster-memory-capacity 23m
cluster-memory-capacity-raw 23m
cluster-memory-usage 23m
cluster-memory-usage-raw 23m
cluster-memory-utilization 23m
cluster-persistentvolumeclaim-request 23m
namespace-cpu-request 23m
namespace-cpu-usage 23m
namespace-cpu-utilization 23m
namespace-memory-request 23m
namespace-memory-usage 23m
namespace-memory-utilization 23m
namespace-persistentvolumeclaim-request 23m
namespace-persistentvolumeclaim-usage 23m
node-cpu-allocatable 23m
node-cpu-allocatable-raw 23m
node-cpu-capacity 23m
node-cpu-capacity-raw 23m
node-cpu-utilization 23m
node-memory-allocatable 23m
node-memory-allocatable-raw 23m
node-memory-capacity 23m
node-memory-capacity-raw 23m
node-memory-utilization 23m
persistentvolumeclaim-capacity 23m
persistentvolumeclaim-capacity-raw 23m
persistentvolumeclaim-phase-raw 23m
persistentvolumeclaim-request 23m
persistentvolumeclaim-request-raw 23m
persistentvolumeclaim-usage 23m
persistentvolumeclaim-usage-raw 23m
persistentvolumeclaim-usage-with-phase-raw 23m
pod-cpu-request 23m
pod-cpu-request-raw 23m
pod-cpu-usage 23m
pod-cpu-usage-raw 23m
pod-memory-request 23m
pod-memory-request-raw 23m
pod-memory-usage 23m
pod-memory-usage-raw 23m
Report queries with the -raw suffix are used by other ReportQuery resources to build more complex queries, and should not be used directly for reports.

namespace- prefixed queries aggregate pod CPU and memory requests by namespace, providing a list of namespaces and their overall usage based on resource requests.

pod- prefixed queries are similar to namespace- prefixed queries, but aggregate information by pod rather than namespace. These queries include the pod's namespace and node.

node- prefixed queries return information about each node's total available resources.

aws- prefixed queries are specific to AWS. Queries suffixed with -aws return the same data as queries of the same name without the suffix, and correlate usage with the EC2 billing data.
The aws-ec2-billing-data report query is used by other queries, and should not be used directly. The aws-ec2-cluster-cost report query provides a total cost based on the nodes included in the cluster, and the sum of their costs for the time period being reported on.
Use the following command to get the ReportQuery resource as YAML, and check its spec.columns field:
$ oc -n openshift-metering get reportqueries namespace-memory-request -o yaml
Example output
apiVersion: metering.openshift.io/v1
kind: ReportQuery
metadata:
  name: namespace-memory-request
  labels:
    operator-metering: "true"
spec:
  columns:
  - name: period_start
    type: timestamp
    unit: date
  - name: period_end
    type: timestamp
    unit: date
  - name: namespace
    type: varchar
    unit: kubernetes_namespace
  - name: pod_request_memory_byte_seconds
    type: double
    unit: byte_seconds
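Columns such as pod_request_memory_byte_seconds report usage in byte-seconds, which is rarely the unit you want to present. A small Python sketch (illustrative only) converting byte-seconds to the more readable gibibyte-hours:

```python
# Convert a byte_seconds value (as reported in metering columns) to
# gibibyte-hours: divide by bytes-per-GiB, then by seconds-per-hour.
def byte_seconds_to_gib_hours(value: float) -> float:
    return value / (1024 ** 3) / 3600

# One GiB held for one hour:
print(byte_seconds_to_gib_hours(1073741824 * 3600))  # 1.0
```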
5.1.1.4. schedule
The spec.schedule section configures when the report runs. The main field in the schedule section is period; depending on the value of period, the fields hourly, daily, weekly, and monthly allow you to fine-tune when the report runs.

For example, if period is set to weekly, you can add a weekly field to the spec.schedule section. The following example runs once a week on Wednesday, at 1 PM (hour 13 in the day):
...
schedule:
  period: "weekly"
  weekly:
    dayOfWeek: "wednesday"
    hour: 13
...
5.1.1.4.1. period
Valid values of schedule.period are listed below, along with the options available to set for a given period:

- hourly
  - minute
  - second
- daily
  - hour
  - minute
  - second
- weekly
  - dayOfWeek
  - hour
  - minute
  - second
- monthly
  - dayOfMonth
  - hour
  - minute
  - second
- cron
  - expression
Generally, the hour, minute, and second fields specify when in the day the report should run, and dayOfWeek or dayOfMonth determines what day of the week, or day of the month, the report runs on, for weekly or monthly report periods.
For each of these fields, there is a range of valid values:
- hour is an integer value between 0-23.
- minute is an integer value between 0-59.
- second is an integer value between 0-59.
- dayOfWeek is a string value that expects the day of the week (spelled out).
- dayOfMonth is an integer value between 1-31.
For cron periods, normal cron expressions are valid:
- expression: "*/5 * * * *"
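The field ranges above can be checked mechanically before a report is created. A minimal Python sketch of a validator for the documented ranges (illustrative only, not part of metering itself):

```python
# Validate schedule fields against the documented ranges:
# hour 0-23, minute 0-59, second 0-59, dayOfMonth 1-31.
RANGES = {
    "hour": range(0, 24),
    "minute": range(0, 60),
    "second": range(0, 60),
    "dayOfMonth": range(1, 32),
}

def validate_schedule(**fields: int) -> bool:
    return all(v in RANGES[k] for k, v in fields.items() if k in RANGES)

print(validate_schedule(hour=13, minute=0, second=0))  # True
print(validate_schedule(hour=24))                      # False
```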
5.1.1.5. reportingStart
To support running a report against existing data, you can set the spec.reportingStart field to an RFC 3339 timestamp to tell the report to run according to its schedule starting from reportingStart, rather than from the current time.
Setting the spec.reportingStart field to a time in the past results in the Reporting Operator running many queries in succession, one for each period in the schedule between the reportingStart time and the current time. This could be thousands of queries if the period is less than daily and reportingStart is more than a few months back. If reportingStart is left unset, the report runs at the next full reportingPeriod after the time the report is created.
As an example of how to use this field, if you had data already collected dating back to January 1st, 2021, that you want to include in your Report resource, you can create a report like the following:
apiVersion: metering.openshift.io/v1
kind: Report
metadata:
  name: pod-cpu-request-hourly
spec:
  query: "pod-cpu-request"
  schedule:
    period: "hourly"
  reportingStart: "2021-01-01T00:00:00Z"
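Setting reportingStart far in the past can generate a large number of backfill queries, one per elapsed period. A Python sketch (the dates are illustrative) estimating that count:

```python
from datetime import datetime, timedelta

# Estimate how many backfill queries a schedule produces when
# reportingStart is in the past: one query per elapsed period.
def backfill_periods(reporting_start: str, now: str, period: timedelta) -> int:
    start = datetime.fromisoformat(reporting_start.replace("Z", "+00:00"))
    end = datetime.fromisoformat(now.replace("Z", "+00:00"))
    return max(0, int((end - start) / period))

# Six months of hourly periods (181 days * 24 hours):
print(backfill_periods("2021-01-01T00:00:00Z", "2021-07-01T00:00:00Z", timedelta(hours=1)))  # 4344
```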
5.1.1.6. reportingEnd
To configure a report to only run until a specified time, you can set the spec.reportingEnd field to an RFC 3339 timestamp. The value of this field causes the report to stop running on its schedule after it has finished generating reporting data for the period covered from its start time until reportingEnd.

Because a schedule will most likely not align with reportingEnd, the last period in the schedule is shortened to end at the specified reportingEnd time. If left unset, the report runs forever, or until a reportingEnd is set on the report.
For example, if you want to create a report that runs once a week for the month of July:
apiVersion: metering.openshift.io/v1
kind: Report
metadata:
  name: pod-cpu-request-hourly
spec:
  query: "pod-cpu-request"
  schedule:
    period: "weekly"
  reportingStart: "2021-07-01T00:00:00Z"
  reportingEnd: "2021-07-31T00:00:00Z"
5.1.1.7. expiration
Add the expiration field to set a retention period on a scheduled metering report. You can avoid manually removing the report by setting the expiration duration value. The retention period is equal to the Report resource creationDate plus the expiration duration.
Setting the expiration retention period causes the report to be removed from the cluster by the report-operator process at the end of the retention period, if no other reports or report queries depend on the expiring report.
For example, the following scheduled report is deleted 30 minutes after the creationDate of the report:
apiVersion: metering.openshift.io/v1
kind: Report
metadata:
  name: pod-cpu-request-hourly
spec:
  query: "pod-cpu-request"
  schedule:
    period: "weekly"
  reportingStart: "2021-07-01T00:00:00Z"
  expiration: "30m" 1
- 1
- Valid time units for the expiration duration are ns, us (or µs), ms, s, m, and h.
The expiration retention period for a Report resource is not precise, and works on the order of minutes, not nanoseconds.
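The expiration value uses Go-style duration syntax. A Python sketch that parses the documented units into seconds (an assumption: simple concatenated durations such as 1h30m, without sign prefixes):

```python
import re

# Parse Go-style durations using the units valid for the expiration
# field: ns, us (or µs), ms, s, m, h.
UNITS = {"ns": 1e-9, "us": 1e-6, "µs": 1e-6, "ms": 1e-3, "s": 1.0, "m": 60.0, "h": 3600.0}

def duration_seconds(value: str) -> float:
    total = 0.0
    for num, unit in re.findall(r"(\d+(?:\.\d+)?)(ns|us|µs|ms|s|m|h)", value):
        total += float(num) * UNITS[unit]
    return total

print(duration_seconds("30m"))    # 1800.0
print(duration_seconds("1h30m"))  # 5400.0
```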
5.1.1.8. runImmediately
When runImmediately is set to true, the report runs immediately. This behavior ensures that the report is immediately processed and queued without requiring additional scheduling parameters.

When runImmediately is set to true, you must set a reportingEnd and reportingStart value.
5.1.1.9. inputs
The spec.inputs field of a Report resource is a list of name-value pairs. Each named input must match one of the inputs defined in the spec.inputs field of the ReportQuery resource that the report uses. For example:
spec:
  inputs:
  - name: "NamespaceCPUUsageReportName"
    value: "namespace-cpu-usage-hourly"
5.1.1.10. Roll-up reports
Report data is stored in the database much like metrics themselves, and therefore, can be used in aggregated or roll-up reports. A simple use case for a roll-up report is to spread the time required to produce a report over a longer period of time. This is instead of requiring a monthly report to query and add all data over an entire month. For example, the task can be split into daily reports that each run over 1/30 of the data.
A custom roll-up report requires a custom report query. The ReportQuery template processor provides a reportTableName function that can get the necessary table name from a Report resource's metadata.name.
Below is a snippet taken from a built-in query:
pod-cpu.yaml
spec:
...
  inputs:
  - name: ReportingStart
    type: time
  - name: ReportingEnd
    type: time
  - name: NamespaceCPUUsageReportName
    type: Report
  - name: PodCpuUsageRawDataSourceName
    type: ReportDataSource
    default: pod-cpu-usage-raw
...
  query: |
...
    {|- if .Report.Inputs.NamespaceCPUUsageReportName |}
      namespace,
      sum(pod_usage_cpu_core_seconds) as pod_usage_cpu_core_seconds
    FROM {| .Report.Inputs.NamespaceCPUUsageReportName | reportTableName |}
...
Example aggregated-report.yaml roll-up report
spec:
  query: "namespace-cpu-usage"
  inputs:
  - name: "NamespaceCPUUsageReportName"
    value: "namespace-cpu-usage-hourly"
5.1.1.10.1. Report status
The execution of a scheduled report can be tracked using its status field. Any errors occurring during the preparation of a report will be recorded here.
The status field of a scheduled Report resource contains the following fields:
- conditions: A list of conditions, each of which has a type, status, reason, and message field. Possible values of a condition's type field are Running and Failure, indicating the current state of the scheduled report. The reason indicates why its condition is in its current state, with the status being either true, false, or unknown. The message provides a human-readable indication of why the condition is in its current state. For detailed information on the reason values, see pkg/apis/metering/v1/util/report_util.go.
- lastReportTime: Indicates the time metering has collected data up to.
5.2. Storage locations
A StorageLocation custom resource configures where data is stored by the Reporting Operator. This includes the data collected from Prometheus, and the results produced by generating a Report custom resource.

You only need to configure a StorageLocation custom resource if you want to store data in multiple locations, or if you wish to access a database in Hive and Presto that was not created by metering.
5.2.1. Storage location examples
The following example shows the built-in local storage option, and is configured to use Hive. By default, data is stored wherever Hive is configured to use storage, such as HDFS, S3, or a ReadWriteMany persistent volume claim (PVC).
Local storage example
apiVersion: metering.openshift.io/v1
kind: StorageLocation
metadata:
  name: hive
  labels:
    operator-metering: "true"
spec:
  hive: 1
    databaseName: metering 2
    unmanagedDatabase: false 3
- 1
- If the hive section is present, then the StorageLocation resource will be configured to store data in Presto by creating the table using the Hive server. Only databaseName and unmanagedDatabase are required fields.
- 2
- The name of the database within Hive.
- 3
- If true, the StorageLocation resource will not be actively managed, and the databaseName is expected to already exist in Hive. If false, the Reporting Operator will create the database in Hive.
The following example uses an AWS S3 bucket for storage. The prefix is appended to the bucket name when constructing the path to use.
Remote storage example
apiVersion: metering.openshift.io/v1
kind: StorageLocation
metadata:
  name: example-s3-storage
  labels:
    operator-metering: "true"
spec:
  hive:
    databaseName: example_s3_storage
    unmanagedDatabase: false
    location: "s3a://bucket-name/path/within/bucket" 1
- 1
- Optional: The filesystem URL for Presto and Hive to use for the database. This can be an hdfs:// or s3a:// filesystem URL.
There are additional optional fields that can be specified in the hive section:

- defaultTableProperties: Contains configuration options for creating tables using Hive.
- fileFormat: The file format used for storing files in the filesystem. See the Hive Documentation on File Storage Format for a list of options and more details.
- rowFormat: Controls the Hive row format. This controls how Hive serializes and deserializes rows. See the Hive Documentation on Row Formats and SerDe for more details.
5.2.2. Default storage location
If the storagelocation.metering.openshift.io/is-default annotation exists and is set to true on a StorageLocation resource, that resource becomes the default storage resource. Any components with a storage configuration option where a storage location is not specified will use the default storage resource.
Default storage example
apiVersion: metering.openshift.io/v1
kind: StorageLocation
metadata:
  name: example-s3-storage
  labels:
    operator-metering: "true"
  annotations:
    storagelocation.metering.openshift.io/is-default: "true"
spec:
  hive:
    databaseName: example_s3_storage
    unmanagedDatabase: false
    location: "s3a://bucket-name/path/within/bucket"
Chapter 6. Using Metering
6.1. Prerequisites
- Install Metering
- Review the details about the available options that can be configured for a report and how they function.
6.2. Writing Reports
Writing a report is the way to process and analyze data using metering.
To write a report, you must define a Report resource in the openshift-metering namespace.
Prerequisites
- Metering is installed.
Procedure
Change to the openshift-metering project:

$ oc project openshift-metering

Create a Report resource as a YAML file:

Create a YAML file with the following content:

apiVersion: metering.openshift.io/v1
kind: Report
metadata:
  name: namespace-cpu-request-2020 1
  namespace: openshift-metering
spec:
  reportingStart: '2020-01-01T00:00:00Z'
  reportingEnd: '2020-12-30T23:59:59Z'
  query: namespace-cpu-request 2
  runImmediately: true 3
- The
queryspecifies theReportQueryresources used to generate the report. Change this based on what you want to report on. For a list of options, runoc get reportqueries | grep -v raw. - 1
- Use a descriptive name about what the report does for
metadata.name. A good name describes the query, and the schedule or period you used. - 3
- Set
runImmediatelytotruefor it to run with whatever data is available, or set it tofalseif you want it to wait forreportingEndto pass.
Run the following command to create the Report resource:

$ oc create -f <file-name>.yaml

Example output
report.metering.openshift.io/namespace-cpu-request-2020 created
You can list reports and their Running status with the following command:

$ oc get reports

Example output
NAME                         QUERY                   SCHEDULE   RUNNING    FAILED   LAST REPORT TIME       AGE
namespace-cpu-request-2020   namespace-cpu-request              Finished            2020-12-30T23:59:59Z   26s
6.3. Viewing report results
Viewing a report’s results involves querying the reporting API route and authenticating to the API using your OpenShift Container Platform credentials. Reports can be retrieved in JSON, CSV, or tabular formats.
Prerequisites
- Metering is installed.
- To access report results, you must either be a cluster administrator, or you need to be granted access using the report-exporter role in the openshift-metering namespace.
Procedure
Change to the openshift-metering project:

$ oc project openshift-metering

Query the reporting API for results:
Create a variable for the metering reporting-api route, then get the route:

$ meteringRoute="$(oc get routes metering -o jsonpath='{.spec.host}')"

$ echo "$meteringRoute"

Get the token of your current user to be used in the request:
$ token="$(oc whoami -t)"

Set reportName to the name of the report you created:

$ reportName=namespace-cpu-request-2020

Set reportFormat to one of csv, json, or tabular to specify the output format of the API response:

$ reportFormat=csv

To get the results, use curl to make a request to the reporting API for your report:

$ curl --insecure -H "Authorization: Bearer ${token}" "https://${meteringRoute}/api/v1/reports/get?name=${reportName}&namespace=openshift-metering&format=$reportFormat"

Example output with reportName=namespace-cpu-request-2020 and reportFormat=csv

period_start,period_end,namespace,pod_request_cpu_core_seconds
2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-apiserver,11745.000000
2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-apiserver-operator,261.000000
2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-authentication,522.000000
2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-authentication-operator,261.000000
2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-cloud-credential-operator,261.000000
2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-cluster-machine-approver,261.000000
2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-cluster-node-tuning-operator,3385.800000
2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-cluster-samples-operator,261.000000
2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-cluster-version,522.000000
2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-console,522.000000
2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-console-operator,261.000000
2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-controller-manager,7830.000000
2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-controller-manager-operator,261.000000
2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-dns,34372.800000
2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-dns-operator,261.000000
2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-etcd,23490.000000
2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-image-registry,5993.400000
2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-ingress,5220.000000
2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-ingress-operator,261.000000
2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-kube-apiserver,12528.000000
2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-kube-apiserver-operator,261.000000
2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-kube-controller-manager,8613.000000
2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-kube-controller-manager-operator,261.000000
2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-machine-api,1305.000000
2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-machine-config-operator,9637.800000
2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-metering,19575.000000
2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-monitoring,6256.800000
2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-network-operator,261.000000
2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-sdn,94503.000000
2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-service-ca,783.000000
2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-service-ca-operator,261.000000
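When post-processing report results outside the cluster, the CSV format above is easy to consume with standard tooling. A minimal sketch in Python, assuming the column layout shown above (the sample rows and the top_namespaces helper are illustrative, not part of metering):

```python
import csv
import io

# Sample rows in the same layout as the metering CSV output above.
REPORT_CSV = """\
period_start,period_end,namespace,pod_request_cpu_core_seconds
2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-sdn,94503.000000
2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-dns,34372.800000
2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-etcd,23490.000000
"""

def top_namespaces(report_csv, n=3):
    """Return the n namespaces with the highest requested CPU core-seconds."""
    reader = csv.DictReader(io.StringIO(report_csv))
    rows = [(r["namespace"], float(r["pod_request_cpu_core_seconds"]))
            for r in reader]
    return sorted(rows, key=lambda item: item[1], reverse=True)[:n]

print(top_namespaces(REPORT_CSV, 2))
# [('openshift-sdn', 94503.0), ('openshift-dns', 34372.8)]
```

The same approach works on the full output saved from the curl request, for example to rank namespaces by requested CPU.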
Chapter 7. Examples of using metering
Metering is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.
For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.
Use the following example reports to get started measuring capacity, usage, and utilization in your cluster. These examples showcase the various types of reports metering offers, along with a selection of the predefined queries.
7.1. Prerequisites
- Install metering
- Review the details about writing and viewing reports.
7.2. Measure cluster capacity hourly and daily
The following report demonstrates how to measure cluster capacity both hourly and daily. The daily report works by aggregating the hourly report’s results.
The following report measures cluster CPU capacity every hour.
Hourly CPU capacity by cluster example
apiVersion: metering.openshift.io/v1
kind: Report
metadata:
  name: cluster-cpu-capacity-hourly
spec:
  query: "cluster-cpu-capacity"
  schedule:
    period: "hourly"
- 1
- You could change this period to daily to get a daily report, but with larger data sets it is more efficient to use an hourly report, then aggregate your hourly data into a daily report.
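Metering performs this aggregation through the report query and inputs, but the underlying idea can be sketched in Python (the hourly sample values here are hypothetical):

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical hourly (timestamp, cpu_core_hours) samples, as an hourly
# capacity report might produce.
hourly = [
    ("2020-01-01T00:00:00Z", 16.0),
    ("2020-01-01T01:00:00Z", 16.0),
    ("2020-01-02T00:00:00Z", 24.0),
]

def aggregate_daily(rows):
    """Sum hourly values into per-day totals, keyed by ISO date."""
    totals = defaultdict(float)
    for ts, value in rows:
        day = datetime.fromisoformat(ts.replace("Z", "+00:00")).date().isoformat()
        totals[day] += value
    return dict(totals)

print(aggregate_daily(hourly))
# {'2020-01-01': 32.0, '2020-01-02': 24.0}
```

Aggregating a year of hourly rows this way touches far fewer rows than re-querying raw metric data for each daily period, which is why the hourly-then-daily pattern scales better.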
The following report aggregates the hourly data into a daily report.
Daily CPU capacity by cluster example
apiVersion: metering.openshift.io/v1
kind: Report
metadata:
  name: cluster-cpu-capacity-daily
spec:
  query: "cluster-cpu-capacity"
  inputs:
  - name: ClusterCpuCapacityReportName
    value: cluster-cpu-capacity-hourly
  schedule:
    period: "daily"
- 1
- To stay organized, remember to change the name of your report if you change any of the other values.
- 2
- You can also measure cluster-memory-capacity. Remember to update the query in the associated hourly report as well.
- 3
- The inputs section configures this report to aggregate the hourly report. Specifically, value: cluster-cpu-capacity-hourly is the name of the hourly report that gets aggregated.
7.3. Measure cluster usage with a one-time report
The following report measures cluster usage from a specific starting date forward. The report only runs once, after you save it and apply it.
CPU usage by cluster example
apiVersion: metering.openshift.io/v1
kind: Report
metadata:
  name: cluster-cpu-usage-2020
spec:
  reportingStart: '2020-01-01T00:00:00Z'
  reportingEnd: '2020-12-30T23:59:59Z'
  query: cluster-cpu-usage
  runImmediately: true
- 1
- To stay organized, remember to change the name of your report if you change any of the other values.
- 2
- Configures the report to use data from the reportingStart timestamp until the reportingEnd timestamp.
- 3
- Adjust your query here. You can also measure cluster usage with the cluster-memory-usage query.
- 4
- Configures the report to run immediately after saving it and applying it.
7.4. Measure cluster utilization using cron expressions
You can also use cron expressions when configuring the period of your reports. The following report measures cluster CPU utilization by running at midnight on every weekday, Monday through Friday.
Weekday CPU utilization by cluster example
apiVersion: metering.openshift.io/v1
kind: Report
metadata:
  name: cluster-cpu-utilization-weekdays
spec:
  query: "cluster-cpu-utilization"
  schedule:
    period: "cron"
    expression: 0 0 * * 1-5
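To illustrate how a five-field cron expression such as 0 0 * * 1-5 is read, here is a simplified matcher in Python. It is illustrative only, not the scheduler metering actually uses, and it handles only *, single values, and ranges (not step values like */10):

```python
from datetime import datetime

def _field_matches(field, value):
    """Check one cron field (supports '*', single values, and ranges like '1-5')."""
    if field == "*":
        return True
    for part in field.split(","):
        if "-" in part:
            lo, hi = (int(x) for x in part.split("-"))
            if lo <= value <= hi:
                return True
        elif int(part) == value:
            return True
    return False

def matches_cron(expression, when):
    """Return True if 'when' matches a five-field cron expression
    (minute, hour, day-of-month, month, day-of-week; 0=Sun ... 6=Sat)."""
    minute, hour, dom, month, dow = expression.split()
    return (_field_matches(minute, when.minute)
            and _field_matches(hour, when.hour)
            and _field_matches(dom, when.day)
            and _field_matches(month, when.month)
            and _field_matches(dow, when.isoweekday() % 7))

# 0 0 * * 1-5 fires at 00:00 on weekdays; 2020-01-06 was a Monday.
print(matches_cron("0 0 * * 1-5", datetime(2020, 1, 6, 0, 0)))  # True
```

Reading the fields left to right: minute 0, hour 0, any day of month, any month, days of week 1 through 5 (Monday through Friday).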
Chapter 8. Troubleshooting and debugging metering
Metering is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.
For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.
Use the following sections to help troubleshoot and debug specific issues with metering.
In addition to the information in this section, be sure to review the following topics:
8.1. Troubleshooting metering
A common issue with metering is pods failing to start. Pods might fail to start due to a lack of resources or if they have a dependency on a resource that does not exist, such as a StorageClass or Secret resource.
8.1.1. Not enough compute resources
A common issue when installing or running metering is a lack of compute resources. As the cluster grows and more reports are created, the Reporting Operator pod requires more memory. If memory usage reaches the pod limit, the cluster considers the pod out of memory (OOM) and terminates it with an OOMKilled status.
The Metering Operator does not autoscale the Reporting Operator based on the load in the cluster. Therefore, CPU usage for the Reporting Operator pod does not increase as the cluster grows.
To determine if the issue is with resources or scheduling, follow the troubleshooting instructions included in the Kubernetes document Managing Compute Resources for Containers.
To troubleshoot issues due to a lack of compute resources, check the following within the openshift-metering namespace.
Prerequisites
You are currently in the openshift-metering namespace. Change to the openshift-metering namespace by running:

$ oc project openshift-metering
Procedure
Check for metering Report resources that fail to complete and show a status of ReportingPeriodUnmetDependencies:

$ oc get reports

Example output
NAME                                  QUERY                          SCHEDULE   RUNNING                            FAILED   LAST REPORT TIME       AGE
namespace-cpu-utilization-adhoc-10    namespace-cpu-utilization                 Finished                                    2020-10-31T00:00:00Z   2m38s
namespace-cpu-utilization-adhoc-11    namespace-cpu-utilization                 ReportingPeriodUnmetDependencies                                   2m23s
namespace-memory-utilization-202010   namespace-memory-utilization              ReportingPeriodUnmetDependencies                                   26s
namespace-memory-utilization-202011   namespace-memory-utilization              ReportingPeriodUnmetDependencies                                   14s

Check the ReportDataSource resources where the NEWEST METRIC is less than the report end date:

$ oc get reportdatasource

Example output
NAME                            EARLIEST METRIC        NEWEST METRIC          IMPORT START           IMPORT END             LAST IMPORT TIME       AGE
...
node-allocatable-cpu-cores      2020-04-23T09:14:00Z   2020-08-31T10:07:00Z   2020-04-23T09:14:00Z   2020-10-15T17:13:00Z   2020-12-09T12:45:10Z   230d
node-allocatable-memory-bytes   2020-04-23T09:14:00Z   2020-08-30T05:19:00Z   2020-04-23T09:14:00Z   2020-10-14T08:01:00Z   2020-12-09T12:45:12Z   230d
...
pod-usage-memory-bytes          2020-04-23T09:14:00Z   2020-08-24T20:25:00Z   2020-04-23T09:14:00Z   2020-10-09T23:31:00Z   2020-12-09T12:45:12Z   230d

Check the health of the reporting-operator Pod resource for a high number of pod restarts:

$ oc get pods -l app=reporting-operator

Example output
NAME                                  READY   STATUS    RESTARTS   AGE
reporting-operator-84f7c9b7b6-fr697   2/2     Running   542        8d
- 1
- The Reporting Operator pod is restarting at a high rate.
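A ReportingPeriodUnmetDependencies status in the report check above typically persists while any ReportDataSource that the report query depends on has a NEWEST METRIC earlier than the report's end date. A minimal sketch of that comparison, assuming RFC 3339 timestamps like those in the reportdatasource output:

```python
from datetime import datetime

def parse_ts(ts):
    """Parse an RFC 3339 timestamp such as 2020-08-31T10:07:00Z."""
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

def has_unmet_dependency(newest_metric, reporting_end):
    """A report waits in ReportingPeriodUnmetDependencies while a data
    source's NEWEST METRIC is earlier than the report's end date."""
    return parse_ts(newest_metric) < parse_ts(reporting_end)

# node-allocatable-cpu-cores from the output above vs. a 2020-10-31 report end:
print(has_unmet_dependency("2020-08-31T10:07:00Z", "2020-10-31T00:00:00Z"))  # True
```

In the example output above, the data sources stopped importing new metrics months before the reports' end dates, which is why those reports never finish.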
Check the reporting-operator Pod resource for an OOMKilled termination:

$ oc describe pod/reporting-operator-84f7c9b7b6-fr697

Example output
Name:         reporting-operator-84f7c9b7b6-fr697
Namespace:    openshift-metering
Priority:     0
Node:         ip-10-xx-xx-xx.ap-southeast-1.compute.internal/10.xx.xx.xx
...
Ports:        8080/TCP, 6060/TCP, 8082/TCP
Host Ports:   0/TCP, 0/TCP, 0/TCP
State:        Running
  Started:    Thu, 03 Dec 2020 20:59:45 +1000
Last State:   Terminated
  Reason:     OOMKilled
  Exit Code:  137
  Started:    Thu, 03 Dec 2020 20:38:05 +1000
  Finished:   Thu, 03 Dec 2020 20:59:43 +1000
- 1
- The Reporting Operator pod was terminated due to OOM kill.
Increasing the reporting-operator pod memory limit
If you are experiencing an increase in pod restarts and OOM kill events, you can check the current memory limit set for the Reporting Operator pod. Increasing the memory limit allows the Reporting Operator pod to update the report data sources. If necessary, increase the memory limit in your MeteringConfig resource.
Procedure
Check the current memory limits of the reporting-operator Pod resource:

$ oc describe pod reporting-operator-67d6f57c56-79mrt

Example output
Name:           reporting-operator-67d6f57c56-79mrt
Namespace:      openshift-metering
Priority:       0
...
Ports:          8080/TCP, 6060/TCP, 8082/TCP
Host Ports:     0/TCP, 0/TCP, 0/TCP
State:          Running
  Started:      Tue, 08 Dec 2020 14:26:21 +1000
Ready:          True
Restart Count:  0
Limits:
  cpu:     1
  memory:  500Mi
Requests:
  cpu:     500m
  memory:  250Mi
Environment:
...
- 1
- The current memory limit for the Reporting Operator pod.
Edit the MeteringConfig resource to update the memory limit:

$ oc edit meteringconfig/operator-metering

Example MeteringConfig resource

kind: MeteringConfig
metadata:
  name: operator-metering
  namespace: openshift-metering
spec:
  reporting-operator:
    spec:
      resources:
        limits:
          cpu: 1
          memory: 750Mi
        requests:
          cpu: 500m
          memory: 500Mi
...
- 1
- Add or increase memory limits within the resources field of the MeteringConfig resource.
Note
If there continue to be numerous OOM killed events after memory limits are increased, this might indicate that a different issue is causing the reports to be in a pending state.
8.1.2. StorageClass resource not configured
Metering requires that a default StorageClass resource be configured for dynamic provisioning.
See the documentation on configuring metering for information on how to check if there are any StorageClass resources configured for your cluster, how to set a default, and how to configure metering to use a storage class other than the default.
8.1.3. Secret not configured correctly
A common issue with metering is providing the incorrect secret when configuring your persistent storage. Be sure to review the example configuration files and create your secret according to the guidelines for your storage provider.
8.2. Debugging metering
Debugging metering is much easier when you interact directly with the various components. The sections below detail how you can connect and query Presto and Hive as well as view the dashboards of the Presto and HDFS components.
All of the commands in this section assume you have installed metering through OperatorHub in the openshift-metering namespace.
8.2.1. Get reporting operator logs
Use the following command to follow the logs of the reporting-operator pod:
$ oc -n openshift-metering logs -f "$(oc -n openshift-metering get pods -l app=reporting-operator -o name | cut -c 5-)" -c reporting-operator
8.2.2. Query Presto using presto-cli
The following command opens an interactive presto-cli session where you can query Presto. This session runs in the same container as Presto and launches an additional Java instance, which can run up against the memory limits for the pod. If this occurs, you should increase the memory request and limits of the Presto pod.
By default, Presto is configured to communicate using TLS. You must use the following command to run Presto queries:
$ oc -n openshift-metering exec -it "$(oc -n openshift-metering get pods -l app=presto,presto=coordinator -o name | cut -d/ -f2)" \
-- /usr/local/bin/presto-cli --server https://presto:8080 --catalog hive --schema default --user root --keystore-path /opt/presto/tls/keystore.pem
Once you run this command, a prompt appears where you can run queries. Use the show tables from metering; query to view the list of tables:
$ presto:default> show tables from metering;
Example output
Table
datasource_your_namespace_cluster_cpu_capacity_raw
datasource_your_namespace_cluster_cpu_usage_raw
datasource_your_namespace_cluster_memory_capacity_raw
datasource_your_namespace_cluster_memory_usage_raw
datasource_your_namespace_node_allocatable_cpu_cores
datasource_your_namespace_node_allocatable_memory_bytes
datasource_your_namespace_node_capacity_cpu_cores
datasource_your_namespace_node_capacity_memory_bytes
datasource_your_namespace_node_cpu_allocatable_raw
datasource_your_namespace_node_cpu_capacity_raw
datasource_your_namespace_node_memory_allocatable_raw
datasource_your_namespace_node_memory_capacity_raw
datasource_your_namespace_persistentvolumeclaim_capacity_bytes
datasource_your_namespace_persistentvolumeclaim_capacity_raw
datasource_your_namespace_persistentvolumeclaim_phase
datasource_your_namespace_persistentvolumeclaim_phase_raw
datasource_your_namespace_persistentvolumeclaim_request_bytes
datasource_your_namespace_persistentvolumeclaim_request_raw
datasource_your_namespace_persistentvolumeclaim_usage_bytes
datasource_your_namespace_persistentvolumeclaim_usage_raw
datasource_your_namespace_persistentvolumeclaim_usage_with_phase_raw
datasource_your_namespace_pod_cpu_request_raw
datasource_your_namespace_pod_cpu_usage_raw
datasource_your_namespace_pod_limit_cpu_cores
datasource_your_namespace_pod_limit_memory_bytes
datasource_your_namespace_pod_memory_request_raw
datasource_your_namespace_pod_memory_usage_raw
datasource_your_namespace_pod_persistentvolumeclaim_request_info
datasource_your_namespace_pod_request_cpu_cores
datasource_your_namespace_pod_request_memory_bytes
datasource_your_namespace_pod_usage_cpu_cores
datasource_your_namespace_pod_usage_memory_bytes
(32 rows)
Query 20210503_175727_00107_3venm, FINISHED, 1 node
Splits: 19 total, 19 done (100.00%)
0:02 [32 rows, 2.23KB] [19 rows/s, 1.37KB/s]
presto:default>
8.2.3. Query Hive using beeline
The following command opens an interactive beeline session where you can query Hive. This session runs in the same container as Hive and launches an additional Java instance, which can run up against the memory limits for the pod. If this occurs, you should increase the memory request and limits of the Hive pod.
$ oc -n openshift-metering exec -it $(oc -n openshift-metering get pods -l app=hive,hive=server -o name | cut -d/ -f2) \
-c hiveserver2 -- beeline -u 'jdbc:hive2://127.0.0.1:10000/default;auth=noSasl'
Once you run this command, a prompt appears where you can run queries. Use the show tables; query to view the list of tables:
$ 0: jdbc:hive2://127.0.0.1:10000/default> show tables from metering;
Example output
+----------------------------------------------------+
| tab_name |
+----------------------------------------------------+
| datasource_your_namespace_cluster_cpu_capacity_raw |
| datasource_your_namespace_cluster_cpu_usage_raw |
| datasource_your_namespace_cluster_memory_capacity_raw |
| datasource_your_namespace_cluster_memory_usage_raw |
| datasource_your_namespace_node_allocatable_cpu_cores |
| datasource_your_namespace_node_allocatable_memory_bytes |
| datasource_your_namespace_node_capacity_cpu_cores |
| datasource_your_namespace_node_capacity_memory_bytes |
| datasource_your_namespace_node_cpu_allocatable_raw |
| datasource_your_namespace_node_cpu_capacity_raw |
| datasource_your_namespace_node_memory_allocatable_raw |
| datasource_your_namespace_node_memory_capacity_raw |
| datasource_your_namespace_persistentvolumeclaim_capacity_bytes |
| datasource_your_namespace_persistentvolumeclaim_capacity_raw |
| datasource_your_namespace_persistentvolumeclaim_phase |
| datasource_your_namespace_persistentvolumeclaim_phase_raw |
| datasource_your_namespace_persistentvolumeclaim_request_bytes |
| datasource_your_namespace_persistentvolumeclaim_request_raw |
| datasource_your_namespace_persistentvolumeclaim_usage_bytes |
| datasource_your_namespace_persistentvolumeclaim_usage_raw |
| datasource_your_namespace_persistentvolumeclaim_usage_with_phase_raw |
| datasource_your_namespace_pod_cpu_request_raw |
| datasource_your_namespace_pod_cpu_usage_raw |
| datasource_your_namespace_pod_limit_cpu_cores |
| datasource_your_namespace_pod_limit_memory_bytes |
| datasource_your_namespace_pod_memory_request_raw |
| datasource_your_namespace_pod_memory_usage_raw |
| datasource_your_namespace_pod_persistentvolumeclaim_request_info |
| datasource_your_namespace_pod_request_cpu_cores |
| datasource_your_namespace_pod_request_memory_bytes |
| datasource_your_namespace_pod_usage_cpu_cores |
| datasource_your_namespace_pod_usage_memory_bytes |
+----------------------------------------------------+
32 rows selected (13.101 seconds)
0: jdbc:hive2://127.0.0.1:10000/default>
8.2.4. Port-forward to the Hive web UI
Run the following command to port-forward to the Hive web UI:
$ oc -n openshift-metering port-forward hive-server-0 10002
You can now open http://127.0.0.1:10002 in your browser window to view the Hive web interface.
8.2.5. Port-forward to HDFS
Run the following command to port-forward to the HDFS namenode:
$ oc -n openshift-metering port-forward hdfs-namenode-0 9870
You can now open http://127.0.0.1:9870 in your browser window to view the HDFS web interface.
Run the following command to port-forward to the first HDFS datanode:
$ oc -n openshift-metering port-forward hdfs-datanode-0 9864
- 1
- To check other datanodes, replace hdfs-datanode-0 with the pod you want to view information on.
8.2.6. Metering Ansible Operator
Metering uses the Ansible Operator to watch and reconcile resources in a cluster environment. When debugging a failed metering installation, it can be helpful to view the Ansible logs or the status of your MeteringConfig custom resource.
8.2.6.1. Accessing Ansible logs
In the default installation, the Metering Operator is deployed as a pod. In this case, you can check the logs of the Ansible container within this pod:
$ oc -n openshift-metering logs $(oc -n openshift-metering get pods -l app=metering-operator -o name | cut -d/ -f2) -c ansible
Alternatively, you can view the logs of the Operator container (replace -c ansible with -c operator) for condensed output.
8.2.6.2. Checking the MeteringConfig Status
It can be helpful to view the .status field of your MeteringConfig custom resource to debug any recent failures. The following command shows status messages with type Invalid:
$ oc -n openshift-metering get meteringconfig operator-metering -o=jsonpath='{.status.conditions[?(@.type=="Invalid")].message}'
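The jsonpath filter in this command selects the message of every condition whose type is Invalid. The same selection can be done client-side; a minimal Python sketch, with a hypothetical resource dictionary shaped like oc get meteringconfig -o json output:

```python
# Hypothetical MeteringConfig status, shaped like `oc get ... -o json` output.
metering_config = {
    "status": {
        "conditions": [
            {"type": "Running", "status": "True", "message": "Ready"},
            {"type": "Invalid", "status": "True",
             "message": "spec.storage: Invalid storage configuration"},
        ]
    }
}

def condition_messages(resource, cond_type):
    """Return the messages of all status conditions matching cond_type,
    mirroring the jsonpath filter {.status.conditions[?(@.type=="Invalid")].message}."""
    conditions = resource.get("status", {}).get("conditions", [])
    return [c["message"] for c in conditions if c.get("type") == cond_type]

print(condition_messages(metering_config, "Invalid"))
# ['spec.storage: Invalid storage configuration']
```

The condition names and messages shown are illustrative; inspect your own resource for the actual values.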
8.2.6.3. Checking MeteringConfig Events
Check events that the Metering Operator is generating. This can be helpful during installation or upgrade to debug any resource failures. Sort events by the last timestamp:
$ oc -n openshift-metering get events --field-selector involvedObject.kind=MeteringConfig --sort-by='.lastTimestamp'
Example output showing the latest changes to the MeteringConfig resource
LAST SEEN TYPE REASON OBJECT MESSAGE
4m40s Normal Validating meteringconfig/operator-metering Validating the user-provided configuration
4m30s Normal Started meteringconfig/operator-metering Configuring storage for the metering-ansible-operator
4m26s Normal Started meteringconfig/operator-metering Configuring TLS for the metering-ansible-operator
3m58s Normal Started meteringconfig/operator-metering Configuring reporting for the metering-ansible-operator
3m53s Normal Reconciling meteringconfig/operator-metering Reconciling metering resources
3m47s Normal Reconciling meteringconfig/operator-metering Reconciling monitoring resources
3m41s Normal Reconciling meteringconfig/operator-metering Reconciling HDFS resources
3m23s Normal Reconciling meteringconfig/operator-metering Reconciling Hive resources
2m59s Normal Reconciling meteringconfig/operator-metering Reconciling Presto resources
2m35s Normal Reconciling meteringconfig/operator-metering Reconciling reporting-operator resources
2m14s Normal Reconciling meteringconfig/operator-metering Reconciling reporting resources
Chapter 9. Uninstalling metering
Metering is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.
For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.
You can remove metering from your OpenShift Container Platform cluster.
Metering does not manage or delete Amazon S3 bucket data. After uninstalling metering, you must manually clean up S3 buckets that were used to store metering data.
9.1. Removing the Metering Operator from your cluster
Remove the Metering Operator from your cluster by following the documentation on deleting Operators from a cluster.
Removing the Metering Operator from your cluster does not remove its custom resource definitions or managed resources. See the following sections on Uninstalling a metering namespace and Uninstalling metering custom resource definitions for steps to remove any remaining metering components.
9.2. Uninstalling a metering namespace
Uninstall your metering namespace, for example the openshift-metering namespace, by removing the MeteringConfig resource and deleting the openshift-metering namespace.
Prerequisites
- The Metering Operator is removed from your cluster.
Procedure
Remove all resources created by the Metering Operator:
$ oc --namespace openshift-metering delete meteringconfig --all

After the previous step is complete, verify that all pods in the openshift-metering namespace are deleted or are reporting a terminating state:

$ oc --namespace openshift-metering get pods

Delete the openshift-metering namespace:

$ oc delete namespace openshift-metering
9.3. Uninstalling metering custom resource definitions
The metering custom resource definitions (CRDs) remain in the cluster after the Metering Operator is uninstalled and the openshift-metering namespace is deleted.
Deleting the metering CRDs disrupts any additional metering installations in other namespaces in your cluster. Ensure that there are no other metering installations before proceeding.
Prerequisites
- The MeteringConfig custom resource in the openshift-metering namespace is deleted.
- The openshift-metering namespace is deleted.
Procedure
Delete the remaining metering CRDs:
$ oc get crd -o name | grep "metering.openshift.io" | xargs oc delete
Legal Notice
Copyright © Red Hat
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of the OpenJS Foundation.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.