Chapter 5. Reports
5.1. About Reports
Metering is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.
For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.
A Report custom resource provides a method to manage periodic Extract Transform and Load (ETL) jobs using SQL queries. Reports are composed from other metering resources, such as ReportQuery resources that provide the actual SQL query to run, and ReportDataSource resources that define the data available to the ReportQuery and Report resources.
Many use cases are addressed by the predefined ReportQuery and ReportDataSource resources that come installed with metering, so you do not need to define your own unless you have a use case that is not covered by the predefined resources.
5.1.1. Reports
The Report custom resource is used to manage the execution and status of reports. Metering produces reports derived from usage data sources, which can be used in further analysis and filtering.
Reports with a spec.schedule field set are always running, and track which time periods they have collected data for. This ensures that if metering is shut down or unavailable for an extended period of time, the report backfills the data starting where it left off. If the schedule is unset, the report runs once, for the time specified by the reportingStart and reportingEnd fields. By default, reports wait for ReportDataSource resources to have fully imported any data covered in the reporting period.
5.1.1.1. Example report with a schedule
The following example Report resource runs every hour, adding the latest hour of data each time it runs:
apiVersion: metering.openshift.io/v1
kind: Report
metadata:
  name: pod-cpu-request-hourly
spec:
  query: "pod-cpu-request"
  reportingStart: "2021-07-01T00:00:00Z"
  schedule:
    period: "hourly"
    hourly:
      minute: 0
      second: 0
5.1.1.2. Example report without a schedule (run-once)
The following example Report resource runs once, collecting data for the entire month of July:
apiVersion: metering.openshift.io/v1
kind: Report
metadata:
  name: pod-cpu-request-hourly
spec:
  query: "pod-cpu-request"
  reportingStart: "2021-07-01T00:00:00Z"
  reportingEnd: "2021-07-31T00:00:00Z"
5.1.1.3. query
The query field names the ReportQuery resource used to generate the report. The report query controls the schema of the report as well as how the results are processed. query is a required field.
Use the following command to list available ReportQuery resources:
$ oc -n openshift-metering get reportqueries
Example output
NAME AGE
cluster-cpu-capacity 23m
cluster-cpu-capacity-raw 23m
cluster-cpu-usage 23m
cluster-cpu-usage-raw 23m
cluster-cpu-utilization 23m
cluster-memory-capacity 23m
cluster-memory-capacity-raw 23m
cluster-memory-usage 23m
cluster-memory-usage-raw 23m
cluster-memory-utilization 23m
cluster-persistentvolumeclaim-request 23m
namespace-cpu-request 23m
namespace-cpu-usage 23m
namespace-cpu-utilization 23m
namespace-memory-request 23m
namespace-memory-usage 23m
namespace-memory-utilization 23m
namespace-persistentvolumeclaim-request 23m
namespace-persistentvolumeclaim-usage 23m
node-cpu-allocatable 23m
node-cpu-allocatable-raw 23m
node-cpu-capacity 23m
node-cpu-capacity-raw 23m
node-cpu-utilization 23m
node-memory-allocatable 23m
node-memory-allocatable-raw 23m
node-memory-capacity 23m
node-memory-capacity-raw 23m
node-memory-utilization 23m
persistentvolumeclaim-capacity 23m
persistentvolumeclaim-capacity-raw 23m
persistentvolumeclaim-phase-raw 23m
persistentvolumeclaim-request 23m
persistentvolumeclaim-request-raw 23m
persistentvolumeclaim-usage 23m
persistentvolumeclaim-usage-raw 23m
persistentvolumeclaim-usage-with-phase-raw 23m
pod-cpu-request 23m
pod-cpu-request-raw 23m
pod-cpu-usage 23m
pod-cpu-usage-raw 23m
pod-memory-request 23m
pod-memory-request-raw 23m
pod-memory-usage 23m
pod-memory-usage-raw 23m
Report queries with the -raw suffix are used by other ReportQuery resources to build more complex queries, and should not be used directly for reports.
namespace- prefixed queries aggregate pod CPU and memory requests by namespace, providing a list of namespaces and their overall usage based on resource requests.
pod- prefixed queries are similar to namespace- prefixed queries, but aggregate information by pods rather than namespaces. These queries include the pod's namespace and node.
node- prefixed queries return information about each node's total available resources.
aws- prefixed queries are specific to AWS. Queries suffixed with -aws return the same data as the queries of the same name without the suffix, and correlate usage with the EC2 billing data.
The aws-ec2-billing-data report is used by other queries, and should not be used as a standalone report. The aws-ec2-cluster-cost report provides a total cost based on the nodes included in the cluster, and the sums of their costs for the time period being reported on.
Use the following command to get the ReportQuery resource as YAML, and check its spec.columns field:
$ oc -n openshift-metering get reportqueries namespace-memory-request -o yaml
Example output
apiVersion: metering.openshift.io/v1
kind: ReportQuery
metadata:
  name: namespace-memory-request
  labels:
    operator-metering: "true"
spec:
  columns:
  - name: period_start
    type: timestamp
    unit: date
  - name: period_end
    type: timestamp
    unit: date
  - name: namespace
    type: varchar
    unit: kubernetes_namespace
  - name: pod_request_memory_byte_seconds
    type: double
    unit: byte_seconds
5.1.1.4. schedule
The spec.schedule configuration block defines when the report runs. The main field in the schedule section is period, and then, depending on the value of period, the fields hourly, daily, weekly, and monthly allow you to fine-tune when the report runs.
For example, if period is set to weekly, you can add a weekly field to the spec.schedule block. The following example runs once a week on Wednesday, at 1 PM (hour 13 of the day):
...
schedule:
  period: "weekly"
  weekly:
    dayOfWeek: "wednesday"
    hour: 13
...
5.1.1.4.1. period
Valid values of schedule.period are listed below, along with the options available for each period:
- hourly
  - minute
  - second
- daily
  - hour
  - minute
  - second
- weekly
  - dayOfWeek
  - hour
  - minute
  - second
- monthly
  - dayOfMonth
  - hour
  - minute
  - second
- cron
  - expression
Generally, the hour, minute, and second fields specify when in the day the report should run, and dayOfWeek and dayOfMonth specify the day of the week or the day of the month the report should run on, for weekly and monthly periods respectively.
For each of these fields, there is a range of valid values:
- hour is an integer value between 0-23.
- minute is an integer value between 0-59.
- second is an integer value between 0-59.
- dayOfWeek is a string value that expects the day of the week (spelled out).
- dayOfMonth is an integer value between 1-31.
For cron periods, normal cron expressions are valid:
expression: "*/5 * * * *"
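Putting these fields together, a monthly period might be configured as in the following sketch. The values shown here are illustrative, not taken from a shipped example:

```yaml
schedule:
  period: "monthly"
  monthly:
    dayOfMonth: 1
    hour: 0
    minute: 0
    second: 0
```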
5.1.1.5. reportingStart
To support running a report against existing data, you can set the spec.reportingStart field to a timestamp. If this field is set, the report runs according to its schedule starting from the reportingStart time rather than the current time.
Setting the spec.reportingStart field far in the past can cause the Reporting Operator to run many queries in succession: one for each schedule interval between the reportingStart time and the current time. If the period is small and reportingStart is far enough back, this can amount to thousands of queries. If reportingStart is left unset, the report runs at the next full reportingPeriod after the time the report is created.
As an example of how to use this field, if you had data already collected dating back to January 1st, 2021 that you want to include in your Report resource, you can create a report with the following values:
apiVersion: metering.openshift.io/v1
kind: Report
metadata:
  name: pod-cpu-request-hourly
spec:
  query: "pod-cpu-request"
  schedule:
    period: "hourly"
  reportingStart: "2021-01-01T00:00:00Z"
5.1.1.6. reportingEnd
To configure a report to only run until a specified time, you can set the spec.reportingEnd field to a timestamp. When reportingEnd is set, the report stops running on its schedule after it has collected data up to the reportingEnd time.
Because a schedule will most likely not align exactly with the reportingEnd, the last period in the schedule is shortened to end at the specified reportingEnd time. A scheduled report without a reportingEnd runs indefinitely.
For example, if you want to create a report that runs once a week for the month of July:
apiVersion: metering.openshift.io/v1
kind: Report
metadata:
  name: pod-cpu-request-hourly
spec:
  query: "pod-cpu-request"
  schedule:
    period: "weekly"
  reportingStart: "2021-07-01T00:00:00Z"
  reportingEnd: "2021-07-31T00:00:00Z"
5.1.1.7. expiration
Add the expiration field to set a retention period on a scheduled or run-once report. The Report resource is deleted when the expiration duration has elapsed from the report's creationDate. Setting the expiration period causes the report-operator to remove the expired Report resource and its report data once the duration elapses.
For example, the following scheduled report is deleted 30 minutes after its creationDate:
apiVersion: metering.openshift.io/v1
kind: Report
metadata:
  name: pod-cpu-request-hourly
spec:
  query: "pod-cpu-request"
  schedule:
    period: "weekly"
  reportingStart: "2021-07-01T00:00:00Z"
  expiration: "30m"
1. Valid time units for the expiration duration are ns, us (or µs), ms, s, m, and h.
Note: The expiration feature does not track dependencies on a Report resource. Other reports or queries that reference an expired Report resource can fail after it is deleted.
5.1.1.8. runImmediately
When runImmediately is set to true, the report runs immediately. This behavior ensures that the report is immediately processed and queued without requiring additional scheduling parameters.
Note: When runImmediately is set to true, you must set both a reportingStart and a reportingEnd value.
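As a sketch of how these fields combine, the following hypothetical run-once report sets runImmediately alongside the required reportingStart and reportingEnd values. The report name and time range are illustrative assumptions:

```yaml
apiVersion: metering.openshift.io/v1
kind: Report
metadata:
  name: pod-cpu-request-immediate
spec:
  query: "pod-cpu-request"
  reportingStart: "2021-07-01T00:00:00Z"
  reportingEnd: "2021-07-31T00:00:00Z"
  runImmediately: true
```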
5.1.1.9. inputs
The spec.inputs field of a Report resource is used to pass values to the inputs defined by the report's ReportQuery resource. The name of each entry in spec.inputs must match an input declared in the ReportQuery resource, and the value must be of the appropriate type for that input. For example:
spec:
  inputs:
  - name: "NamespaceCPUUsageReportName"
    value: "namespace-cpu-usage-hourly"
5.1.1.10. Roll-up reports
Report data is stored in the database much like metrics themselves, and therefore, can be used in aggregated or roll-up reports. A simple use case for a roll-up report is to spread the time required to produce a report over a longer period of time. This is instead of requiring a monthly report to query and add all data over an entire month. For example, the task can be split into daily reports that each run over 1/30 of the data.
A custom roll-up report requires a custom report query. The ReportQuery resource template processor provides a reportTableName function that can derive the necessary table name from a Report resource's metadata.name.
Below is a snippet taken from a built-in query:
pod-cpu.yaml
spec:
...
  inputs:
  - name: ReportingStart
    type: time
  - name: ReportingEnd
    type: time
  - name: NamespaceCPUUsageReportName
    type: Report
  - name: PodCpuUsageRawDataSourceName
    type: ReportDataSource
    default: pod-cpu-usage-raw
...
  query: |
  ...
    {|- if .Report.Inputs.NamespaceCPUUsageReportName |}
      namespace,
      sum(pod_usage_cpu_core_seconds) as pod_usage_cpu_core_seconds
    FROM {| .Report.Inputs.NamespaceCPUUsageReportName | reportTableName |}
  ...
Example aggregated-report.yaml roll-up report
spec:
  query: "namespace-cpu-usage"
  inputs:
  - name: "NamespaceCPUUsageReportName"
    value: "namespace-cpu-usage-hourly"
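For context, a complete roll-up Report resource built around the spec above might look like the following sketch. The metadata.name and the weekly schedule are illustrative assumptions, not values from a shipped example:

```yaml
apiVersion: metering.openshift.io/v1
kind: Report
metadata:
  name: namespace-cpu-usage-weekly-aggregate
spec:
  query: "namespace-cpu-usage"
  schedule:
    period: "weekly"
  inputs:
  - name: "NamespaceCPUUsageReportName"
    value: "namespace-cpu-usage-hourly"
```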
5.1.1.10.1. Report status
The execution of a scheduled report can be tracked using its status field. Any errors occurring during the preparation of a report will be recorded here.
The status field of a scheduled Report resource displays the following:
- conditions: A list of conditions, each of which has a type, status, reason, and message field. Possible values of a condition's type field are Running and Failure, indicating the current state of the scheduled report. The reason indicates why the condition is in its current state, with the status being either true, false, or unknown. The message provides a human-readable explanation of why the condition is in its current state. For detailed information on the reason values, see pkg/apis/metering/v1/util/report_util.go.
- lastReportTime: Indicates the time metering has collected data up to.
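The shape of the status field can be sketched as follows. This is an illustrative example rather than output captured from a cluster, and the reason, message, and timestamp values are assumptions:

```yaml
status:
  conditions:
  - type: Running
    status: "true"
    reason: Scheduled
    message: Report is running
  lastReportTime: "2021-07-01T01:00:00Z"
```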
5.2. Storage locations
A StorageLocation custom resource configures where data will be stored by the Reporting Operator. This includes the data collected from Prometheus and the results produced by generating a Report custom resource.
You only need to configure a StorageLocation custom resource if you want to store data in multiple locations, such as multiple S3 buckets or both S3 and HDFS, or if you wish to access a database in Hive and Presto that was not created by metering.
5.2.1. Storage location examples
The following example shows the built-in local storage option, and is configured to use Hive. By default, data is stored wherever Hive is configured to use storage, such as HDFS, S3, or a ReadWriteMany persistent volume claim (PVC).
Local storage example
apiVersion: metering.openshift.io/v1
kind: StorageLocation
metadata:
  name: hive
  labels:
    operator-metering: "true"
spec:
  hive:
    databaseName: metering
    unmanagedDatabase: false
1. If the hive section is present, then the StorageLocation resource will be configured to store data in Presto by creating the table using the Hive server. Only databaseName and unmanagedDatabase are required fields.
2. The name of the database within Hive.
3. If true, the StorageLocation resource will not be actively managed, and the databaseName is expected to already exist in Hive. If false, the Reporting Operator will create the database in Hive.
The following example uses an AWS S3 bucket for storage. The prefix is appended to the bucket name when constructing the path to use.
Remote storage example
apiVersion: metering.openshift.io/v1
kind: StorageLocation
metadata:
  name: example-s3-storage
  labels:
    operator-metering: "true"
spec:
  hive:
    databaseName: example_s3_storage
    unmanagedDatabase: false
    location: "s3a://bucket-name/path/within/bucket"
1. Optional: The filesystem URL for Presto and Hive to use for the database. This can be an hdfs:// or s3a:// filesystem URL.
There are additional optional fields that can be specified in the hive section:
- defaultTableProperties: Contains configuration options for creating tables using Hive.
  - fileFormat: The file format used for storing files in the filesystem. See the Hive documentation on File Storage Format for a list of options and more details.
  - rowFormat: Controls the Hive row format. This controls how Hive serializes and deserializes rows. See the Hive documentation on Row Formats and SerDe for more details.
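A StorageLocation using these optional fields might look like the following sketch. The resource name, database name, and the fileFormat value "orc" are illustrative assumptions; consult the Hive documentation for the file formats your deployment supports:

```yaml
apiVersion: metering.openshift.io/v1
kind: StorageLocation
metadata:
  name: example-orc-storage
  labels:
    operator-metering: "true"
spec:
  hive:
    databaseName: example_orc_storage
    unmanagedDatabase: false
    defaultTableProperties:
      fileFormat: "orc"
```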
5.2.2. Default storage location
If an annotation storagelocation.metering.openshift.io/is-default exists and is set to true on a StorageLocation resource, then that resource becomes the default storage resource. Any components with a storage configuration option that does not specify a storage location will use the default.
Default storage example
apiVersion: metering.openshift.io/v1
kind: StorageLocation
metadata:
  name: example-s3-storage
  labels:
    operator-metering: "true"
  annotations:
    storagelocation.metering.openshift.io/is-default: "true"
spec:
  hive:
    databaseName: example_s3_storage
    unmanagedDatabase: false
    location: "s3a://bucket-name/path/within/bucket"