This documentation is for a release that is no longer maintained. See the documentation for the latest supported version 3 or the latest supported version 4.

Chapter 5. Reports
5.1. About Reports
Metering is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.
For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.
A Report custom resource provides a method to manage periodic extract, transform, and load (ETL) jobs using SQL queries. Reports are composed from other metering resources, such as ReportQuery resources, which provide the actual SQL query to run, and ReportDataSource resources, which define the data available to the ReportQuery and Report resources.
			
				Many use cases are addressed by the predefined ReportQuery and ReportDataSource resources that come installed with metering. Therefore, you do not need to define your own unless you have a use case that is not covered by these predefined resources.
			
5.1.1. Reports
					The Report custom resource is used to manage the execution and status of reports. Metering produces reports derived from usage data sources, which can be used in further analysis and filtering. A single Report resource represents a job that manages a database table and updates it with new information according to a schedule. The report exposes the data in that table via the Reporting Operator HTTP API.
				
Reports with a spec.schedule field set are always running and track the time periods for which they have collected data. This ensures that if metering is shut down or unavailable for an extended period of time, it backfills the data starting where it left off. If the schedule is unset, then the report runs once, for the time specified by the reportingStart and reportingEnd fields. By default, reports wait for ReportDataSource resources to fully import any data covered in the reporting period. If the report has a schedule, it waits to run until the data in the period currently being processed has finished importing.
				
5.1.1.1. Example report with a schedule
The following example Report object contains information on every pod’s CPU requests, and runs every hour, adding the last hour’s worth of data each time it runs.
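The example object itself is missing from this extraction. A minimal sketch of such a scheduled report follows; the report name, query name, and reportingStart timestamp are illustrative assumptions, not values from this document:

```yaml
# Sketch of an hourly pod CPU request report (names and dates are assumed).
apiVersion: metering.openshift.io/v1
kind: Report
metadata:
  name: pod-cpu-request-hourly
  namespace: openshift-metering
spec:
  query: "pod-cpu-request"          # assumed ReportQuery name
  reportingStart: "2021-07-01T00:00:00Z"
  schedule:
    period: "hourly"                # add the last hour's worth of data each run
    hourly:
      minute: 0
      second: 0
```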
					
5.1.1.2. Example report without a schedule (run-once)
						The following example Report object contains information on every pod’s CPU requests for all of July. After completion, it does not run again.
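The example object is elided here as well. A run-once report of this kind might be sketched as follows; the names and the exact July timestamps are assumptions:

```yaml
# Sketch of a run-once report covering July (names and dates are assumed).
apiVersion: metering.openshift.io/v1
kind: Report
metadata:
  name: pod-cpu-request-july
  namespace: openshift-metering
spec:
  query: "pod-cpu-request"          # assumed ReportQuery name
  reportingStart: "2021-07-01T00:00:00Z"
  reportingEnd: "2021-07-31T00:00:00Z"
  runImmediately: true              # run as soon as the data is available
```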
					
5.1.1.3. query
						The query field names the ReportQuery resource used to generate the report. The report query controls the schema of the report as well as how the results are processed.
					
						query is a required field.
					
						Use the following command to list available ReportQuery resources:
					
$ oc -n openshift-metering get reportqueries

Example output
						Report queries with the -raw suffix are used by other ReportQuery resources to build more complex queries, and should not be used directly for reports.
					
						namespace- prefixed queries aggregate pod CPU and memory requests by namespace, providing a list of namespaces and their overall usage based on resource requests.
					
						pod- prefixed queries are similar to namespace- prefixed queries but aggregate information by pod rather than namespace. These queries include the pod’s namespace and node.
					
						node- prefixed queries return information about each node’s total available resources.
					
						aws- prefixed queries are specific to AWS. Queries suffixed with -aws return the same data as queries of the same name without the suffix, and correlate usage with the EC2 billing data.
					
						The aws-ec2-billing-data report is used by other queries, and should not be used as a standalone report. The aws-ec2-cluster-cost report provides a total cost based on the nodes included in the cluster, and the sum of their costs for the time period being reported on.
					
						Use the following command to get the ReportQuery resource as YAML, and check the spec.columns field. For example, run:
					
$ oc -n openshift-metering get reportqueries namespace-memory-request -o yaml

Example output
5.1.1.4. schedule
						The spec.schedule configuration block defines when the report runs. The main fields in the schedule section are period, and then depending on the value of period, the fields hourly, daily, weekly, and monthly allow you to fine-tune when the report runs.
					
						For example, if period is set to weekly, you can add a weekly field to the spec.schedule block. The following example will run once a week on Wednesday, at 1 PM (hour 13 in the day).
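The example schedule block is missing from this extraction; based on the description above, it might look like the following sketch:

```yaml
# Weekly schedule: Wednesday at 1 PM (hour 13).
spec:
  schedule:
    period: "weekly"
    weekly:
      dayOfWeek: "wednesday"   # day of the week, spelled out
      hour: 13
      minute: 0
      second: 0
```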
					
5.1.1.4.1. period
							Valid values of schedule.period are listed below, and the options available to set for a given period are also listed.
						
- hourly
  - minute
  - second
- daily
  - hour
  - minute
  - second
- weekly
  - dayOfWeek
  - hour
  - minute
  - second
- monthly
  - dayOfMonth
  - hour
  - minute
  - second
- cron
  - expression
Generally, the hour, minute, and second fields control when in the day the report runs, and dayOfWeek or dayOfMonth controls which day of the week or day of the month the report runs on, for weekly and monthly report periods.
						
For each of these fields, there is a range of valid values:
- hour is an integer value between 0-23.
- minute is an integer value between 0-59.
- second is an integer value between 0-59.
- dayOfWeek is a string value that expects the day of the week (spelled out).
- dayOfMonth is an integer value between 1-31.

For cron periods, normal cron expressions are valid:

- expression: "*/5 * * * *"
5.1.1.5. reportingStart
To support running a report against existing data, you can set the spec.reportingStart field to an RFC3339 timestamp to tell the report to run according to its schedule starting from reportingStart rather than the current time.
					
							Setting the spec.reportingStart field to a specific time will result in the Reporting Operator running many queries in succession for each interval in the schedule that is between the reportingStart time and the current time. This could be thousands of queries if the period is less than daily and the reportingStart is more than a few months back. If reportingStart is left unset, the report will run at the next full reportingPeriod after the time the report is created.
						
						As an example of how to use this field, if you had data already collected dating back to January 1st, 2019 that you want to include in your Report object, you can create a report with the following values:
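The example values are elided in this extraction; a fragment consistent with the description (the schedule period chosen here is an assumption) would be:

```yaml
spec:
  reportingStart: "2019-01-01T00:00:00Z"   # RFC3339 timestamp of existing data
  schedule:
    period: "hourly"
```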
					
5.1.1.6. reportingEnd
						To configure a report to only run until a specified time, you can set the spec.reportingEnd field to an RFC3339 timestamp. The value of this field will cause the report to stop running on its schedule after it has finished generating reporting data for the period covered from its start time until reportingEnd.
					
						Because a schedule will most likely not align with the reportingEnd, the last period in the schedule will be shortened to end at the specified reportingEnd time. If left unset, then the report will run forever, or until a reportingEnd is set on the report.
					
For example, if you want to create a report that runs once a week for the month of July:
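The example object is missing here; such a report might be sketched as follows, with the query name and exact timestamps as assumptions:

```yaml
spec:
  query: "pod-cpu-request"   # assumed ReportQuery name
  schedule:
    period: "weekly"
  reportingStart: "2021-07-01T00:00:00Z"
  reportingEnd: "2021-07-31T00:00:00Z"
```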
5.1.1.7. expiration
						Add the expiration field to set a retention period on a scheduled metering report. You can avoid manually removing the report by setting the expiration duration value. The retention period is equal to the Report object creationDate plus the expiration duration. The report is removed from the cluster at the end of the retention period if no other reports or report queries depend on the expiring report. Deleting the report from the cluster can take several minutes.
					
							Setting the expiration field is not recommended for roll-up or aggregated reports. If a report is depended upon by other reports or report queries, then the report is not removed at the end of the retention period. You can view the report-operator logs at debug level for the timing output around a report retention decision.
						
						For example, the following scheduled report is deleted 30 minutes after the creationDate of the report:
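The example object is elided in this extraction; a sketch consistent with the description (the report and query names are assumptions) would be:

```yaml
apiVersion: metering.openshift.io/v1
kind: Report
metadata:
  name: namespace-cpu-usage-hourly   # assumed name
spec:
  query: "namespace-cpu-usage"       # assumed ReportQuery name
  schedule:
    period: "hourly"
  expiration: "30m" # 1
```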
					
- 1
- Valid time units for the expiration duration are ns, us (or µs), ms, s, m, and h.
							The expiration retention period for a Report object is not precise and works on the order of several minutes, not nanoseconds.
						
5.1.1.8. runImmediately
						When runImmediately is set to true, the report runs immediately. This behavior ensures that the report is immediately processed and queued without requiring additional scheduling parameters.
					
							When runImmediately is set to true, you must set a reportingEnd and reportingStart value.
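Combining these fields, a run-immediately report fragment might be sketched as follows (the timestamps are assumptions):

```yaml
spec:
  runImmediately: true
  reportingStart: "2021-07-01T00:00:00Z"
  reportingEnd: "2021-07-31T00:00:00Z"
```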
						
5.1.1.9. inputs
						The spec.inputs field of a Report object can be used to override or set values defined in a ReportQuery resource’s spec.inputs field.
					
						spec.inputs is a list of name-value pairs:
					
spec:
  inputs:
  - name: "NamespaceCPUUsageReportName"
    value: "namespace-cpu-usage-hourly"

5.1.1.10. Roll-up reports
Report data is stored in the database much like the metrics themselves, and can therefore be used in aggregated or roll-up reports. A simple use case for a roll-up report is to spread the time required to produce a report over a longer period of time, instead of requiring a single monthly report to query and aggregate an entire month of data. For example, the task can be split into daily reports that each run over 1/30 of the data.
						A custom roll-up report requires a custom report query. The ReportQuery resource template processor provides a reportTableName function that can get the necessary table name from a Report object’s metadata.name.
					
Below is a snippet taken from a built-in query:
pod-cpu.yaml
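The snippet itself is elided in this extraction. A reportTableName reference in a query template generally has the following shape; the input name, column names, and template delimiters shown here are assumptions, so consult the actual pod-cpu.yaml built-in query:

```yaml
# Hypothetical ReportQuery fragment: reportTableName resolves a Report
# name passed as an input into that report's database table name.
spec:
  inputs:
  - name: NamespaceCPUUsageReportName
    type: Report
  query: |
    SELECT namespace, sum(pod_request_cpu_core_seconds) AS pod_request_cpu_core_seconds
    FROM {| .Report.Inputs.NamespaceCPUUsageReportName | reportTableName |}
    GROUP BY namespace
```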
Example aggregated-report.yaml roll-up report
spec:
  query: "namespace-cpu-usage"
  inputs:
  - name: "NamespaceCPUUsageReportName"
    value: "namespace-cpu-usage-hourly"

5.1.1.10.1. Report status
The execution of a scheduled report can be tracked using its status field. Any errors occurring during the preparation of a report will be recorded here.
							The status field of a Report object currently has two fields:
						
- conditions: A list of conditions, each of which has a type, status, reason, and message field. Possible values of a condition’s type field are Running and Failure, indicating the current state of the scheduled report. The reason indicates why its condition is in its current state, with the status being either true, false, or unknown. The message provides a human-readable indication of why the condition is in its current state. For detailed information on the reason values, see pkg/apis/metering/v1/util/report_util.go.
- lastReportTime: Indicates the time up to which metering has collected data.
5.2. Storage locations
				A StorageLocation custom resource configures where data will be stored by the Reporting Operator. This includes the data collected from Prometheus, and the results produced by generating a Report custom resource.
			
You only need to configure a StorageLocation custom resource if you want to store data in multiple locations, like multiple S3 buckets or both S3 and HDFS, or if you wish to access a database in Hive or Presto that was not created by metering. For most users this is not a requirement, and the documentation on configuring metering is sufficient to configure all necessary storage components.
			
5.2.1. Storage location examples
					The following example shows the built-in local storage option, and is configured to use Hive. By default, data is stored wherever Hive is configured to use storage, such as HDFS, S3, or a ReadWriteMany persistent volume claim (PVC).
				
Local storage example
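The example object is missing from this extraction; a sketch matching the callouts below (the resource and database names are assumptions) would be:

```yaml
apiVersion: metering.openshift.io/v1
kind: StorageLocation
metadata:
  name: hive
  labels:
    operator-metering: "true"
spec:
  hive:                            # 1
    databaseName: metastore_db     # 2
    unmanagedDatabase: false       # 3
```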
- 1
- If the hive section is present, then the StorageLocation resource will be configured to store data in Presto by creating the table using the Hive server. Only databaseName and unmanagedDatabase are required fields.
- 2
- The name of the database within Hive.
- 3
- If true, the StorageLocation resource will not be actively managed, and the databaseName is expected to already exist in Hive. If false, the Reporting Operator will create the database in Hive.
The following example uses an AWS S3 bucket for storage. The prefix is appended to the bucket name when constructing the path to use.
Remote storage example
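The example object is elided here; a sketch matching the callout below (the bucket name, path, and database name are assumptions) would be:

```yaml
apiVersion: metering.openshift.io/v1
kind: StorageLocation
metadata:
  name: example-s3-storage
  labels:
    operator-metering: "true"
spec:
  hive:
    databaseName: example_s3_db
    unmanagedDatabase: false
    location: "s3a://bucket-name/path/within/bucket" # 1
```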
- 1
- Optional: The filesystem URL for Presto and Hive to use for the database. This can be an hdfs:// or s3a:// filesystem URL.
					There are additional optional fields that can be specified in the hive section:
				
- 
							defaultTableProperties: Contains configuration options for creating tables using Hive.
- 
							fileFormat: The file format used for storing files in the filesystem. See the Hive Documentation on File Storage Format for a list of options and more details.
- 
							rowFormat: Controls the Hive row format. This controls how Hive serializes and deserializes rows. See the Hive Documentation on Row Formats and SerDe for more details.
5.2.2. Default storage location
					If an annotation storagelocation.metering.openshift.io/is-default exists and is set to true on a StorageLocation resource, then that resource becomes the default storage resource. Any components with a storage configuration option where the storage location is not specified will use the default storage resource. There can be only one default storage resource. If more than one resource with the annotation exists, an error is logged because the Reporting Operator cannot determine the default.
				
Default storage example
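The example object is missing from this extraction; a sketch consistent with the description (the resource and database names are assumptions) would be:

```yaml
apiVersion: metering.openshift.io/v1
kind: StorageLocation
metadata:
  name: example-default
  annotations:
    storagelocation.metering.openshift.io/is-default: "true"
spec:
  hive:
    databaseName: default_db
    unmanagedDatabase: false
```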