Chapter 4. Configuring metering
4.1. About configuring metering
Metering is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.
For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.
The MeteringConfig custom resource specifies all the configuration details for your metering installation. When you first install the metering stack, a default MeteringConfig custom resource is generated. Use the examples in the documentation to modify this default file. Keep in mind the following key points:
- At a minimum, you need to configure persistent storage and configure the Hive metastore.
- Most default configuration settings work, but larger deployments or highly customized deployments should review all configuration options carefully.
- Some configuration options cannot be modified after installation.
For configuration options that can be modified after installation, make the changes in your MeteringConfig custom resource and reapply the file.
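For example, assuming your modified configuration is saved in a file named metering-config.yaml (the filename is illustrative), you can reapply it with the OpenShift CLI:
$ oc apply -f metering-config.yaml -n openshift-metering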
4.2. Common configuration options
Metering is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.
For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.
4.2.1. Resource requests and limits
You can adjust the CPU, memory, and storage resource requests and limits for pods and volumes. The default-resource-limits.yaml file below provides an example of setting resource requests and limits for each component.
apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  reporting-operator:
    spec:
      resources:
        limits:
          cpu: 1
          memory: 500Mi
        requests:
          cpu: 500m
          memory: 100Mi
  presto:
    spec:
      coordinator:
        resources:
          limits:
            cpu: 4
            memory: 4Gi
          requests:
            cpu: 2
            memory: 2Gi
      worker:
        replicas: 0
        resources:
          limits:
            cpu: 8
            memory: 8Gi
          requests:
            cpu: 4
            memory: 2Gi
  hive:
    spec:
      metastore:
        resources:
          limits:
            cpu: 4
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 650Mi
        storage:
          class: null
          create: true
          size: 5Gi
      server:
        resources:
          limits:
            cpu: 1
            memory: 1Gi
          requests:
            cpu: 500m
            memory: 500Mi
4.2.2. Node selectors
You can run the metering components on specific sets of nodes. Set the nodeSelector on a metering component to control where the component is scheduled. The node-selectors.yaml file below provides an example of setting node selectors for each component.
Add the openshift.io/node-selector: "" namespace annotation to the metering namespace YAML file before configuring specific node selectors for the operand pods. Specify "" as the annotation value.
apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  reporting-operator:
    spec:
      nodeSelector:
        "node-role.kubernetes.io/infra": "" 1
  presto:
    spec:
      coordinator:
        nodeSelector:
          "node-role.kubernetes.io/infra": "" 2
      worker:
        nodeSelector:
          "node-role.kubernetes.io/infra": "" 3
  hive:
    spec:
      metastore:
        nodeSelector:
          "node-role.kubernetes.io/infra": "" 4
      server:
        nodeSelector:
          "node-role.kubernetes.io/infra": "" 5
- 1 2 3 4 5
- Add the openshift.io/node-selector: "" namespace annotation to the metering namespace YAML file before configuring specific node selectors for the operand pods. When the openshift.io/node-selector annotation is set on the project, the value is used in preference to the value of the spec.defaultNodeSelector field in the cluster-wide Scheduler object.
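For reference, a minimal sketch of the metering namespace object with this annotation might look like the following, assuming the default openshift-metering namespace:
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-metering
  annotations:
    # Empty value so that the namespace annotation does not add its own node selector.
    openshift.io/node-selector: ""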
Verification
You can verify the metering node selectors by performing any of the following checks:
Verify that all pods for metering are correctly scheduled on the nodes that are configured in the MeteringConfig custom resource:
Check all pods in the openshift-metering namespace:
$ oc --namespace openshift-metering get pods -o wide
The output shows the NODE and corresponding IP for each pod running in the openshift-metering namespace.
Example output
NAME                                  READY   STATUS    RESTARTS   AGE     IP            NODE                                         NOMINATED NODE   READINESS GATES
hive-metastore-0                      1/2     Running   0          4m33s   10.129.2.26   ip-10-0-210-167.us-east-2.compute.internal   <none>           <none>
hive-server-0                         2/3     Running   0          4m21s   10.128.2.26   ip-10-0-150-175.us-east-2.compute.internal   <none>           <none>
metering-operator-964b4fb55-4p699     2/2     Running   0          7h30m   10.131.0.33   ip-10-0-189-6.us-east-2.compute.internal     <none>           <none>
nfs-server                            1/1     Running   0          7h30m   10.129.2.24   ip-10-0-210-167.us-east-2.compute.internal   <none>           <none>
presto-coordinator-0                  2/2     Running   0          4m8s    10.131.0.35   ip-10-0-189-6.us-east-2.compute.internal     <none>           <none>
reporting-operator-869b854c78-8g2x5   1/2     Running   0          7h27m   10.128.2.25   ip-10-0-150-175.us-east-2.compute.internal   <none>           <none>
Compare the nodes in the openshift-metering namespace to each node NAME in your cluster:
$ oc get nodes
Example output
NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-0-147-106.us-east-2.compute.internal   Ready    master   14h   v1.21.0+6025c28
ip-10-0-150-175.us-east-2.compute.internal   Ready    worker   14h   v1.21.0+6025c28
ip-10-0-175-23.us-east-2.compute.internal    Ready    master   14h   v1.21.0+6025c28
ip-10-0-189-6.us-east-2.compute.internal     Ready    worker   14h   v1.21.0+6025c28
ip-10-0-205-158.us-east-2.compute.internal   Ready    master   14h   v1.21.0+6025c28
ip-10-0-210-167.us-east-2.compute.internal   Ready    worker   14h   v1.21.0+6025c28
Verify that the node selector configuration in the MeteringConfig custom resource does not interfere with the cluster-wide node selector configuration such that no metering operand pods are scheduled.
Check the cluster-wide Scheduler object for the spec.defaultNodeSelector field, which shows where pods are scheduled by default:
$ oc get schedulers.config.openshift.io cluster -o yaml
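For illustration, a cluster-wide default node selector, if one is set, appears in the Scheduler object similar to the following sketch; the selector value shown is hypothetical:
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
spec:
  # A non-empty value here applies to pods in namespaces that do not set their own node selector.
  defaultNodeSelector: node-role.kubernetes.io/infra=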
4.3. Configuring persistent storage
Metering is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.
For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.
Metering requires persistent storage to persist data collected by the Metering Operator and to store the results of reports. A number of different storage providers and storage formats are supported. Select your storage provider and modify the example configuration files to configure persistent storage for your metering installation.
4.3.1. Storing data in Amazon S3
Metering can use an existing Amazon S3 bucket or create a bucket for storage.
Metering does not manage or delete any S3 bucket data. You must manually clean up S3 buckets that are used to store metering data.
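For example, you might remove old metering data from a bucket with the AWS CLI. The bucket name and path below are placeholders, and the command permanently deletes objects, so verify them before running it:
$ aws s3 rm s3://<your-metering-bucket>/<path> --recursive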
Procedure
Edit the spec.storage section in the s3-storage.yaml file:
Example s3-storage.yaml file
apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  storage:
    type: "hive"
    hive:
      type: "s3"
      s3:
        bucket: "bucketname/path/" 1
        region: "us-west-1" 2
        secretName: "my-aws-secret" 3
        # Set to false if you want to provide an existing bucket, instead of
        # having metering create the bucket on your behalf.
        createBucket: true 4
- 1
- Specify the name of the bucket where you would like to store your data. Optional: Specify the path within the bucket.
- 2
- Specify the region of your bucket.
- 3
- The name of a secret in the metering namespace containing the AWS credentials in the data.aws-access-key-id and data.aws-secret-access-key fields. See the example Secret object below for more details.
- 4
- Set this field to false if you want to provide an existing S3 bucket, or if you do not want to provide IAM credentials that have CreateBucket permissions.
Use the following Secret object as a template:
Example AWS Secret object
apiVersion: v1
kind: Secret
metadata:
  name: my-aws-secret
data:
  aws-access-key-id: "dGVzdAo="
  aws-secret-access-key: "c2VjcmV0Cg=="
Note: The values of the aws-access-key-id and aws-secret-access-key must be base64 encoded.
Create the secret:
$ oc create secret -n openshift-metering generic my-aws-secret \
    --from-literal=aws-access-key-id=my-access-key \
    --from-literal=aws-secret-access-key=my-secret-key
Note: This command automatically base64 encodes your aws-access-key-id and aws-secret-access-key values.
The aws-access-key-id and aws-secret-access-key credentials must have read and write access to the bucket. The following aws/read-write.json file shows an IAM policy that grants the required permissions:
Example aws/read-write.json file
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Allow",
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:DeleteObject",
        "s3:GetObject",
        "s3:HeadBucket",
        "s3:ListBucket",
        "s3:ListMultipartUploadParts",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::operator-metering-data/*",
        "arn:aws:s3:::operator-metering-data"
      ]
    }
  ]
}
If spec.storage.hive.s3.createBucket is set to true or unset in your s3-storage.yaml file, then you should use the aws/read-write-create.json file that contains permissions for creating and deleting buckets:
Example aws/read-write-create.json file
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Allow",
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:DeleteObject",
        "s3:GetObject",
        "s3:HeadBucket",
        "s3:ListBucket",
        "s3:CreateBucket",
        "s3:DeleteBucket",
        "s3:ListMultipartUploadParts",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::operator-metering-data/*",
        "arn:aws:s3:::operator-metering-data"
      ]
    }
  ]
}
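If you manage IAM manually, one way to attach such a policy to the IAM user that owns these credentials is the aws iam put-user-policy command; the user name and policy name below are placeholders:
$ aws iam put-user-policy --user-name <metering-user> --policy-name metering-s3-access \
    --policy-document file://aws/read-write-create.json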
4.3.2. Storing data in S3-compatible storage
You can use S3-compatible storage such as Noobaa.
Procedure
Edit the spec.storage section in the s3-compatible-storage.yaml file:
Example s3-compatible-storage.yaml file
apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  storage:
    type: "hive"
    hive:
      type: "s3Compatible"
      s3Compatible:
        bucket: "bucketname" 1
        endpoint: "http://example:port-number" 2
        secretName: "my-aws-secret" 3
- 1
- Specify the name of your S3-compatible bucket.
- 2
- Specify the endpoint for your storage.
- 3
- The name of a secret in the metering namespace containing the credentials in the data.aws-access-key-id and data.aws-secret-access-key fields. See the example Secret object below for more details.
Use the following Secret object as a template:
Example S3-compatible Secret object
apiVersion: v1
kind: Secret
metadata:
  name: my-aws-secret
data:
  aws-access-key-id: "dGVzdAo="
  aws-secret-access-key: "c2VjcmV0Cg=="
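As with Amazon S3, you can create this secret from literal values instead of base64 encoding them yourself; the access key values below are placeholders:
$ oc create secret -n openshift-metering generic my-aws-secret \
    --from-literal=aws-access-key-id=my-access-key \
    --from-literal=aws-secret-access-key=my-secret-key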
4.3.3. Storing data in Microsoft Azure
To store data in Azure blob storage, you must use an existing container.
Procedure
Edit the spec.storage section in the azure-blob-storage.yaml file:
Example azure-blob-storage.yaml file
apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  storage:
    type: "hive"
    hive:
      type: "azure"
      azure:
        container: "bucket1" 1
        secretName: "my-azure-secret" 2
        rootDirectory: "/testDir" 3
- 1
- Specify the container name.
- 2
- The name of a secret in the metering namespace. See the example Secret object below for more details.
- 3
- Optional: Specify the directory where you would like to store your data.
Use the following Secret object as a template:
Example Azure Secret object
apiVersion: v1
kind: Secret
metadata:
  name: my-azure-secret
data:
  azure-storage-account-name: "dGVzdAo="
  azure-secret-access-key: "c2VjcmV0Cg=="
Create the secret:
$ oc create secret -n openshift-metering generic my-azure-secret \
    --from-literal=azure-storage-account-name=my-storage-account-name \
    --from-literal=azure-secret-access-key=my-secret-key
4.3.4. Storing data in Google Cloud Storage
To store your data in Google Cloud Storage, you must use an existing bucket.
Procedure
Edit the spec.storage section in the gcs-storage.yaml file:
Example gcs-storage.yaml file
apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  storage:
    type: "hive"
    hive:
      type: "gcs"
      gcs:
        bucket: "metering-gcs/test1" 1
        secretName: "my-gcs-secret" 2
- 1
- Specify the name of the bucket. Optional: Specify the path within the bucket where you would like to store your data.
- 2
- The name of a secret in the metering namespace. See the example Secret object below for more details.
Use the following Secret object as a template:
Example Google Cloud Storage Secret object
apiVersion: v1
kind: Secret
metadata:
  name: my-gcs-secret
data:
  gcs-service-account.json: "c2VjcmV0Cg=="
Create the secret:
$ oc create secret -n openshift-metering generic my-gcs-secret \
    --from-file gcs-service-account.json=/path/to/my/service-account-key.json
4.4. Configuring the Hive metastore
Metering is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.
For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.
Hive metastore is responsible for storing all the metadata about the database tables created in Presto and Hive. By default, the metastore stores this information in a local embedded Derby database in a persistent volume attached to the pod.
Generally, the default configuration of the Hive metastore works for small clusters, but you might want to improve performance or move storage requirements out of the cluster by using a dedicated SQL database for storing the Hive metastore data.
4.4.1. Configuring persistent volumes
By default, Hive requires one persistent volume to operate.
hive-metastore-db-data is the main persistent volume claim (PVC) required by default. This PVC is used by the Hive metastore to store metadata about tables, such as table name, columns, and location. The Hive metastore is used by Presto and the Hive server to look up table metadata when processing queries. You can remove this requirement by using MySQL or PostgreSQL for the Hive metastore database.
To install the Hive metastore, either dynamic volume provisioning must be enabled through a storage class, a persistent volume of the correct size must be manually pre-created, or you must use a pre-existing MySQL or PostgreSQL database.
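For example, you can check whether a storage class that supports dynamic provisioning is available in your cluster before installing:
$ oc get storageclass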
4.4.1.1. Configuring the storage class for the Hive metastore
To configure and specify a storage class for the hive-metastore-db-data persistent volume claim, specify the storage class in your MeteringConfig custom resource. An example storage section with the class field is included in the metastore-storage.yaml file below.
apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
name: "operator-metering"
spec:
hive:
spec:
metastore:
storage:
# Default is null, which means using the default storage class if it exists.
# If you wish to use a different storage class, specify it here
# class: "null" 1
size: "5Gi"
- 1
- Uncomment this line and replace null with the name of the storage class to use. Leaving the value null will cause metering to use the default storage class for the cluster.
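For example, a sketch of the same section with a named storage class; the gp2 class name is illustrative and must exist in your cluster:
spec:
  hive:
    spec:
      metastore:
        storage:
          # Use a specific storage class instead of the cluster default.
          class: "gp2"
          size: "5Gi"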
4.4.1.2. Configuring the volume size for the Hive metastore
Use the metastore-storage.yaml file below as a template to configure the volume size for the Hive metastore.
apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
name: "operator-metering"
spec:
hive:
spec:
metastore:
storage:
# Default is null, which means using the default storage class if it exists.
# If you wish to use a different storage class, specify it here
# class: "null"
size: "5Gi" 1
- 1
- Replace the value for size with your desired capacity. The example file shows "5Gi".
4.4.2. Using MySQL or PostgreSQL for the Hive metastore
The default installation of metering configures Hive to use an embedded Java database called Derby. This is unsuited for larger environments and can be replaced with either a MySQL or PostgreSQL database. Use the following example configuration files if your deployment requires a MySQL or PostgreSQL database for Hive.
There are three configuration options you can use to control the database that is used by the Hive metastore: url, driver, and secretName.
Create your MySQL or PostgreSQL instance with a user name and password. Then create a secret by using the OpenShift CLI (oc) or a YAML file. The secretName you create for this secret must map to the spec.hive.spec.config.db.secretName field in the MeteringConfig custom resource.
Procedure
Create a secret by using the OpenShift CLI (oc) or by using a YAML file:
Create a secret by using the following command:
$ oc --namespace openshift-metering create secret generic <YOUR_SECRETNAME> --from-literal=username=<YOUR_DATABASE_USERNAME> --from-literal=password=<YOUR_DATABASE_PASSWORD>
Create a secret by using a YAML file. For example:
apiVersion: v1
kind: Secret
metadata:
  name: <YOUR_SECRETNAME> 1
data:
  username: <BASE64_ENCODED_DATABASE_USERNAME> 2
  password: <BASE64_ENCODED_DATABASE_PASSWORD> 3
- 1
- The name of the secret. This must match the spec.hive.spec.config.db.secretName field in your MeteringConfig custom resource.
- 2
- The base64-encoded database user name.
- 3
- The base64-encoded database password.
Create a configuration file to use a MySQL or PostgreSQL database for Hive:
To use a MySQL database for Hive, use the example configuration file below. Metering supports configuring the internal Hive metastore to use MySQL server versions 5.6, 5.7, and 8.0.
spec:
  hive:
    spec:
      metastore:
        storage:
          create: false
      config:
        db:
          url: "jdbc:mysql://mysql.example.com:3306/hive_metastore" 1
          driver: "com.mysql.cj.jdbc.Driver"
          secretName: "REPLACEME" 2
- 1
- The JDBC URL for your MySQL database.
- 2
- The name of the secret that contains the base64-encoded database username and password credentials.
Note: When configuring metering to work with older MySQL server versions, such as 5.6 or 5.7, you might need to add the enabledTLSProtocols JDBC URL parameter when configuring the internal Hive metastore. You can pass additional JDBC parameters by using the spec.hive.spec.config.db.url field, as shown in the example URL after this section. For more details, see the MySQL Connector/J 8.0 documentation.
To use a PostgreSQL database for Hive, use the example configuration file below:
spec:
  hive:
    spec:
      metastore:
        storage:
          create: false
      config:
        db:
          url: "jdbc:postgresql://postgresql.example.com:5432/hive_metastore"
          driver: "org.postgresql.Driver"
          username: "REPLACEME"
          password: "REPLACEME"
You can pass additional JDBC parameters by using the spec.hive.spec.config.db.url field. For more details, see the PostgreSQL JDBC driver documentation.
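For example, a hypothetical MySQL URL that passes the enabledTLSProtocols parameter mentioned in the note above might look like this sketch:
spec:
  hive:
    spec:
      config:
        db:
          # Appending JDBC parameters to the URL; the TLS version shown is illustrative.
          url: "jdbc:mysql://mysql.example.com:3306/hive_metastore?enabledTLSProtocols=TLSv1.2"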
4.5. Configuring the Reporting Operator
Metering is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.
For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.
The Reporting Operator is responsible for collecting data from Prometheus, storing the metrics in Presto, running report queries against Presto, and exposing the results via an HTTP API. Configuring the Reporting Operator is primarily done in your MeteringConfig custom resource.
4.5.1. Securing a Prometheus connection
When you install metering on OpenShift Container Platform, Prometheus is available at https://prometheus-k8s.openshift-monitoring.svc:9091/.
To secure the connection to Prometheus, the default metering installation uses the OpenShift Container Platform certificate authority (CA). If your Prometheus instance uses a different CA, you can inject the CA through a config map. You can also configure the Reporting Operator to use a specified bearer token to authenticate with Prometheus.
Procedure
Inject the CA that your Prometheus instance uses through a config map. For example:
spec:
  reporting-operator:
    spec:
      config:
        prometheus:
          certificateAuthority:
            useServiceAccountCA: false
            configMap:
              enabled: true
              create: true
              name: reporting-operator-certificate-authority-config
              filename: "internal-ca.crt"
              value: |
                -----BEGIN CERTIFICATE-----
                (snip)
                -----END CERTIFICATE-----
Alternatively, to use the system certificate authorities for publicly valid certificates, set both useServiceAccountCA and configMap.enabled to false.
Specify a bearer token to authenticate with Prometheus. For example:
spec:
  reporting-operator:
    spec:
      config:
        prometheus:
          metricsImporter:
            auth:
              useServiceAccountToken: false
              tokenSecret:
                enabled: true
                create: true
                value: "abc-123"
4.5.2. Exposing the reporting API
On OpenShift Container Platform the default metering installation automatically exposes a route, making the reporting API available. This provides the following features:
- Automatic DNS
- Automatic TLS based on the cluster CA
The default installation also makes it possible to use the OpenShift Container Platform service serving certificates to protect the reporting API with TLS. The OpenShift Container Platform OAuth proxy is deployed as a sidecar container for the Reporting Operator, which protects the reporting API with authentication.
4.5.2.1. Using OpenShift Container Platform Authentication
By default, the reporting API is secured with TLS and authentication. This is done by configuring the Reporting Operator to deploy a pod containing both the Reporting Operator’s container, and a sidecar container running OpenShift Container Platform auth-proxy.
To access the reporting API, the Metering Operator exposes a route. After that route has been installed, you can run the following command to get the route’s hostname.
$ METERING_ROUTE_HOSTNAME=$(oc -n openshift-metering get routes metering -o json | jq -r '.status.ingress[].host')
Next, set up authentication using either a service account token or basic authentication with a username and password.
4.5.2.1.1. Authenticate using a service account token
With this method, you use the token in the Reporting Operator’s service account, and pass that bearer token to the Authorization header in the following command:
$ TOKEN=$(oc -n openshift-metering serviceaccounts get-token reporting-operator)
curl -H "Authorization: Bearer $TOKEN" -k "https://$METERING_ROUTE_HOSTNAME/api/v1/reports/get?name=[Report Name]&namespace=openshift-metering&format=[Format]"
Be sure to replace the name=[Report Name] and format=[Format] parameters in the URL above. The format parameter can be json, csv, or tabular.
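For example, assuming a report named namespace-cpu-request exists (the report name is illustrative) and you want CSV output:
$ curl -H "Authorization: Bearer $TOKEN" -k "https://$METERING_ROUTE_HOSTNAME/api/v1/reports/get?name=namespace-cpu-request&namespace=openshift-metering&format=csv"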
4.5.2.1.2. Authenticate using a username and password
Metering supports configuring basic authentication using a username and password combination, which is specified in the contents of an htpasswd file. By default, a secret containing empty htpasswd data is created. You can, however, configure the reporting-operator.spec.authProxy.htpasswd.data and reporting-operator.spec.authProxy.htpasswd.createSecret keys to use this method.
After you have specified these values in your MeteringConfig custom resource, you can run the following command:
$ curl -u testuser:password123 -k "https://$METERING_ROUTE_HOSTNAME/api/v1/reports/get?name=[Report Name]&namespace=openshift-metering&format=[Format]"
Be sure to replace testuser:password123 with a valid username and password combination.
4.5.2.2. Manually Configuring Authentication
To manually configure OAuth, or to disable it in the Reporting Operator, you must set spec.tls.enabled: false in your MeteringConfig custom resource.
This also disables all TLS and authentication between the Reporting Operator, Presto, and Hive. You must manually configure these resources yourself.
Authentication can be enabled by configuring the following options. Enabling authentication configures the Reporting Operator pod to run the OpenShift Container Platform auth-proxy as a sidecar container in the pod. This adjusts the ports so that the reporting API is not exposed directly, but is instead proxied through the auth-proxy sidecar container.
- reporting-operator.spec.authProxy.enabled
- reporting-operator.spec.authProxy.cookie.createSecret
- reporting-operator.spec.authProxy.cookie.seed
You need to set reporting-operator.spec.authProxy.enabled and reporting-operator.spec.authProxy.cookie.createSecret to true and reporting-operator.spec.authProxy.cookie.seed to a 32-character random string.
You can generate a 32-character random string using the following command.
$ openssl rand -base64 32 | head -c32; echo
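A minimal sketch of these settings in the MeteringConfig custom resource; the seed value is a placeholder that you replace with your generated string:
spec:
  tls:
    # As described above, manual configuration requires disabling the default TLS handling.
    enabled: false
  reporting-operator:
    spec:
      authProxy:
        enabled: true
        cookie:
          createSecret: true
          seed: "<32-character-random-string>"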
4.5.2.2.1. Token authentication
When the following options are set to true, authentication using a bearer token is enabled for the reporting REST API. Bearer tokens can come from service accounts or users.
- reporting-operator.spec.authProxy.subjectAccessReview.enabled
- reporting-operator.spec.authProxy.delegateURLs.enabled
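For example, a sketch of enabling token authentication with these options in the MeteringConfig custom resource:
spec:
  reporting-operator:
    spec:
      authProxy:
        enabled: true
        subjectAccessReview:
          enabled: true
        delegateURLs:
          enabled: true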
When authentication is enabled, the Bearer token used to query the reporting API of the user or service account must be granted access using one of the following roles:
- report-exporter
- reporting-admin
- reporting-viewer
- metering-admin
- metering-viewer
The Metering Operator is capable of creating role bindings for you, granting these permissions by specifying a list of subjects in the spec.permissions section. For an example, see the following advanced-auth.yaml example configuration.
apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  permissions:
    # Anyone in the "metering-admins" group can create, update, delete, etc. any
    # metering.openshift.io resources in the namespace.
    # This also grants permissions to get query report results from the reporting REST API.
    meteringAdmins:
    - kind: Group
      name: metering-admins
    # Same as above except read-only access and for the metering-viewers group.
    meteringViewers:
    - kind: Group
      name: metering-viewers
    # The default serviceaccount in the namespace "my-custom-ns" can
    # create, update, delete, etc. reports.
    # This also gives permissions to query the results from the reporting REST API.
    reportingAdmins:
    - kind: ServiceAccount
      name: default
      namespace: my-custom-ns
    # Anyone in the group reporting-readers can get, list, watch reports, and
    # query report results from the reporting REST API.
    reportingViewers:
    - kind: Group
      name: reporting-readers
    # Anyone in the group cluster-admins can query report results
    # from the reporting REST API. So can the user bob-from-accounting.
    reportExporters:
    - kind: Group
      name: cluster-admins
    - kind: User
      name: bob-from-accounting
  reporting-operator:
    spec:
      authProxy:
        # htpasswd.data can contain htpasswd file contents for allowing auth
        # using a static list of usernames and their password hashes.
        #
        # username is 'testuser', password is 'password123'
        # generated htpasswdData using: `htpasswd -nb -s testuser password123`
        # htpasswd:
        #   data: |
        #     testuser:{SHA}y/2sYAj5yrQIN4TL0YdPdmGNKpc=
        #
        # change REPLACEME to the output of your htpasswd command
        htpasswd:
          data: |
            REPLACEME
Alternatively, you can use any role which has rules granting get permissions to reports/export. This means get access to the export sub-resource of the Report resources in the namespace of the Reporting Operator. For example: admin and cluster-admin.
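For illustration, a minimal Role granting get access to the export sub-resource might look like the following sketch; the role name is hypothetical:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: report-exporter-example
  namespace: openshift-metering
rules:
# Grant get on the export sub-resource of Report objects.
- apiGroups:
  - metering.openshift.io
  resources:
  - reports/export
  verbs:
  - get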
By default, the Reporting Operator and Metering Operator service accounts both have these permissions, and their tokens can be used for authentication.
4.5.2.2.2. Basic authentication with a username and password
For basic authentication you can supply a username and password in the reporting-operator.spec.authProxy.htpasswd.data
field. The username and password must be the same format as those found in an htpasswd file. When set, you can use HTTP basic authentication to provide your username and password that has a corresponding entry in the htpasswdData
contents.
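For example, using the testuser:password123 entry shown in the advanced-auth.yaml example above, the relevant fields might look like this sketch:
spec:
  reporting-operator:
    spec:
      authProxy:
        htpasswd:
          createSecret: true
          data: |
            testuser:{SHA}y/2sYAj5yrQIN4TL0YdPdmGNKpc=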
4.6. Configure AWS billing correlation
Metering is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.
For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.
Metering can correlate cluster usage information with AWS detailed billing information, attaching a dollar amount to resource usage. For clusters running in EC2, you can enable this by modifying the example aws-billing.yaml file below.
apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  openshift-reporting:
    spec:
      awsBillingReportDataSource:
        enabled: true
        # Replace these with where your AWS billing reports are
        # stored in S3.
        bucket: "<your-aws-cost-report-bucket>" 1
        prefix: "<path/to/report>"
        region: "<your-buckets-region>"
  reporting-operator:
    spec:
      config:
        aws:
          secretName: "<your-aws-secret>" 2
  presto:
    spec:
      config:
        aws:
          secretName: "<your-aws-secret>" 3
  hive:
    spec:
      config:
        aws:
          secretName: "<your-aws-secret>" 4
To enable AWS billing correlation, first ensure the AWS Cost and Usage Reports are enabled. For more information, see Turning on the AWS Cost and Usage Report in the AWS documentation.
- 1
- Update the bucket, prefix, and region to the location of your AWS detailed billing report.
- 2 3 4
- All secretName fields should be set to the name of a secret in the metering namespace containing AWS credentials in the data.aws-access-key-id and data.aws-secret-access-key fields. See the example secret file below for more details.
apiVersion: v1
kind: Secret
metadata:
  name: <your-aws-secret>
data:
  aws-access-key-id: "dGVzdAo="
  aws-secret-access-key: "c2VjcmV0Cg=="
To store data in S3, the aws-access-key-id and aws-secret-access-key credentials must have read and write access to the bucket. For an example of an IAM policy granting the required permissions, see the aws/read-write.json file below.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Allow",
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:DeleteObject",
        "s3:GetObject",
        "s3:HeadBucket",
        "s3:ListBucket",
        "s3:ListMultipartUploadParts",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::operator-metering-data/*",
        "arn:aws:s3:::operator-metering-data"
      ]
    }
  ]
}
AWS billing correlation can be enabled either pre-installation or post-installation. Disabling it post-installation can cause errors in the Reporting Operator.