Chapter 4. Configuring metering
4.1. About configuring metering
Metering is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.
For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.
The MeteringConfig custom resource is used to configure your metering installation. You can change several different settings in the MeteringConfig custom resource. Keep the following considerations in mind when configuring metering:
- At a minimum, you need to configure persistent storage and configure the Hive metastore.
- Most default configuration settings work, but larger deployments or highly customized deployments should review all configuration options carefully.
- Some configuration options cannot be modified after installation.
For configuration options that can be modified after installation, make the changes in your MeteringConfig custom resource.
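Post-installation changes of this kind are made against the MeteringConfig custom resource itself. As a minimal sketch (the resource name operator-metering matches the examples in this chapter; adjust it if your installation uses a different name):

```shell
# Open the MeteringConfig custom resource in your default editor;
# the Metering Operator reconciles the components after you save.
oc -n openshift-metering edit meteringconfig operator-metering

# Alternatively, apply an updated manifest kept in version control.
oc -n openshift-metering apply -f metering-config.yaml
```

Both commands require access to a running cluster with metering installed.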
4.2. Common configuration options
4.2.1. Resource requests and limits
You can adjust the CPU, memory, or storage resource requests and limits for pods and volumes. The default-resource-limits.yaml file below provides an example of setting resource limits and requests for each component:
apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  reporting-operator:
    spec:
      resources:
        limits:
          cpu: 1
          memory: 500Mi
        requests:
          cpu: 500m
          memory: 100Mi
  presto:
    spec:
      coordinator:
        resources:
          limits:
            cpu: 4
            memory: 4Gi
          requests:
            cpu: 2
            memory: 2Gi
      worker:
        replicas: 0
        resources:
          limits:
            cpu: 8
            memory: 8Gi
          requests:
            cpu: 4
            memory: 2Gi
  hive:
    spec:
      metastore:
        resources:
          limits:
            cpu: 4
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 650Mi
        storage:
          class: null
          create: true
          size: 5Gi
      server:
        resources:
          limits:
            cpu: 1
            memory: 1Gi
          requests:
            cpu: 500m
            memory: 500Mi
4.2.2. Node selectors
You can run the metering components on specific sets of nodes. Set the nodeSelector field on a metering component to run that component on the specified nodes. The node-selectors.yaml file below provides an example of setting node selectors for each component:
apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  reporting-operator:
    spec:
      nodeSelector:
        "node-role.kubernetes.io/infra": ""
  presto:
    spec:
      coordinator:
        nodeSelector:
          "node-role.kubernetes.io/infra": ""
      worker:
        nodeSelector:
          "node-role.kubernetes.io/infra": ""
  hive:
    spec:
      metastore:
        nodeSelector:
          "node-role.kubernetes.io/infra": ""
      server:
        nodeSelector:
          "node-role.kubernetes.io/infra": ""
Note: Add the openshift.io/node-selector: "" namespace annotation, with "" as the annotation value, to the metering namespace YAML file before configuring node selectors for the operand pods. When the openshift.io/node-selector annotation is set on the namespace, it takes precedence over the cluster-wide default node selector in the spec.defaultNodeSelector field of the Scheduler object.
Verification
You can verify the metering node selectors by performing any of the following checks:
Verify that all pods for metering are correctly scheduled on the IP of the node that is configured in the MeteringConfig custom resource:
Check all pods in the openshift-metering namespace:
$ oc --namespace openshift-metering get pods -o wide
The output shows the NODE and corresponding IP for each pod running in the openshift-metering namespace.
Example output
NAME                                  READY   STATUS    RESTARTS   AGE     IP            NODE                                         NOMINATED NODE   READINESS GATES
hive-metastore-0                      1/2     Running   0          4m33s   10.129.2.26   ip-10-0-210-167.us-east-2.compute.internal   <none>           <none>
hive-server-0                         2/3     Running   0          4m21s   10.128.2.26   ip-10-0-150-175.us-east-2.compute.internal   <none>           <none>
metering-operator-964b4fb55-4p699     2/2     Running   0          7h30m   10.131.0.33   ip-10-0-189-6.us-east-2.compute.internal     <none>           <none>
nfs-server                            1/1     Running   0          7h30m   10.129.2.24   ip-10-0-210-167.us-east-2.compute.internal   <none>           <none>
presto-coordinator-0                  2/2     Running   0          4m8s    10.131.0.35   ip-10-0-189-6.us-east-2.compute.internal     <none>           <none>
reporting-operator-869b854c78-8g2x5   1/2     Running   0          7h27m   10.128.2.25   ip-10-0-150-175.us-east-2.compute.internal   <none>           <none>
Compare the nodes in the openshift-metering namespace to each node NAME in your cluster:
$ oc get nodes
Example output
NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-0-147-106.us-east-2.compute.internal   Ready    master   14h   v1.21.0+6025c28
ip-10-0-150-175.us-east-2.compute.internal   Ready    worker   14h   v1.21.0+6025c28
ip-10-0-175-23.us-east-2.compute.internal    Ready    master   14h   v1.21.0+6025c28
ip-10-0-189-6.us-east-2.compute.internal     Ready    worker   14h   v1.21.0+6025c28
ip-10-0-205-158.us-east-2.compute.internal   Ready    master   14h   v1.21.0+6025c28
ip-10-0-210-167.us-east-2.compute.internal   Ready    worker   14h   v1.21.0+6025c28
Verify that the node selector configuration in the MeteringConfig custom resource does not interfere with the cluster-wide node selector configuration such that no metering operand pods are scheduled:
Check the cluster-wide Scheduler object for the spec.defaultNodeSelector field, which shows where pods are scheduled by default:
$ oc get schedulers.config.openshift.io cluster -o yaml
4.3. Configuring persistent storage
Metering requires persistent storage to persist data collected by the Metering Operator and to store the results of reports. A number of different storage providers and storage formats are supported. Select your storage provider and modify the example configuration files to configure persistent storage for your metering installation.
4.3.1. Storing data in Amazon S3
Metering can use an existing Amazon S3 bucket or create a bucket for storage.
Metering does not manage or delete any S3 bucket data. You must manually clean up S3 buckets that are used to store metering data.
Procedure
Edit the spec.storage section in the s3-storage.yaml file:
Example s3-storage.yaml file
apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  storage:
    type: "hive"
    hive:
      type: "s3"
      s3:
        bucket: "bucketname/path/" 1
        region: "us-west-1" 2
        secretName: "my-aws-secret" 3
        # Set to false if you want to provide an existing bucket, instead of
        # having metering create the bucket on your behalf.
        createBucket: true 4
- 1
- Specify the name of the bucket where you would like to store your data. Optional: Specify the path within the bucket.
- 2
- Specify the region of your bucket.
- 3
- The name of a secret in the metering namespace containing the AWS credentials in the data.aws-access-key-id and data.aws-secret-access-key fields. See the example Secret object below for more details.
- 4
- Set this field to false if you want to provide an existing S3 bucket, or if you do not want to provide IAM credentials that have CreateBucket permissions.
Use the following Secret object as a template:
Example AWS Secret object
apiVersion: v1
kind: Secret
metadata:
  name: my-aws-secret
data:
  aws-access-key-id: "dGVzdAo="
  aws-secret-access-key: "c2VjcmV0Cg=="
Note: The values of the aws-access-key-id and aws-secret-access-key must be base64 encoded.
Create the secret:
$ oc create secret -n openshift-metering generic my-aws-secret \
    --from-literal=aws-access-key-id=my-access-key \
    --from-literal=aws-secret-access-key=my-secret-key
Note: This command automatically base64 encodes your aws-access-key-id and aws-secret-access-key values.
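If you create the Secret object from a YAML manifest instead of with oc create secret, you must produce the base64-encoded values yourself. A minimal sketch; note that the example value dGVzdAo= used throughout this chapter decodes to the string test followed by a newline:

```shell
# Encode a credential for the data section of a Secret manifest.
# The chapter's example values were encoded *with* a trailing newline:
printf 'test\n' | base64          # prints dGVzdAo=

# For real credentials, printf '%s' avoids an unintended trailing
# newline that would corrupt the decoded key.
printf '%s' 'my-access-key' | base64
```

Decoding with base64 -d is a quick way to verify what a secret actually contains.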
To store data in S3, the aws-access-key-id and aws-secret-access-key credentials must have read and write access to the bucket. For an example of an IAM policy granting the required permissions, see the aws/read-write.json file below.
Example aws/read-write.json file
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Allow",
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:DeleteObject",
        "s3:GetObject",
        "s3:HeadBucket",
        "s3:ListBucket",
        "s3:ListMultipartUploadParts",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::operator-metering-data/*",
        "arn:aws:s3:::operator-metering-data"
      ]
    }
  ]
}
If spec.storage.hive.s3.createBucket is set to true or unset in your s3-storage.yaml file, then you should use the aws/read-write-create.json file below, which contains permissions for creating and deleting buckets.
Example aws/read-write-create.json file
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Allow",
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:DeleteObject",
        "s3:GetObject",
        "s3:HeadBucket",
        "s3:ListBucket",
        "s3:CreateBucket",
        "s3:DeleteBucket",
        "s3:ListMultipartUploadParts",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::operator-metering-data/*",
        "arn:aws:s3:::operator-metering-data"
      ]
    }
  ]
}
4.3.2. Storing data in S3-compatible storage
You can use S3-compatible storage, such as NooBaa.
Procedure
Edit the spec.storage section in the s3-compatible-storage.yaml file:
Example s3-compatible-storage.yaml file
apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  storage:
    type: "hive"
    hive:
      type: "s3Compatible"
      s3Compatible:
        bucket: "bucketname" 1
        endpoint: "http://example:port-number" 2
        secretName: "my-aws-secret" 3
- 1
- Specify the name of your S3-compatible bucket.
- 2
- Specify the endpoint for your storage.
- 3
- The name of a secret in the metering namespace containing the AWS credentials in the data.aws-access-key-id and data.aws-secret-access-key fields. See the example Secret object below for more details.
Use the following Secret object as a template:
Example S3-compatible Secret object
apiVersion: v1
kind: Secret
metadata:
  name: my-aws-secret
data:
  aws-access-key-id: "dGVzdAo="
  aws-secret-access-key: "c2VjcmV0Cg=="
4.3.3. Storing data in Microsoft Azure
To store data in Azure blob storage, you must use an existing container.
Procedure
Edit the spec.storage section in the azure-blob-storage.yaml file:
Example azure-blob-storage.yaml file
apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  storage:
    type: "hive"
    hive:
      type: "azure"
      azure:
        container: "bucket1" 1
        secretName: "my-azure-secret" 2
        rootDirectory: "/testDir" 3
- 1
- Specify the container name.
- 2
- Specify a secret in the metering namespace. See the example Secret object below for more details.
- 3
- Optional: Specify the directory where you would like to store your data.
Use the following Secret object as a template:
Example Azure Secret object
apiVersion: v1
kind: Secret
metadata:
  name: my-azure-secret
data:
  azure-storage-account-name: "dGVzdAo="
  azure-secret-access-key: "c2VjcmV0Cg=="
Create the secret:
$ oc create secret -n openshift-metering generic my-azure-secret \
    --from-literal=azure-storage-account-name=my-storage-account-name \
    --from-literal=azure-secret-access-key=my-secret-key
4.3.4. Storing data in Google Cloud Storage
To store your data in Google Cloud Storage, you must use an existing bucket.
Procedure
Edit the spec.storage section in the gcs-storage.yaml file:
Example gcs-storage.yaml file
apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  storage:
    type: "hive"
    hive:
      type: "gcs"
      gcs:
        bucket: "metering-gcs/test1" 1
        secretName: "my-gcs-secret" 2
- 1
- Specify the name of the bucket. You can optionally specify the directory within the bucket where you would like to store your data.
- 2
- Specify a secret in the metering namespace. See the example Secret object below for more details.
Use the following Secret object as a template:
Example Google Cloud Storage Secret object
apiVersion: v1
kind: Secret
metadata:
  name: my-gcs-secret
data:
  gcs-service-account.json: "c2VjcmV0Cg=="
Create the secret:
$ oc create secret -n openshift-metering generic my-gcs-secret \
    --from-file gcs-service-account.json=/path/to/my/service-account-key.json
4.4. Configuring the Hive metastore
Hive metastore is responsible for storing all the metadata about the database tables created in Presto and Hive. By default, the metastore stores this information in a local embedded Derby database in a persistent volume attached to the pod.
Generally, the default configuration of the Hive metastore works for small clusters, but you might want to improve performance or move storage requirements out of the cluster by using a dedicated SQL database for storing the Hive metastore data.
4.4.1. Configuring persistent volumes
By default, Hive requires one persistent volume to operate. hive-metastore-db-data is the main persistent volume claim (PVC) required by default. This PVC is used by the Hive metastore to store metadata about tables, such as table names, columns, and locations.
To install, the Hive metastore requires that dynamic volume provisioning is enabled in a storage class, that a persistent volume of the correct size is manually pre-created, or that you use a pre-existing MySQL or PostgreSQL database.
4.4.1.1. Configuring the storage class for the Hive metastore
To configure and specify a storage class for the hive-metastore-db-data persistent volume claim, specify the storage class in your MeteringConfig custom resource. An example storage section with the class field is included in the metastore-storage.yaml file below:
apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  hive:
    spec:
      metastore:
        storage:
          # Default is null, which means using the default storage class if it exists.
          # If you wish to use a different storage class, specify it here
          # class: "null" 1
          size: "5Gi"
- 1
- Uncomment this line and replace null with the name of the storage class to use. Leaving the value null causes metering to use the default storage class for the cluster.
4.4.1.2. Configuring the volume size for the Hive metastore
Use the metastore-storage.yaml file below to configure the volume size for the Hive metastore:
apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  hive:
    spec:
      metastore:
        storage:
          # Default is null, which means using the default storage class if it exists.
          # If you wish to use a different storage class, specify it here
          # class: "null"
          size: "5Gi" 1
- 1
- Replace the value for size with your desired capacity. The example file shows "5Gi".
4.4.2. Using MySQL or PostgreSQL for the Hive metastore
The default installation of metering configures Hive to use an embedded Java database called Derby. This is unsuited for larger environments and can be replaced with either a MySQL or PostgreSQL database. Use the following example configuration files if your deployment requires a MySQL or PostgreSQL database for Hive.
There are three configuration options you can use to control the database that is used by the Hive metastore: url, driver, and secretName.
Create your MySQL or PostgreSQL instance with a user name and password. Then create a secret by using the OpenShift CLI (oc) or a YAML file. The secretName you create for this secret must match the spec.hive.spec.config.db.secretName field in your MeteringConfig custom resource.
Procedure
Create a secret by using the OpenShift CLI (oc) or by using a YAML file:
Create a secret by using the following command:
$ oc --namespace openshift-metering create secret generic <YOUR_SECRETNAME> --from-literal=username=<YOUR_DATABASE_USERNAME> --from-literal=password=<YOUR_DATABASE_PASSWORD>
Create a secret by using a YAML file. For example:
apiVersion: v1
kind: Secret
metadata:
  name: <YOUR_SECRETNAME> 1
data:
  username: <BASE64_ENCODED_DATABASE_USERNAME> 2
  password: <BASE64_ENCODED_DATABASE_PASSWORD> 3
- 1
- The name of the secret. This must match the spec.hive.spec.config.db.secretName field in your MeteringConfig custom resource.
- 2
- The base64-encoded database user name.
- 3
- The base64-encoded database password.
Create a configuration file to use a MySQL or PostgreSQL database for Hive:
To use a MySQL database for Hive, use the example configuration file below. Metering supports configuring the internal Hive metastore to use MySQL server versions 5.6, 5.7, and 8.0.
spec:
  hive:
    spec:
      metastore:
        storage:
          create: false
      config:
        db:
          url: "jdbc:mysql://mysql.example.com:3306/hive_metastore"
          driver: "com.mysql.cj.jdbc.Driver"
          secretName: "REPLACEME"
Note: When configuring metering to work with older MySQL server versions, such as 5.6 or 5.7, you might need to add the enabledTLSProtocols JDBC URL parameter when configuring the internal Hive metastore.
You can pass additional JDBC parameters using the spec.hive.config.url field. For more details, see the MySQL Connector/J 8.0 documentation.
To use a PostgreSQL database for Hive, use the example configuration file below:
spec:
  hive:
    spec:
      metastore:
        storage:
          create: false
      config:
        db:
          url: "jdbc:postgresql://postgresql.example.com:5432/hive_metastore"
          driver: "org.postgresql.Driver"
          username: "REPLACEME"
          password: "REPLACEME"
You can pass additional JDBC parameters using the spec.hive.config.url field. For more details, see the PostgreSQL JDBC driver documentation.
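JDBC parameters such as enabledTLSProtocols are appended to the URL as a query string. A sketch for an older MySQL server, assuming the server only accepts TLSv1.2; the host and database names are the same placeholders used in the example above:

```yaml
spec:
  hive:
    spec:
      config:
        db:
          # enabledTLSProtocols restricts the TLS versions the Connector/J
          # driver offers, which older MySQL 5.6/5.7 builds may require.
          url: "jdbc:mysql://mysql.example.com:3306/hive_metastore?enabledTLSProtocols=TLSv1.2"
```

Multiple parameters can be chained with & in the same query string.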
4.5. Configuring the Reporting Operator
The Reporting Operator is responsible for collecting data from Prometheus, storing the metrics in Presto, running report queries against Presto, and exposing their results via an HTTP API. Configuring the Reporting Operator is primarily done in your MeteringConfig custom resource.
4.5.1. Securing a Prometheus connection
When you install metering on OpenShift Container Platform, Prometheus is available at https://prometheus-k8s.openshift-monitoring.svc:9091/.
To secure the connection to Prometheus, the default metering installation uses the OpenShift Container Platform certificate authority (CA). If your Prometheus instance uses a different CA, you can inject the CA through a config map. You can also configure the Reporting Operator to use a specified bearer token to authenticate with Prometheus.
Procedure
Inject the CA that your Prometheus instance uses through a config map. For example:
spec:
  reporting-operator:
    spec:
      config:
        prometheus:
          certificateAuthority:
            useServiceAccountCA: false
            configMap:
              enabled: true
              create: true
              name: reporting-operator-certificate-authority-config
              filename: "internal-ca.crt"
              value: |
                -----BEGIN CERTIFICATE-----
                (snip)
                -----END CERTIFICATE-----
Alternatively, to use the system certificate authorities for publicly valid certificates, set both useServiceAccountCA and configMap.enabled to false.
- Specify a bearer token to authenticate with Prometheus. For example:
spec:
  reporting-operator:
    spec:
      config:
        prometheus:
          metricsImporter:
            auth:
              useServiceAccountToken: false
              tokenSecret:
                enabled: true
                create: true
                value: "abc-123"
4.5.2. Exposing the reporting API
On OpenShift Container Platform the default metering installation automatically exposes a route, making the reporting API available. This provides the following features:
- Automatic DNS
- Automatic TLS based on the cluster CA
Also, the default installation makes it possible to use the OpenShift Container Platform service serving certificates to protect the reporting API with TLS. The OpenShift Container Platform OAuth proxy is deployed as a sidecar container for the Reporting Operator, which protects the reporting API with authentication.
4.5.2.1. Using OpenShift Container Platform Authentication
By default, the reporting API is secured with TLS and authentication. This is done by configuring the Reporting Operator to deploy a pod containing both the Reporting Operator's container and a sidecar container running the OpenShift Container Platform auth-proxy.
To access the reporting API, the Metering Operator exposes a route. After that route has been installed, you can run the following command to get the route’s hostname.
$ METERING_ROUTE_HOSTNAME=$(oc -n openshift-metering get routes metering -o json | jq -r '.status.ingress[].host')
Next, set up authentication using either a service account token or basic authentication with a username and password.
4.5.2.1.1. Authenticate using a service account token
With this method, you use the token in the Reporting Operator’s service account, and pass that bearer token to the Authorization header in the following command:
$ TOKEN=$(oc -n openshift-metering serviceaccounts get-token reporting-operator)
curl -H "Authorization: Bearer $TOKEN" -k "https://$METERING_ROUTE_HOSTNAME/api/v1/reports/get?name=[Report Name]&namespace=openshift-metering&format=[Format]"
Be sure to replace the name=[Report Name] and format=[Format] parameters in the URL. The format parameter supports csv, json, and tabular values.
4.5.2.1.2. Authenticate using a username and password
Metering supports configuring basic authentication using a username and password combination, which is specified in the contents of an htpasswd file. By default, a secret containing empty htpasswd data is created. You can, however, configure the reporting-operator.spec.authProxy.htpasswd.data and reporting-operator.spec.authProxy.htpasswd.createSecret keys to use your own htpasswd data.
Once you have specified the above in your MeteringConfig custom resource, you can query the reporting API with basic authentication:
$ curl -u testuser:password123 -k "https://$METERING_ROUTE_HOSTNAME/api/v1/reports/get?name=[Report Name]&namespace=openshift-metering&format=[Format]"
Be sure to replace testuser:password123 with a valid username and password combination.
4.5.2.2. Manually Configuring Authentication
To manually configure OAuth, or to disable it in the Reporting Operator, you must set spec.tls.enabled: false in your MeteringConfig custom resource.
This also disables all TLS and authentication between the Reporting Operator, Presto, and Hive. You must manually configure these resources yourself.
Authentication can be enabled by configuring the following options. Enabling authentication configures the Reporting Operator pod to run the OpenShift Container Platform auth-proxy as a sidecar container. This adjusts the ports so that the reporting API is not exposed directly, but is instead proxied via the auth-proxy sidecar container.
- reporting-operator.spec.authProxy.enabled
- reporting-operator.spec.authProxy.cookie.createSecret
- reporting-operator.spec.authProxy.cookie.seed
You need to set reporting-operator.spec.authProxy.enabled and reporting-operator.spec.authProxy.cookie.createSecret to true, and reporting-operator.spec.authProxy.cookie.seed to a 32-character random string.
You can generate a 32-character random string using the following command:
$ openssl rand -base64 32 | head -c32; echo
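The command above emits 44 base64 characters and truncates to the first 32, so the seed length is deterministic. A quick local check, assuming openssl is installed:

```shell
# Generate a cookie seed and confirm it is exactly 32 characters long.
seed="$(openssl rand -base64 32 | head -c32)"
echo "${#seed}"
```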
4.5.2.2.1. Token authentication
When the following options are set to true, authentication using a bearer token is enabled for the reporting REST API:
- reporting-operator.spec.authProxy.subjectAccessReview.enabled
- reporting-operator.spec.authProxy.delegateURLs.enabled
When authentication is enabled, the bearer token of the user or service account used to query the reporting API must be granted access using one of the following roles:
- report-exporter
- reporting-admin
- reporting-viewer
- metering-admin
- metering-viewer
The Metering Operator is capable of creating role bindings for you, granting these permissions by specifying a list of subjects in the spec.permissions section. For an example, see the advanced-auth.yaml example configuration below.
apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  permissions:
    # anyone in the "metering-admins" group can create, update, delete, etc any
    # metering.openshift.io resources in the namespace.
    # This also grants permissions to get query report results from the reporting REST API.
    meteringAdmins:
    - kind: Group
      name: metering-admins
    # Same as above except read only access and for the metering-viewers group.
    meteringViewers:
    - kind: Group
      name: metering-viewers
    # the default serviceaccount in the namespace "my-custom-ns" can:
    # create, update, delete, etc reports.
    # This also gives permissions to query the results from the reporting REST API.
    reportingAdmins:
    - kind: ServiceAccount
      name: default
      namespace: my-custom-ns
    # anyone in the group reporting-readers can get, list, watch reports, and
    # query report results from the reporting REST API.
    reportingViewers:
    - kind: Group
      name: reporting-readers
    # anyone in the group cluster-admins can query report results
    # from the reporting REST API. So can the user bob-from-accounting.
    reportExporters:
    - kind: Group
      name: cluster-admins
    - kind: User
      name: bob-from-accounting
  reporting-operator:
    spec:
      authProxy:
        # htpasswd.data can contain htpasswd file contents for allowing auth
        # using a static list of usernames and their password hashes.
        #
        # username is 'testuser' password is 'password123'
        # generated htpasswdData using: `htpasswd -nb -s testuser password123`
        # htpasswd:
        #   data: |
        #     testuser:{SHA}y/2sYAj5yrQIN4TL0YdPdmGNKpc=
        #
        # change REPLACEME to the output of your htpasswd command
        htpasswd:
          data: |
            REPLACEME
Alternatively, you can use any role which has rules granting get permissions to reports/export. This means get access to the export sub-resource of the Report resources in the namespace of the Reporting Operator. For example: admin and cluster-admin.
By default, the Reporting Operator and Metering Operator service accounts both have these permissions, and their tokens can be used for authentication.
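If the htpasswd utility is not available, the {SHA} entries shown in the commented example above can also be produced with openssl, because that format is simply the base64-encoded SHA-1 digest of the password. A sketch reproducing the example entry for testuser with password123:

```shell
# Build an htpasswd {SHA} entry: user:{SHA}base64(sha1(password)).
user=testuser
pass=password123
hash="$(printf '%s' "$pass" | openssl dgst -sha1 -binary | base64)"
echo "$user:{SHA}$hash"   # testuser:{SHA}y/2sYAj5yrQIN4TL0YdPdmGNKpc=
```

Note that {SHA} is a weak, unsalted scheme; prefer the bcrypt option of htpasswd (-B) where the proxy supports it.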
4.5.2.2.2. Basic authentication with a username and password
For basic authentication, you can supply a username and password in the reporting-operator.spec.authProxy.htpasswd.data field. The username and password must be in the same format as those found in an htpasswd file. When set, you can use HTTP basic authentication to provide a username and password that has a corresponding entry in the htpasswdData contents.
4.6. Configure AWS billing correlation
Metering can correlate cluster usage information with AWS detailed billing information, attaching a dollar amount to resource usage. For clusters running in EC2, you can enable this by modifying the example aws-billing.yaml file below.
apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  openshift-reporting:
    spec:
      awsBillingReportDataSource:
        enabled: true
        # Replace these with where your AWS billing reports are
        # stored in S3.
        bucket: "<your-aws-cost-report-bucket>" 1
        prefix: "<path/to/report>"
        region: "<your-buckets-region>"
  reporting-operator:
    spec:
      config:
        aws:
          secretName: "<your-aws-secret>" 2
  presto:
    spec:
      config:
        aws:
          secretName: "<your-aws-secret>" 3
  hive:
    spec:
      config:
        aws:
          secretName: "<your-aws-secret>" 4
To enable AWS billing correlation, first ensure the AWS Cost and Usage Reports are enabled. For more information, see Turning on the AWS Cost and Usage Report in the AWS documentation.
- 1
- Update the bucket, prefix, and region to the location of your AWS Detailed billing report.
- 2 3 4
- All secretName fields should be set to the name of a secret in the metering namespace containing AWS credentials in the data.aws-access-key-id and data.aws-secret-access-key fields. See the example secret file below for more details.
apiVersion: v1
kind: Secret
metadata:
  name: <your-aws-secret>
data:
  aws-access-key-id: "dGVzdAo="
  aws-secret-access-key: "c2VjcmV0Cg=="
To store data in S3, the aws-access-key-id and aws-secret-access-key credentials must have read and write access to the bucket. For an example of an IAM policy granting the required permissions, see the aws/read-write.json file below.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Allow",
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:DeleteObject",
        "s3:GetObject",
        "s3:HeadBucket",
        "s3:ListBucket",
        "s3:ListMultipartUploadParts",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::operator-metering-data/*",
        "arn:aws:s3:::operator-metering-data"
      ]
    }
  ]
}
Enabling AWS billing correlation can be done either pre-installation or post-installation. Disabling it post-installation can cause errors in the Reporting Operator.