Chapter 4. Configuring metering


4.1. About configuring metering

The MeteringConfig custom resource specifies all the configuration details for your metering installation. When you first install the metering stack, a default MeteringConfig custom resource is generated. Use the examples in the documentation to modify this default file. Keep in mind the following key points:

  • At a minimum, you need to configure persistent storage and configure the Hive metastore.
  • Most default configuration settings work, but larger deployments or highly customized deployments should review all configuration options carefully.
  • Some configuration options cannot be modified after installation.

For configuration options that can be modified after installation, make the changes in your MeteringConfig custom resource and reapply the file.
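
For example, a minimal sketch of reapplying an updated configuration, assuming your modified resource is saved locally as metering-config.yaml and metering is installed in the openshift-metering namespace:

$ oc apply -n openshift-metering -f metering-config.yaml

Alternatively, you can edit the resource directly in the cluster with the oc -n openshift-metering edit meteringconfig operator-metering command.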

4.2. Common configuration options

4.2.1. Resource requests and limits

You can adjust the CPU, memory, or storage resource requests and limits for pods and volumes. The default-resource-limits.yaml file below provides an example of setting resource requests and limits for each component.

apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  reporting-operator:
    spec:
      resources:
        limits:
          cpu: 1
          memory: 500Mi
        requests:
          cpu: 500m
          memory: 100Mi
  presto:
    spec:
      coordinator:
        resources:
          limits:
            cpu: 4
            memory: 4Gi
          requests:
            cpu: 2
            memory: 2Gi

      worker:
        replicas: 0
        resources:
          limits:
            cpu: 8
            memory: 8Gi
          requests:
            cpu: 4
            memory: 2Gi

  hive:
    spec:
      metastore:
        resources:
          limits:
            cpu: 4
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 650Mi
        storage:
          class: null
          create: true
          size: 5Gi
      server:
        resources:
          limits:
            cpu: 1
            memory: 1Gi
          requests:
            cpu: 500m
            memory: 500Mi

4.2.2. Node selectors

You can run the metering components on specific sets of nodes. Set the nodeSelector on a metering component to control where the component is scheduled. The node-selectors.yaml file below provides an example of setting node selectors for each component.

Note

Add the openshift.io/node-selector: "" namespace annotation to the metering namespace YAML file before configuring specific node selectors for the operand pods. Specify "" as the annotation value.
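
The following is a minimal sketch of that namespace annotation, assuming metering is installed in the openshift-metering namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: openshift-metering
  annotations:
    openshift.io/node-selector: ""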

apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  reporting-operator:
    spec:
      nodeSelector:
        "node-role.kubernetes.io/infra": "" 1

  presto:
    spec:
      coordinator:
        nodeSelector:
          "node-role.kubernetes.io/infra": "" 2
      worker:
        nodeSelector:
          "node-role.kubernetes.io/infra": "" 3
  hive:
    spec:
      metastore:
        nodeSelector:
          "node-role.kubernetes.io/infra": "" 4
      server:
        nodeSelector:
          "node-role.kubernetes.io/infra": "" 5
1 2 3 4 5
Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use key-value pairs, based on the labels that are specified on the node.
Note

As noted above, add the openshift.io/node-selector: "" namespace annotation to the metering namespace YAML file before configuring specific node selectors for the operand pods. When the openshift.io/node-selector annotation is set on the project, its value is used in preference to the value of the spec.defaultNodeSelector field in the cluster-wide Scheduler object.

Verification

You can verify the metering node selectors by performing any of the following checks:

  • Verify that all pods for metering are correctly scheduled on the IP of the node that is configured in the MeteringConfig custom resource:

    1. Check all pods in the openshift-metering namespace:

      $ oc --namespace openshift-metering get pods -o wide

      The output shows the NODE and corresponding IP for each pod running in the openshift-metering namespace.

      Example output

      NAME                                  READY   STATUS    RESTARTS   AGE     IP            NODE                                         NOMINATED NODE   READINESS GATES
      hive-metastore-0                      1/2     Running   0          4m33s   10.129.2.26   ip-10-0-210-167.us-east-2.compute.internal   <none>           <none>
      hive-server-0                         2/3     Running   0          4m21s   10.128.2.26   ip-10-0-150-175.us-east-2.compute.internal   <none>           <none>
      metering-operator-964b4fb55-4p699     2/2     Running   0          7h30m   10.131.0.33   ip-10-0-189-6.us-east-2.compute.internal     <none>           <none>
      nfs-server                            1/1     Running   0          7h30m   10.129.2.24   ip-10-0-210-167.us-east-2.compute.internal   <none>           <none>
      presto-coordinator-0                  2/2     Running   0          4m8s    10.131.0.35   ip-10-0-189-6.us-east-2.compute.internal     <none>           <none>
      reporting-operator-869b854c78-8g2x5   1/2     Running   0          7h27m   10.128.2.25   ip-10-0-150-175.us-east-2.compute.internal   <none>           <none>

    2. Compare the nodes in the openshift-metering namespace to each node NAME in your cluster:

      $ oc get nodes

      Example output

      NAME                                         STATUS   ROLES    AGE   VERSION
      ip-10-0-147-106.us-east-2.compute.internal   Ready    master   14h   v1.18.3+6025c28
      ip-10-0-150-175.us-east-2.compute.internal   Ready    worker   14h   v1.18.3+6025c28
      ip-10-0-175-23.us-east-2.compute.internal    Ready    master   14h   v1.18.3+6025c28
      ip-10-0-189-6.us-east-2.compute.internal     Ready    worker   14h   v1.18.3+6025c28
      ip-10-0-205-158.us-east-2.compute.internal   Ready    master   14h   v1.18.3+6025c28
      ip-10-0-210-167.us-east-2.compute.internal   Ready    worker   14h   v1.18.3+6025c28

  • Verify that the node selector configuration in the MeteringConfig custom resource does not interfere with the cluster-wide node selector configuration such that no metering operand pods are scheduled.

    • Check the cluster-wide Scheduler object for the spec.defaultNodeSelector field, which shows where pods are scheduled by default:

      $ oc get schedulers.config.openshift.io cluster -o yaml

4.3. Configuring persistent storage

Metering requires persistent storage to persist data collected by the Metering Operator and to store the results of reports. A number of different storage providers and storage formats are supported. Select your storage provider and modify the example configuration files to configure persistent storage for your metering installation.

4.3.1. Storing data in Amazon S3

Metering can use an existing Amazon S3 bucket or create a bucket for storage.

Note

Metering does not manage or delete any S3 bucket data. You must manually clean up S3 buckets that are used to store metering data.

Procedure

  1. Edit the spec.storage section in the s3-storage.yaml file:

    Example s3-storage.yaml file

    apiVersion: metering.openshift.io/v1
    kind: MeteringConfig
    metadata:
      name: "operator-metering"
    spec:
      storage:
        type: "hive"
        hive:
          type: "s3"
          s3:
            bucket: "bucketname/path/" 1
            region: "us-west-1" 2
            secretName: "my-aws-secret" 3
            # Set to false if you want to provide an existing bucket, instead of
            # having metering create the bucket on your behalf.
            createBucket: true 4

    1
    Specify the name of the bucket where you would like to store your data. Optional: Specify the path within the bucket.
    2
    Specify the region of your bucket.
    3
    The name of a secret in the metering namespace containing the AWS credentials in the data.aws-access-key-id and data.aws-secret-access-key fields. See the example Secret object below for more details.
    4
    Set this field to false if you want to provide an existing S3 bucket, or if you do not want to provide IAM credentials that have CreateBucket permissions.
  2. Use the following Secret object as a template:

    Example AWS Secret object

    apiVersion: v1
    kind: Secret
    metadata:
      name: my-aws-secret
    data:
      aws-access-key-id: "dGVzdAo="
      aws-secret-access-key: "c2VjcmV0Cg=="

    Note

    The values of the aws-access-key-id and aws-secret-access-key must be base64 encoded.

  3. Create the secret:

    $ oc create secret -n openshift-metering generic my-aws-secret \
      --from-literal=aws-access-key-id=my-access-key \
      --from-literal=aws-secret-access-key=my-secret-key
    Note

    This command automatically base64 encodes your aws-access-key-id and aws-secret-access-key values.

The aws-access-key-id and aws-secret-access-key credentials must have read and write access to the bucket. The following aws/read-write.json file shows an IAM policy that grants the required permissions:

Example aws/read-write.json file

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "1",
            "Effect": "Allow",
            "Action": [
                "s3:AbortMultipartUpload",
                "s3:DeleteObject",
                "s3:GetObject",
                "s3:HeadBucket",
                "s3:ListBucket",
                "s3:ListMultipartUploadParts",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::operator-metering-data/*",
                "arn:aws:s3:::operator-metering-data"
            ]
        }
    ]
}

If spec.storage.hive.s3.createBucket is set to true or unset in your s3-storage.yaml file, then you should use the aws/read-write-create.json file that contains permissions for creating and deleting buckets:

Example aws/read-write-create.json file

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "1",
            "Effect": "Allow",
            "Action": [
                "s3:AbortMultipartUpload",
                "s3:DeleteObject",
                "s3:GetObject",
                "s3:HeadBucket",
                "s3:ListBucket",
                "s3:CreateBucket",
                "s3:DeleteBucket",
                "s3:ListMultipartUploadParts",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::operator-metering-data/*",
                "arn:aws:s3:::operator-metering-data"
            ]
        }
    ]
}
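
As a hedged example, you can attach one of these policies to an IAM user with the AWS CLI; the user name and policy name below are placeholders:

$ aws iam put-user-policy --user-name <metering-user> \
  --policy-name operator-metering-s3 \
  --policy-document file://aws/read-write-create.json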

4.3.2. Storing data in S3-compatible storage

You can use S3-compatible storage such as Noobaa.

Procedure

  1. Edit the spec.storage section in the s3-compatible-storage.yaml file:

    Example s3-compatible-storage.yaml file

    apiVersion: metering.openshift.io/v1
    kind: MeteringConfig
    metadata:
      name: "operator-metering"
    spec:
      storage:
        type: "hive"
        hive:
          type: "s3Compatible"
          s3Compatible:
            bucket: "bucketname" 1
            endpoint: "http://example:port-number" 2
            secretName: "my-aws-secret" 3

    1
    Specify the name of your S3-compatible bucket.
    2
    Specify the endpoint for your storage.
    3
    The name of a secret in the metering namespace containing the AWS credentials in the data.aws-access-key-id and data.aws-secret-access-key fields. See the example Secret object below for more details.
  2. Use the following Secret object as a template:

    Example S3-compatible Secret object

    apiVersion: v1
    kind: Secret
    metadata:
      name: my-aws-secret
    data:
      aws-access-key-id: "dGVzdAo="
      aws-secret-access-key: "c2VjcmV0Cg=="
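
  3. Create the secret. As in the Amazon S3 procedure, this command base64 encodes the literal values automatically; the key names mirror the Secret object above:

    $ oc create secret -n openshift-metering generic my-aws-secret \
      --from-literal=aws-access-key-id=my-access-key \
      --from-literal=aws-secret-access-key=my-secret-key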

4.3.3. Storing data in Microsoft Azure

To store data in Azure blob storage, you must use an existing container.

Procedure

  1. Edit the spec.storage section in the azure-blob-storage.yaml file:

    Example azure-blob-storage.yaml file

    apiVersion: metering.openshift.io/v1
    kind: MeteringConfig
    metadata:
      name: "operator-metering"
    spec:
      storage:
        type: "hive"
        hive:
          type: "azure"
          azure:
            container: "bucket1" 1
            secretName: "my-azure-secret" 2
            rootDirectory: "/testDir" 3

    1
    Specify the container name.
    2
    Specify a secret in the metering namespace. See the example Secret object below for more details.
    3
    Optional: Specify the directory where you would like to store your data.
  2. Use the following Secret object as a template:

    Example Azure Secret object

    apiVersion: v1
    kind: Secret
    metadata:
      name: my-azure-secret
    data:
      azure-storage-account-name: "dGVzdAo="
      azure-secret-access-key: "c2VjcmV0Cg=="

  3. Create the secret:

    $ oc create secret -n openshift-metering generic my-azure-secret \
      --from-literal=azure-storage-account-name=my-storage-account-name \
      --from-literal=azure-secret-access-key=my-secret-key

4.3.4. Storing data in Google Cloud Storage

To store your data in Google Cloud Storage, you must use an existing bucket.

Procedure

  1. Edit the spec.storage section in the gcs-storage.yaml file:

    Example gcs-storage.yaml file

    apiVersion: metering.openshift.io/v1
    kind: MeteringConfig
    metadata:
      name: "operator-metering"
    spec:
      storage:
        type: "hive"
        hive:
          type: "gcs"
          gcs:
            bucket: "metering-gcs/test1" 1
            secretName: "my-gcs-secret" 2

    1
    Specify the name of the bucket. You can optionally specify the directory within the bucket where you would like to store your data.
    2
    Specify a secret in the metering namespace. See the example Secret object below for more details.
  2. Use the following Secret object as a template:

    Example Google Cloud Storage Secret object

    apiVersion: v1
    kind: Secret
    metadata:
      name: my-gcs-secret
    data:
      gcs-service-account.json: "c2VjcmV0Cg=="

  3. Create the secret:

    $ oc create secret -n openshift-metering generic my-gcs-secret \
      --from-file gcs-service-account.json=/path/to/my/service-account-key.json

4.3.5. Storing data in shared volumes

Metering does not configure storage by default. However, you can use any ReadWriteMany persistent volume (PV) or any storage class that provisions a ReadWriteMany PV for metering storage.

Note

Using NFS in production is not recommended. Using an NFS server on RHEL as a storage back end can fail to meet metering requirements and might not provide the performance that the Metering Operator needs to work appropriately.

Other NFS implementations on the marketplace, such as Parallel Network File System (pNFS), might not have these issues. pNFS is an NFS implementation with distributed and parallel capability. Contact the individual NFS implementation vendor for more information on any testing that might have been completed against OpenShift Container Platform core components.

Procedure

  1. Modify the shared-storage.yaml file to use a ReadWriteMany persistent volume for storage:

    apiVersion: metering.openshift.io/v1
    kind: MeteringConfig
    metadata:
      name: "operator-metering"
    spec:
      storage:
        type: "hive"
        hive:
          type: "sharedPVC"
          sharedPVC:
            claimName: "metering-nfs" 1
            # Uncomment the lines below to provision a new PVC using the specified storageClass. 2
            # createPVC: true
            # storageClass: "my-nfs-storage-class"
            # size: 5Gi

    Select one of the configuration options below:

    1
    Set storage.hive.sharedPVC.claimName to the name of an existing ReadWriteMany persistent volume claim (PVC). This configuration is necessary if you do not have dynamic volume provisioning or want to have more control over how the persistent volume is created.
    2
    Set storage.hive.sharedPVC.createPVC to true and set the storage.hive.sharedPVC.storageClass to the name of a storage class with ReadWriteMany access mode. This configuration uses dynamic volume provisioning to create a volume automatically.
  2. Create the following resource objects that are required to deploy an NFS server for metering. Use the oc create -f <file-name>.yaml command to create the object YAML files.

    1. Configure a PersistentVolume resource object:

      Example nfs_persistentvolume.yaml file

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: nfs
        labels:
          role: nfs-server
      spec:
        capacity:
          storage: 5Gi
        accessModes:
        - ReadWriteMany
        storageClassName: nfs-server 1
        nfs:
          path: "/"
          server: REPLACEME
        persistentVolumeReclaimPolicy: Delete

      1
      Must exactly match the [kind: StorageClass].metadata.name field value.
    2. Configure a Pod resource object with the nfs-server role:

      Example nfs_server.yaml file

      apiVersion: v1
      kind: Pod
      metadata:
        name: nfs-server
        labels:
          role: nfs-server
      spec:
        containers:
          - name: nfs-server
            image: <image_name> 1
            imagePullPolicy: IfNotPresent
            ports:
              - name: nfs
                containerPort: 2049
            securityContext:
              privileged: true
            volumeMounts:
            - mountPath: "/mnt/data"
              name: local
        volumes:
          - name: local
            emptyDir: {}

      1
      Install your NFS server image.
    3. Configure a Service resource object with the nfs-server role:

      Example nfs_service.yaml file

      apiVersion: v1
      kind: Service
      metadata:
        name: nfs-service
        labels:
          role: nfs-server
      spec:
        ports:
        - name: 2049-tcp
          port: 2049
          protocol: TCP
          targetPort: 2049
        selector:
          role: nfs-server
        sessionAffinity: None
        type: ClusterIP

    4. Configure a StorageClass resource object:

      Example nfs_storageclass.yaml file

      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: nfs-server 1
      provisioner: example.com/nfs
      parameters:
        archiveOnDelete: "false"
      reclaimPolicy: Delete
      volumeBindingMode: Immediate

      1
      Must exactly match the [kind: PersistentVolume].spec.storageClassName field value.
Warning

Configuration of your NFS storage, and any relevant resource objects, will vary depending on the NFS server image that you use for metering storage.

4.4. Configuring the Hive metastore

Hive metastore is responsible for storing all the metadata about the database tables created in Presto and Hive. By default, the metastore stores this information in a local embedded Derby database in a persistent volume attached to the pod.

Generally, the default configuration of the Hive metastore works for small clusters, but you might want to improve performance or move storage requirements out of the cluster by using a dedicated SQL database to store the Hive metastore data.

4.4.1. Configuring persistent volumes

By default, Hive requires one persistent volume to operate.

hive-metastore-db-data is the main persistent volume claim (PVC) required by default. This PVC is used by the Hive metastore to store metadata about tables, such as table name, columns, and location. The Hive metastore is used by Presto and the Hive server to look up table metadata when processing queries. You can remove this requirement by using MySQL or PostgreSQL for the Hive metastore database.

To install the Hive metastore, you must either have dynamic volume provisioning enabled through a storage class, manually pre-create a persistent volume of the correct size, or use a pre-existing MySQL or PostgreSQL database.

4.4.1.1. Configuring the storage class for the Hive metastore

To configure and specify a storage class for the hive-metastore-db-data persistent volume claim, specify the storage class in your MeteringConfig custom resource. An example storage section with the class field is included in the metastore-storage.yaml file below.

apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  hive:
    spec:
      metastore:
        storage:
          # Default is null, which means using the default storage class if it exists.
          # If you wish to use a different storage class, specify it here
          # class: "null" 1
          size: "5Gi"
1
Uncomment this line and replace null with the name of the storage class to use. Leaving the value null will cause metering to use the default storage class for the cluster.

4.4.1.2. Configuring the volume size for the Hive metastore

Use the metastore-storage.yaml file below as a template to configure the volume size for the Hive metastore.

apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  hive:
    spec:
      metastore:
        storage:
          # Default is null, which means using the default storage class if it exists.
          # If you wish to use a different storage class, specify it here
          # class: "null"
          size: "5Gi" 1
1
Replace the value for size with your desired capacity. The example file shows "5Gi".

4.4.2. Use MySQL or PostgreSQL for the Hive metastore

The default installation of metering configures Hive to use an embedded Java database called Derby. This is unsuited for larger environments and can be replaced with either a MySQL or PostgreSQL database. Use the following example configuration files if your deployment requires a MySQL or PostgreSQL database for Hive.

There are four configuration options that you can use to control the database used by the Hive metastore: url, driver, username, and password.

Use the example configuration file below to use a MySQL database for Hive:

spec:
  hive:
    spec:
      metastore:
        storage:
          create: false
      config:
        db:
          url: "jdbc:mysql://mysql.example.com:3306/hive_metastore"
          driver: "com.mysql.jdbc.Driver"
          username: "REPLACEME"
          password: "REPLACEME"

You can pass additional JDBC parameters by appending them to the spec.hive.spec.config.db.url value. For more details, see the MySQL Connector/J documentation.
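
For example, a hedged sketch that appends Connector/J parameters to the URL; the useSSL and characterEncoding parameters are shown for illustration only and should be checked against your driver version:

spec:
  hive:
    spec:
      config:
        db:
          url: "jdbc:mysql://mysql.example.com:3306/hive_metastore?useSSL=false&characterEncoding=UTF-8"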

Use the example configuration file below to use a PostgreSQL database for Hive:

spec:
  hive:
    spec:
      metastore:
        storage:
          create: false
      config:
        db:
          url: "jdbc:postgresql://postgresql.example.com:5432/hive_metastore"
          driver: "org.postgresql.Driver"
          username: "REPLACEME"
          password: "REPLACEME"

You can pass additional JDBC parameters using the URL. For more details see the PostgreSQL JDBC driver documentation.
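
For example, a hedged sketch that appends PostgreSQL JDBC parameters to the URL; the ssl and sslmode parameters are shown for illustration only:

spec:
  hive:
    spec:
      config:
        db:
          url: "jdbc:postgresql://postgresql.example.com:5432/hive_metastore?ssl=true&sslmode=require"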

4.5. Configuring the Reporting Operator

The Reporting Operator is responsible for collecting data from Prometheus, storing the metrics in Presto, running report queries against Presto, and exposing their results via an HTTP API. Configuring the Reporting Operator is primarily done in your MeteringConfig custom resource.

4.5.1. Securing a Prometheus connection

When you install metering on OpenShift Container Platform, Prometheus is available at https://prometheus-k8s.openshift-monitoring.svc:9091/.

To secure the connection to Prometheus, the default metering installation uses the OpenShift Container Platform certificate authority (CA). If your Prometheus instance uses a different CA, you can inject the CA through a config map. You can also configure the Reporting Operator to use a specified bearer token to authenticate with Prometheus.

Procedure

  • Inject the CA that your Prometheus instance uses through a config map. For example:

    spec:
      reporting-operator:
        spec:
          config:
            prometheus:
              certificateAuthority:
                useServiceAccountCA: false
                configMap:
                  enabled: true
                  create: true
                  name: reporting-operator-certificate-authority-config
                  filename: "internal-ca.crt"
                  value: |
                    -----BEGIN CERTIFICATE-----
                    (snip)
                    -----END CERTIFICATE-----

    Alternatively, to use the system certificate authorities for publicly valid certificates, set both useServiceAccountCA and configMap.enabled to false, as shown in the sketch after this procedure.

  • Specify a bearer token to authenticate with Prometheus. For example:
spec:
  reporting-operator:
    spec:
      config:
        prometheus:
          metricsImporter:
            auth:
              useServiceAccountToken: false
              tokenSecret:
                enabled: true
                create: true
                value: "abc-123"
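
The following is a minimal sketch of the alternative described in the first bullet of this procedure, using the system certificate authorities by disabling both the service account CA and the config map; it assumes your Prometheus endpoint presents a publicly valid certificate:

spec:
  reporting-operator:
    spec:
      config:
        prometheus:
          certificateAuthority:
            # Use the system certificate authorities instead of the service
            # account CA or an injected config map.
            useServiceAccountCA: false
            configMap:
              enabled: false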

4.5.2. Exposing the reporting API

On OpenShift Container Platform the default metering installation automatically exposes a route, making the reporting API available. This provides the following features:

  • Automatic DNS
  • Automatic TLS based on the cluster CA

Also, the default installation makes it possible to use the OpenShift service serving certificates feature to protect the reporting API with TLS. The OpenShift OAuth proxy is deployed as a sidecar container for the Reporting Operator, which protects the reporting API with authentication.

4.5.2.1. Using OpenShift Authentication

By default, the reporting API is secured with TLS and authentication. This is done by configuring the Reporting Operator to deploy a pod containing both the Reporting Operator's container and a sidecar container running the OpenShift auth-proxy.

To access the reporting API, the Metering Operator exposes a route. After that route has been created, you can run the following command to get the route's hostname.

$ METERING_ROUTE_HOSTNAME=$(oc -n openshift-metering get routes metering -o json | jq -r '.status.ingress[].host')

Next, set up authentication using either a service account token or basic authentication with a username and password.

4.5.2.1.1. Authenticate using a service account token

With this method, you use the token of the Reporting Operator's service account and pass that bearer token in the Authorization header of the following command:

$ TOKEN=$(oc -n openshift-metering serviceaccounts get-token reporting-operator)
curl -H "Authorization: Bearer $TOKEN" -k "https://$METERING_ROUTE_HOSTNAME/api/v1/reports/get?name=[Report Name]&namespace=openshift-metering&format=[Format]"

Be sure to replace the name=[Report Name] and format=[Format] parameters in the URL above. The format parameter can be json, csv, or tabular.
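
For example, a hedged sketch of a concrete request, assuming a report named namespace-cpu-request exists in the openshift-metering namespace and CSV output is wanted:

$ curl -H "Authorization: Bearer $TOKEN" -k "https://$METERING_ROUTE_HOSTNAME/api/v1/reports/get?name=namespace-cpu-request&namespace=openshift-metering&format=csv"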

4.5.2.1.2. Authenticate using a username and password

Metering supports configuring basic authentication using a username and password combination, which is specified in the contents of an htpasswd file. By default, a secret containing empty htpasswd data is created. You can, however, configure the reporting-operator.spec.authProxy.htpasswd.data and reporting-operator.spec.authProxy.htpasswd.createSecret keys to use this method.
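
The following is a minimal sketch of those keys, reusing the testuser:password123 htpasswd entry shown later in this chapter; replace the data with your own htpasswd output:

spec:
  reporting-operator:
    spec:
      authProxy:
        htpasswd:
          createSecret: true
          data: |
            testuser:{SHA}y/2sYAj5yrQIN4TL0YdPdmGNKpc=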

Once you have specified the above in your MeteringConfig resource, you can run the following command:

$ curl -u testuser:password123 -k "https://$METERING_ROUTE_HOSTNAME/api/v1/reports/get?name=[Report Name]&namespace=openshift-metering&format=[Format]"

Be sure to replace testuser:password123 with a valid username and password combination.

4.5.2.2. Manually Configuring Authentication

To manually configure or disable OAuth in the Reporting Operator, you must set spec.tls.enabled: false in your MeteringConfig resource.

Warning

This also disables all TLS and authentication between the Reporting Operator, Presto, and Hive. You must then configure TLS and authentication for these resources manually.

Authentication can be enabled by configuring the following options. Enabling authentication configures the Reporting Operator pod to run the OpenShift auth-proxy as a sidecar container in the pod. This adjusts the ports so that the reporting API is not exposed directly, but is instead proxied through the auth-proxy sidecar container.

  • reporting-operator.spec.authProxy.enabled
  • reporting-operator.spec.authProxy.cookie.createSecret
  • reporting-operator.spec.authProxy.cookie.seed

You need to set reporting-operator.spec.authProxy.enabled and reporting-operator.spec.authProxy.cookie.createSecret to true and reporting-operator.spec.authProxy.cookie.seed to a 32-character random string.
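
A minimal sketch of those settings is shown below; the seed value is a placeholder for the 32-character random string that you can generate with the command that follows:

spec:
  reporting-operator:
    spec:
      authProxy:
        enabled: true
        cookie:
          createSecret: true
          seed: "<32-character-random-string>"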

You can generate a 32-character random string using the following command:

$ openssl rand -base64 32 | head -c32; echo

4.5.2.2.1. Token authentication

When the following options are set to true, authentication using a bearer token is enabled for the reporting REST API. Bearer tokens can come from service accounts or users.

  • reporting-operator.spec.authProxy.subjectAccessReview.enabled
  • reporting-operator.spec.authProxy.delegateURLs.enabled

When authentication is enabled, the bearer token that the user or service account uses to query the reporting API must be granted access through one of the following roles:

  • report-exporter
  • reporting-admin
  • reporting-viewer
  • metering-admin
  • metering-viewer

The Metering Operator can create role bindings for you, granting these permissions when you specify a list of subjects in the spec.permissions section. For an example, see the following advanced-auth.yaml example configuration.

apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  permissions:
    # anyone in the "metering-admins" group can create, update, delete, etc any
    # metering.openshift.io resources in the namespace.
    # This also grants permissions to get query report results from the reporting REST API.
    meteringAdmins:
    - kind: Group
      name: metering-admins
    # Same as above except read only access and for the metering-viewers group.
    meteringViewers:
    - kind: Group
      name: metering-viewers
    # the default serviceaccount in the namespace "my-custom-ns" can:
    # create, update, delete, etc reports.
    # This also grants permissions to query the results from the reporting REST API.
    reportingAdmins:
    - kind: ServiceAccount
      name: default
      namespace: my-custom-ns
    # anyone in the group reporting-readers can get, list, watch reports, and
    # query report results from the reporting REST API.
    reportingViewers:
    - kind: Group
      name: reporting-readers
    # anyone in the group cluster-admins can query report results
    # from the reporting REST API. So can the user bob-from-accounting.
    reportExporters:
    - kind: Group
      name: cluster-admins
    - kind: User
      name: bob-from-accounting

  reporting-operator:
    spec:
      authProxy:
        # htpasswd.data can contain htpasswd file contents for allowing auth
        # using a static list of usernames and their password hashes.
        #
        # username is 'testuser' password is 'password123'
        # generated htpasswdData using: `htpasswd -nb -s testuser password123`
        # htpasswd:
        #   data: |
        #     testuser:{SHA}y/2sYAj5yrQIN4TL0YdPdmGNKpc=
        #
        # change REPLACEME to the output of your htpasswd command
        htpasswd:
          data: |
            REPLACEME

Alternatively, you can use any role that has rules granting get permissions to reports/export. This means get access to the export sub-resource of the Report resources in the namespace of the Reporting Operator, for example the admin and cluster-admin roles.

By default, the Reporting Operator and Metering Operator service accounts both have these permissions, and their tokens can be used for authentication.
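
As a hedged example, you can also grant one of the roles listed above to a user manually; the binding name is a placeholder, and the command assumes the report-exporter role exists in the openshift-metering namespace:

$ oc -n openshift-metering create rolebinding report-exporter-bob \
  --role=report-exporter --user=bob-from-accounting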

4.5.2.2.2. Basic authentication with a username and password

For basic authentication, you can supply a username and password in the reporting-operator.spec.authProxy.htpasswd.data field. The username and password must be in the same format as entries in an htpasswd file. When set, you can use HTTP basic authentication to provide a username and password that has a corresponding entry in the htpasswd data.
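
For example, you can generate an entry in htpasswd format with the htpasswd utility, as shown in the comments of the earlier advanced-auth.yaml example; testuser and password123 are placeholders:

$ htpasswd -nb -s testuser password123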

4.6. Configure AWS billing correlation

Metering can correlate cluster usage information with AWS detailed billing information, attaching a dollar amount to resource usage. For clusters running in EC2, you can enable this by modifying the example aws-billing.yaml file below.

apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  openshift-reporting:
    spec:
      awsBillingReportDataSource:
        enabled: true
        # Replace these with where your AWS billing reports are
        # stored in S3.
        bucket: "<your-aws-cost-report-bucket>" 1
        prefix: "<path/to/report>"
        region: "<your-buckets-region>"

  reporting-operator:
    spec:
      config:
        aws:
          secretName: "<your-aws-secret>" 2

  presto:
    spec:
      config:
        aws:
          secretName: "<your-aws-secret>" 3

  hive:
    spec:
      config:
        aws:
          secretName: "<your-aws-secret>" 4

To enable AWS billing correlation, first ensure the AWS Cost and Usage Reports are enabled. For more information, see Turning on the AWS Cost and Usage Report in the AWS documentation.

1
Update the bucket, prefix, and region to the location of your AWS Detailed billing report.
2 3 4
All secretName fields should be set to the name of a secret in the metering namespace containing AWS credentials in the data.aws-access-key-id and data.aws-secret-access-key fields. See the example secret file below for more details.
apiVersion: v1
kind: Secret
metadata:
  name: <your-aws-secret>
data:
  aws-access-key-id: "dGVzdAo="
  aws-secret-access-key: "c2VjcmV0Cg=="
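
As with the storage configuration, a hedged sketch of creating this secret from literal values, which are base64 encoded automatically; replace the secret name and credentials with your own:

$ oc create secret -n openshift-metering generic <your-aws-secret> \
  --from-literal=aws-access-key-id=my-access-key \
  --from-literal=aws-secret-access-key=my-secret-key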

To store data in S3, the aws-access-key-id and aws-secret-access-key credentials must have read and write access to the bucket. For an example of an IAM policy granting the required permissions, see the aws/read-write.json file below.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "1",
            "Effect": "Allow",
            "Action": [
                "s3:AbortMultipartUpload",
                "s3:DeleteObject",
                "s3:GetObject",
                "s3:HeadBucket",
                "s3:ListBucket",
                "s3:ListMultipartUploadParts",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::operator-metering-data/*", 1
                "arn:aws:s3:::operator-metering-data" 2
            ]
        }
    ]
}
1 2
Replace operator-metering-data with the name of your bucket.

You can enable AWS billing correlation either before or after installation. Disabling AWS billing correlation after installation can cause errors in the Reporting Operator.
