Chapter 2. Installing metering


Review the following sections before installing metering into your cluster.

To get started installing metering, first install the Metering Operator from OperatorHub. Next, configure your instance of metering by creating a MeteringConfig custom resource (CR). Installing the Metering Operator creates a default MeteringConfig resource that you can modify using the examples in the documentation. After creating your MeteringConfig resource, install the metering stack. Last, verify your installation.

2.1. Prerequisites

Metering requires the following components:

  • A StorageClass resource for dynamic volume provisioning. Metering supports a number of different storage solutions. You can check the storage classes available in your cluster as shown in the example after this list.
  • 4GB of memory and 4 CPU cores of available cluster capacity, and at least one node with 2 CPU cores and 2GB of memory available.
  • The minimum resources needed for the largest single pod installed by metering are 2GB of memory and 2 CPU cores.

    • Memory and CPU consumption are often lower, but spike when running reports or when collecting data for larger clusters.
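
As an optional prerequisite check, you can list the storage classes that are available for dynamic volume provisioning. This is a sketch outside the official procedure and assumes you are logged in to the cluster with the oc CLI; review the configuring metering documentation for the storage options that metering supports:

    $ oc get storageclass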

2.2. Installing the Metering Operator

You can install metering by deploying the Metering Operator. The Metering Operator creates and manages the components of the metering stack.

Note

You cannot create a project whose name starts with openshift- by using the web console or the oc new-project command in the CLI.

Note

If the Metering Operator is installed in a namespace other than openshift-metering, metering reports are viewable only by using the CLI. It is strongly suggested that you use the openshift-metering namespace throughout the installation steps.

2.2.1. Installing metering using the web console

You can use the OpenShift Container Platform web console to install the Metering Operator.

Procedure

  1. Create a Namespace object YAML file for the Metering Operator, then create the namespace with the oc create -f <file-name>.yaml command; an example command is shown after this procedure. You must use the CLI to create the namespace. For example, metering-namespace.yaml:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-metering 1
      annotations:
        openshift.io/node-selector: "" 2
      labels:
        openshift.io/cluster-monitoring: "true"
    1
    It is strongly recommended to deploy metering in the openshift-metering namespace.
    2
    Include this annotation before configuring specific node selectors for the operand pods.
  2. In the OpenShift Container Platform web console, click Operators → OperatorHub. Filter for metering to find the Metering Operator.
  3. Click the Metering card, review the package description, and then click Install.
  4. Select an Update Channel, Installation Mode, and Approval Strategy.
  5. Click Install.
  6. Verify that the Metering Operator is installed by switching to the Operators → Installed Operators page. The Metering Operator has a Status of Succeeded when the installation is complete.

    Note

    It might take several minutes for the Metering Operator to appear.

  7. Click Metering on the Installed Operators page to open the Operator Details page. From the Details page, you can create different resources related to metering.
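
The following is the CLI command referenced in step 1 of this procedure for creating the namespace, assuming the example file name metering-namespace.yaml shown above:

    $ oc create -f metering-namespace.yaml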

To complete the metering installation, create a MeteringConfig resource to configure metering and install the components of the metering stack.

2.2.2. Installing metering using the CLI

You can use the OpenShift Container Platform CLI to install the Metering Operator.

Procedure

  1. Create a Namespace object YAML file for the Metering Operator. You must use the CLI to create the namespace. For example, metering-namespace.yaml:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-metering 1
      annotations:
        openshift.io/node-selector: "" 2
      labels:
        openshift.io/cluster-monitoring: "true"
    1
    It is strongly recommended to deploy metering in the openshift-metering namespace.
    2
    Include this annotation before configuring specific node selectors for the operand pods.
  2. Create the Namespace object:

    $ oc create -f <file-name>.yaml

    For example:

    $ oc create -f metering-namespace.yaml
  3. Create an OperatorGroup object YAML file. For example, metering-og.yaml:

    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: openshift-metering 1
      namespace: openshift-metering 2
    spec:
      targetNamespaces:
      - openshift-metering
    1
    The name is arbitrary.
    2
    Specify the openshift-metering namespace.
  4. Create a Subscription object YAML file to subscribe a namespace to the Metering Operator. This object targets the most recently released version in the redhat-operators catalog source. For example, metering-sub.yaml:

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: metering-ocp 1
      namespace: openshift-metering 2
    spec:
      channel: "4.5" 3
      source: "redhat-operators" 4
      sourceNamespace: "openshift-marketplace"
      name: "metering-ocp"
      installPlanApproval: "Automatic" 5
    1
    The name is arbitrary.
    2
    You must specify the openshift-metering namespace.
    3
    Specify 4.5 as the channel.
    4
    Specify the redhat-operators catalog source, which contains the metering-ocp package manifests. If your OpenShift Container Platform cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object that you created when you configured Operator Lifecycle Manager (OLM).
    5
    Specify "Automatic" install plan approval.
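
After creating these files, create the OperatorGroup and Subscription objects in your cluster. The commands below are a sketch that assumes the example file names metering-og.yaml and metering-sub.yaml used above:

    $ oc create -f metering-og.yaml
    $ oc create -f metering-sub.yaml

Creating the Subscription object installs the Metering Operator through Operator Lifecycle Manager (OLM). You can confirm the result by using the checks described in "Verifying the metering installation".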

2.3. Installing the metering stack

After adding the Metering Operator to your cluster, you can install the components of metering by installing the metering stack.

2.4. Prerequisites

Important

There can only be one MeteringConfig resource in the openshift-metering namespace. Any other configuration is not supported.
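
As an optional check before you proceed, you can confirm whether a MeteringConfig resource already exists. This sketch assumes the Metering Operator, and therefore the MeteringConfig custom resource definition, is already installed:

    $ oc -n openshift-metering get meteringconfigs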

Procedure

  1. From the web console, ensure you are on the Operator Details page for the Metering Operator in the openshift-metering project. You can navigate to this page by clicking Operators → Installed Operators, then selecting the Metering Operator.
  2. Under Provided APIs, click Create Instance on the Metering Configuration card. This opens a YAML editor with the default MeteringConfig resource file where you can define your configuration.

    Note

    For example configuration files and all supported configuration options, review the configuring metering documentation.

  3. Enter your MeteringConfig resource into the YAML editor and click Create.
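
The following is a minimal sketch of a MeteringConfig resource that stores metering data in Amazon S3, provided only as an illustration. The bucket, region, and secret names are placeholders, and you should confirm the fields against the default MeteringConfig file and the configuring metering documentation before using it:

    apiVersion: metering.openshift.io/v1
    kind: MeteringConfig
    metadata:
      name: operator-metering
      namespace: openshift-metering
    spec:
      storage:
        type: hive
        hive:
          type: s3
          s3:
            bucket: example-bucket/metering-data/  # placeholder bucket and path
            region: us-east-1                      # placeholder AWS region
            secretName: example-aws-credentials    # placeholder Secret containing AWS credentials
            createBucket: false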

The MeteringConfig resource begins to create the necessary resources for your metering stack. You can now move on to verifying your installation.

2.5. Verifying the metering installation

You can verify the metering installation by performing any of the following checks:

  • Check the Metering Operator ClusterServiceVersion (CSV) resource for the metering version. This can be done through either the web console or CLI.

    Procedure (UI)

    1. Navigate to Operators → Installed Operators in the openshift-metering namespace.
    2. Click Metering Operator.
    3. Click Subscription to view the Subscription Details.
    4. Check the Installed Version.

    Procedure (CLI)

    • Check the Metering Operator CSV in the openshift-metering namespace:

      $ oc --namespace openshift-metering get csv

      Example output

      NAME                                           DISPLAY                  VERSION                 REPLACES   PHASE
      elasticsearch-operator.4.5.0-202006231303.p0   Elasticsearch Operator   4.5.0-202006231303.p0              Succeeded
      metering-operator.v4.5.0                       Metering                 4.5.0                              Succeeded

  • Check that all required pods in the openshift-metering namespace are created. This can be done through either the web console or CLI.

    Note

    Many pods rely on other components to function before they themselves can be considered ready. Some pods may restart if other pods take too long to start. This is to be expected during the Metering Operator installation.

    Procedure (UI)

    • Navigate to Workloads → Pods in the openshift-metering namespace and verify that pods are being created. This can take several minutes after installing the metering stack.

    Procedure (CLI)

    • Check that all required pods in the openshift-metering namespace are created:

      $ oc -n openshift-metering get pods

      Example output

      NAME                                  READY   STATUS    RESTARTS   AGE
      hive-metastore-0                      2/2     Running   0          3m28s
      hive-server-0                         3/3     Running   0          3m28s
      metering-operator-68dd64cfb6-2k7d9    2/2     Running   0          5m17s
      presto-coordinator-0                  2/2     Running   0          3m9s
      reporting-operator-5588964bf8-x2tkn   2/2     Running   0          2m40s

  • Verify that the ReportDataSource resources are beginning to import data, indicated by a valid timestamp in the EARLIEST METRIC column. This might take several minutes. Filter out the "-raw" ReportDataSource resources, which do not import data:

    $ oc get reportdatasources -n openshift-metering | grep -v raw

    Example output

    NAME                                         EARLIEST METRIC        NEWEST METRIC          IMPORT START           IMPORT END             LAST IMPORT TIME       AGE
    node-allocatable-cpu-cores                   2019-08-05T16:52:00Z   2019-08-05T18:52:00Z   2019-08-05T16:52:00Z   2019-08-05T18:52:00Z   2019-08-05T18:54:45Z   9m50s
    node-allocatable-memory-bytes                2019-08-05T16:51:00Z   2019-08-05T18:51:00Z   2019-08-05T16:51:00Z   2019-08-05T18:51:00Z   2019-08-05T18:54:45Z   9m50s
    node-capacity-cpu-cores                      2019-08-05T16:51:00Z   2019-08-05T18:29:00Z   2019-08-05T16:51:00Z   2019-08-05T18:29:00Z   2019-08-05T18:54:39Z   9m50s
    node-capacity-memory-bytes                   2019-08-05T16:52:00Z   2019-08-05T18:41:00Z   2019-08-05T16:52:00Z   2019-08-05T18:41:00Z   2019-08-05T18:54:44Z   9m50s
    persistentvolumeclaim-capacity-bytes         2019-08-05T16:51:00Z   2019-08-05T18:29:00Z   2019-08-05T16:51:00Z   2019-08-05T18:29:00Z   2019-08-05T18:54:43Z   9m50s
    persistentvolumeclaim-phase                  2019-08-05T16:51:00Z   2019-08-05T18:29:00Z   2019-08-05T16:51:00Z   2019-08-05T18:29:00Z   2019-08-05T18:54:28Z   9m50s
    persistentvolumeclaim-request-bytes          2019-08-05T16:52:00Z   2019-08-05T18:30:00Z   2019-08-05T16:52:00Z   2019-08-05T18:30:00Z   2019-08-05T18:54:34Z   9m50s
    persistentvolumeclaim-usage-bytes            2019-08-05T16:52:00Z   2019-08-05T18:30:00Z   2019-08-05T16:52:00Z   2019-08-05T18:30:00Z   2019-08-05T18:54:36Z   9m49s
    pod-limit-cpu-cores                          2019-08-05T16:52:00Z   2019-08-05T18:30:00Z   2019-08-05T16:52:00Z   2019-08-05T18:30:00Z   2019-08-05T18:54:26Z   9m49s
    pod-limit-memory-bytes                       2019-08-05T16:51:00Z   2019-08-05T18:40:00Z   2019-08-05T16:51:00Z   2019-08-05T18:40:00Z   2019-08-05T18:54:30Z   9m49s
    pod-persistentvolumeclaim-request-info       2019-08-05T16:51:00Z   2019-08-05T18:40:00Z   2019-08-05T16:51:00Z   2019-08-05T18:40:00Z   2019-08-05T18:54:37Z   9m49s
    pod-request-cpu-cores                        2019-08-05T16:51:00Z   2019-08-05T18:18:00Z   2019-08-05T16:51:00Z   2019-08-05T18:18:00Z   2019-08-05T18:54:24Z   9m49s
    pod-request-memory-bytes                     2019-08-05T16:52:00Z   2019-08-05T18:08:00Z   2019-08-05T16:52:00Z   2019-08-05T18:08:00Z   2019-08-05T18:54:32Z   9m49s
    pod-usage-cpu-cores                          2019-08-05T16:52:00Z   2019-08-05T17:57:00Z   2019-08-05T16:52:00Z   2019-08-05T17:57:00Z   2019-08-05T18:54:10Z   9m49s
    pod-usage-memory-bytes                       2019-08-05T16:52:00Z   2019-08-05T18:08:00Z   2019-08-05T16:52:00Z   2019-08-05T18:08:00Z   2019-08-05T18:54:20Z   9m49s

After all pods are ready and you have verified that data is being imported, you can begin using metering to collect data and report on your cluster.
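
As a preview of the next step, the following is a hedged sketch of a run-once Report resource that uses the namespace-cpu-request query shipped with metering. The report name and the reporting window are placeholders; confirm the field names and available queries against the reports documentation for your metering version:

    apiVersion: metering.openshift.io/v1
    kind: Report
    metadata:
      name: namespace-cpu-request-example    # placeholder report name
      namespace: openshift-metering
    spec:
      query: namespace-cpu-request           # built-in query provided by metering
      reportingStart: '2019-08-01T00:00:00Z' # placeholder reporting window
      reportingEnd: '2019-08-31T23:59:59Z'
      runImmediately: true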

2.6. Additional resources

  • For example MeteringConfig files and all supported configuration options, see the configuring metering documentation.
