
Chapter 30. Using Metering on Streams for Apache Kafka


You can use the Metering tool that is available on OpenShift to generate metering reports from different data sources. As a cluster administrator, you can use metering to analyze what is happening in your cluster. You can write your own SQL queries or use predefined ones to define how to process data from the different data sources available to you. Using Prometheus as the default data source, you can generate reports on pods, namespaces, and most other OpenShift resources.

You can also use the OpenShift Metering operator to analyze your installed Streams for Apache Kafka components to determine whether you are in compliance with your Red Hat subscription.

To use metering with Streams for Apache Kafka, you must first install and configure the Metering operator on OpenShift Container Platform.
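
After installing the Metering operator, you configure the metering stack by creating a MeteringConfig resource. The following is a minimal sketch only; the bucket, region, and secret names are placeholders, and the storage configuration you need depends entirely on your environment:

    apiVersion: metering.openshift.io/v1
    kind: MeteringConfig
    metadata:
      name: operator-metering
      namespace: openshift-metering
    spec:
      storage:
        type: hive
        hive:
          type: s3
          s3:
            bucket: my-metering-bucket    # placeholder bucket name
            region: us-east-1             # placeholder region
            secretName: my-aws-secret     # placeholder secret containing AWS credentials
            createBucket: false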

30.1. Metering resources

Metering has many resources that you can use to manage its deployment and installation, as well as the reporting functionality it provides. Metering is managed using the following custom resource definitions (CRDs):

Table 30.1. Metering resources

MeteringConfig
    Configures the metering stack for deployment. Contains customizations and configuration options to control each component that makes up the metering stack.

Reports
    Controls what query to use, when, and how often the query should be run, and where to store the results.

ReportQueries
    Contains the SQL queries used to perform analysis on the data contained within ReportDataSources.

ReportDataSources
    Controls the data available to ReportQueries and Reports. Allows configuring access to different databases for use within metering.
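
For example, a Report resource that runs one of the predefined ReportQueries on a schedule might look like the following sketch. The report name, start time, and schedule are illustrative, and the query name assumes the predefined namespace-cpu-request ReportQuery is available in your metering installation:

    apiVersion: metering.openshift.io/v1
    kind: Report
    metadata:
      name: namespace-cpu-request-daily      # illustrative report name
      namespace: openshift-metering
    spec:
      query: namespace-cpu-request           # predefined ReportQuery to run
      reportingStart: "2024-01-01T00:00:00Z" # illustrative start of the reporting period
      schedule:
        period: daily                        # run the query once a day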

30.2. Metering labels for Streams for Apache Kafka

The following table lists the metering labels for Streams for Apache Kafka infrastructure components and integrations.

Table 30.2. Metering labels

com.company
    Red_Hat

rht.prod_name
    Red_Hat_Application_Foundations

rht.prod_ver
    2024.Q2

rht.comp
    AMQ_Streams

rht.comp_ver
    2.7

rht.subcomp
    Infrastructure:
        cluster-operator
        entity-operator
        topic-operator
        user-operator
        zookeeper
    Application:
        kafka-broker
        kafka-connect
        kafka-connect-build
        kafka-mirror-maker2
        kafka-mirror-maker
        cruise-control
        kafka-bridge
        kafka-exporter
        drain-cleaner

rht.subcomp_t
    infrastructure
    application

Examples

  • Infrastructure example (where the infrastructure component is entity-operator)

    com.company=Red_Hat
    rht.prod_name=Red_Hat_Application_Foundations
    rht.prod_ver=2024.Q2
    rht.comp=AMQ_Streams
    rht.comp_ver=2.7
    rht.subcomp=entity-operator
    rht.subcomp_t=infrastructure
  • Application example (where the integration deployment name is kafka-bridge)

    com.company=Red_Hat
    rht.prod_name=Red_Hat_Application_Foundations
    rht.prod_ver=2024.Q2
    rht.comp=AMQ_Streams
    rht.comp_ver=2.7
    rht.subcomp=kafka-bridge
    rht.subcomp_t=application
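
To verify which metering labels are applied in a running cluster, you can filter pods by label. The following command is a sketch only and assumes the labels shown above are present on your Streams for Apache Kafka component pods:

    # List all pods labeled as Streams for Apache Kafka components, including their labels
    oc get pods --all-namespaces -l rht.comp=AMQ_Streams --show-labels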