Using the Streams for Apache Kafka Console
Abstract
The Streams for Apache Kafka console supports your deployment of Streams for Apache Kafka.
Preface
Providing feedback on Red Hat documentation
We appreciate your feedback on our documentation.
To propose improvements, open a Jira issue and describe your suggested changes. Provide as much detail as possible to enable us to address your request quickly.
Prerequisite
- You have a Red Hat Customer Portal account. This account enables you to log in to the Red Hat Jira Software instance. If you do not have an account, you will be prompted to create one.
Procedure
- Click the following: Create issue.
- In the Summary text box, enter a brief description of the issue.
- In the Description text box, provide the following information:
  - The URL of the page where you found the issue.
  - A detailed description of the issue.
  You can leave the information in any other fields at their default values.
- Add a reporter name.
- Click Create to submit the Jira issue to the documentation team.
Thank you for taking the time to provide feedback.
Technology preview
The Streams for Apache Kafka Console is a technology preview.
Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete; therefore, Red Hat does not recommend implementing any Technology Preview features in production environments. This Technology Preview feature provides early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process. For more information about the support scope, see Technology Preview Features Support Scope.
Chapter 1. Administering Kafka Clusters with Streams for Apache Kafka Console
The Streams for Apache Kafka Console provides a user interface to facilitate the administration of Kafka clusters, delivering real-time insights for monitoring, managing, and optimizing each cluster.
Chapter 2. Connecting the Streams for Apache Kafka Console to a Kafka cluster
Deploy the Streams for Apache Kafka Console to the same OpenShift cluster as the Kafka cluster managed by Streams for Apache Kafka. Use the installation files provided with the Streams for Apache Kafka Console.
For each Kafka cluster, the configuration of the Kafka resource used to install the cluster requires the following:
- Sufficient authorization for the console to connect to the cluster.
- Prometheus enabled and able to scrape metrics from the cluster.
- Metrics configuration (through a ConfigMap) for exporting metrics in a format suitable for Prometheus.
The Streams for Apache Kafka Console requires a Kafka user, configured as a KafkaUser custom resource, for the console to access the cluster as an authenticated and authorized user. When you configure the KafkaUser authentication and authorization mechanisms, ensure they match the equivalent Kafka configuration:
- KafkaUser.spec.authentication matches Kafka.spec.kafka.listeners[*].authentication
- KafkaUser.spec.authorization matches Kafka.spec.kafka.authorization
Prometheus must be installed and configured to scrape metrics from Kubernetes and Kafka clusters and populate the metrics graphs in the console.
Prerequisites
- Installation requires an OpenShift user with cluster-admin role, such as system:admin.
- An OpenShift 4.12 to 4.15 cluster.
- A Kafka cluster managed by Streams for Apache Kafka running on the OpenShift cluster.
- The Prometheus Operator, which must be a separate operator from the one deployed for OpenShift monitoring.
- The oc command-line tool is installed and configured to connect to the OpenShift cluster.
- Secret values for session management and authentication within the console.
You can use the OpenSSL TLS management tool for generating the values as follows:
SESSION_SECRET=$(LC_CTYPE=C openssl rand -base64 32)
echo "Generated SESSION_SECRET: $SESSION_SECRET"
NEXTAUTH_SECRET=$(LC_CTYPE=C openssl rand -base64 32)
echo "Generated NEXTAUTH_SECRET: $NEXTAUTH_SECRET"
Use openssl help for command-line descriptions of the options used.
In addition to the files to install the console, pre-configured files to install the Streams for Apache Kafka Operator, the Prometheus Operator, a Prometheus instance, and a Kafka cluster are also included with the Streams for Apache Kafka Console installation artifacts. In this procedure, we assume the operators are installed. The installation files offer the quickest way to set up and try the console, though you can use your own deployments of Streams for Apache Kafka and Prometheus.
Procedure
Download and extract the Streams for Apache Kafka Console installation artifacts.
The artifacts are included with installation and example files available from the Streams for Apache Kafka software downloads page.
The files contain the deployment configuration required for the console, the Kafka cluster, and Prometheus.
The example Kafka configuration creates a route listener that the console uses to connect to the Kafka cluster. As the console and the Kafka cluster are deployed on the same OpenShift cluster, you can use the internal bootstrap address of the Kafka cluster instead of a route.
Create a Prometheus instance with the configuration required by the console by applying the Prometheus installation files:
Edit ${NAMESPACE} in the console-prometheus-server.clusterrolebinding.yaml file to use the namespace the Prometheus instance is going to be installed into:
sed -i 's/${NAMESPACE}/'"my-project"'/g' <resource_path>/console-prometheus-server.clusterrolebinding.yaml
For example, in this procedure we are installing to the my-project namespace. The configuration binds the role for Prometheus with its service account.
Create the Prometheus instance by applying the installation files in this order:
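A sketch of the sequence, assuming the Prometheus files in the installation artifacts keep their default names (verify the names against your extracted <resource_path>):
oc apply -n my-project -f <resource_path>/console-prometheus-server.clusterrole.yaml
oc apply -n my-project -f <resource_path>/console-prometheus-server.serviceaccount.yaml
oc apply -n my-project -f <resource_path>/console-prometheus-server.clusterrolebinding.yaml
oc apply -n my-project -f <resource_path>/console-prometheus.prometheus.yaml
oc apply -n my-project -f <resource_path>/kafka-resources.podmonitor.yaml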
The instance is named console-prometheus and the URL of the service for connecting the console is http://prometheus-operated.my-project.svc.cluster.local:9090, with my-project taken from the namespace name.
Note: No route is deployed for the console-prometheus instance as it does not need to be accessible from outside the OpenShift cluster.
Create and deploy a Kafka cluster.
If you are using the console with a Kafka cluster operating in KRaft mode, update the metrics configuration for the cluster in the console-kafka-metrics.configmap.yaml file:
- Uncomment the KRaft-related metrics configuration.
- Comment out the ZooKeeper-related metrics.
This file contains the metrics configuration required by the console.
Edit the KafkaUser custom resource in the console-kafka-user1.kafkauser.yaml file by adding ACL types to provide authorized access for the console to the Kafka cluster.
At a minimum, the Kafka user requires the following ACL rules:
- Describe, DescribeConfigs permissions for the cluster resource
- Read, Describe, DescribeConfigs permissions for all topic resources
- Read, Describe permissions for all group resources
Example user authorization settings
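A minimal sketch of these settings, assuming SCRAM-SHA-512 authentication and simple authorization for the console-kafka-user1 user created by this procedure (match the authentication type to your listener configuration):
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: console-kafka-user1
  labels:
    strimzi.io/cluster: console-kafka
spec:
  authentication:
    type: scram-sha-512
  authorization:
    type: simple
    acls:
      # Describe, DescribeConfigs permissions for the cluster resource
      - resource:
          type: cluster
        operations:
          - Describe
          - DescribeConfigs
      # Read, Describe, DescribeConfigs permissions for all topic resources
      - resource:
          type: topic
          name: "*"
          patternType: literal
        operations:
          - Read
          - Describe
          - DescribeConfigs
      # Read, Describe permissions for all group resources
      - resource:
          type: group
          name: "*"
          patternType: literal
        operations:
          - Read
          - Describe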
Edit the console-kafka.kafka.yaml file to replace the placeholders:
sed -i 's/type: ${LISTENER_TYPE}/type: route/g' console-kafka.kafka.yaml
sed -i 's/\${CLUSTER_DOMAIN}/'"<my_router_base_domain>"'/g' console-kafka.kafka.yaml
This file contains the Kafka custom resource configuration to create the Kafka cluster.
These commands do the following:
- Replace type: ${LISTENER_TYPE} with type: route. While this example uses a route type, you can replace ${LISTENER_TYPE} with any valid listener type for your deployment.
- Replace ${CLUSTER_DOMAIN} with the value of the base domain required to specify the route listener hosts used by the bootstrap and per-broker services. By default, route listener hosts are automatically assigned by OpenShift. However, you can override the assigned route hosts by specifying hosts.
Alternatively, you can copy the example configuration to your own Kafka deployment.
Create the Kafka cluster by applying the installation files in this order:
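A sketch of the sequence, using the file names referenced in this procedure (the metrics ConfigMap is applied first because the Kafka resource references it):
oc apply -n my-project -f <resource_path>/console-kafka-metrics.configmap.yaml
oc apply -n my-project -f <resource_path>/console-kafka.kafka.yaml
oc apply -n my-project -f <resource_path>/console-kafka-user1.kafkauser.yaml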
If you are using your own Kafka cluster, apply the updated Kafka resource configuration instead of console-kafka.kafka.yaml.
The installation files create a Kafka cluster as well as a Kafka user and the metrics configuration required by the console for connecting to the cluster. A Kafka user and metrics configuration are required for each Kafka cluster you want to monitor through the console. Each Kafka user requires a unique name.
If the Kafka cluster is in a different namespace from your Prometheus instance, modify the kafka-resources.podmonitor.yaml file to include a namespaceSelector. Replace <kafka_namespace> with the actual namespace where your Kafka cluster is deployed. This ensures that Prometheus can monitor the Kafka pods, as shown in the sketch below.
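A minimal sketch of the change, assuming the selector and podMetricsEndpoints in the file stay as provided:
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: kafka-resources
spec:
  namespaceSelector:
    matchNames:
      - <kafka_namespace>  # namespace where the Kafka cluster is deployed
  # ...keep the remaining spec (selector, podMetricsEndpoints) unchanged...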
Check the status of the deployment:
oc get pods -n <my_console_namespace>
Output shows the operators and cluster readiness.
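For example, output for the default deployment might look like this (a sketch assuming a cluster named console-kafka; pod names and IDs will differ):
NAME                                READY   STATUS    RESTARTS
console-kafka-kafka-0               1/1     Running   0
console-kafka-kafka-1               1/1     Running   0
console-kafka-kafka-2               1/1     Running   0
strimzi-cluster-operator-<pod_id>   1/1     Running   0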
Here, console-kafka is the name of the cluster. A pod ID identifies the pods created. With the default deployment, you install three pods. READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running.
Install the Streams for Apache Kafka Console.
Edit the console-server.clusterrolebinding.yaml file to use the namespace the console instance is going to be installed into:
sed -i 's/${NAMESPACE}/'"my-project"'/g' <resource_path>/console-server.clusterrolebinding.yaml
The configuration binds the role for the console with its service account.
Install the console user interface and route to the interface by applying the installation files in this order:
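A sketch of the sequence, assuming the console resource files in the installation artifacts keep their default names:
oc apply -n my-project -f <resource_path>/console-server.clusterrole.yaml
oc apply -n my-project -f <resource_path>/console-server.serviceaccount.yaml
oc apply -n my-project -f <resource_path>/console-server.clusterrolebinding.yaml
oc apply -n my-project -f <resource_path>/console-ui.service.yaml
oc apply -n my-project -f <resource_path>/console-ui.route.yaml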
The install creates the role, role binding, service account, services, and route necessary to run the console user interface.
Create a Secret called console-ui-secrets containing two secret values (as described in the prerequisites) for session management and authentication within the console:
oc create secret generic console-ui-secrets -n my-project \
  --from-literal=SESSION_SECRET="<session_secret_value>" \
  --from-literal=NEXTAUTH_SECRET="<next_secret_value>"
The secrets are mounted as environment variables when the console is deployed.
Get the hostname for the route created for the console user interface:
oc get route console-ui-route -n my-project -o jsonpath='{.spec.host}'
The hostname is required for access to the console user interface.
Edit the console.deployment.yaml file to replace the placeholders:
sed -i 's/${CONSOLE_HOSTNAME}/'"<route_hostname>"'/g' console.deployment.yaml
sed -i 's/${NAMESPACE}/'"my-project"'/g' console.deployment.yaml
These commands do the following:
- Replace https://${CONSOLE_HOSTNAME} with https://<route_hostname>, which is the route used to access the console user interface.
- Replace ${NAMESPACE} with the my-project namespace name in http://prometheus-operated.${NAMESPACE}.svc.cluster.local:9090, which is the URL of the Prometheus instance used by the console.
If you are using your own Kafka cluster, ensure that the correct cluster name is used and other environment variables are configured with the correct values. The values enable the console to connect with the cluster and retrieve metrics.
Install the console:
oc apply -n my-project -f <resource_path>/console.deployment.yaml
Output shows the console readiness
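For example (a sketch; the pod ID will differ, and READY 2/2 assumes the console pod runs the user interface and API containers together):
NAME               READY   STATUS    RESTARTS
console-<pod_id>   2/2     Running   0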
Adding the example configuration to your own Kafka cluster
If you already have a Kafka cluster installed, you can update the Kafka resource with the required configuration. When applying the cluster configuration files, use the updated Kafka resource rather than the Kafka resource provided with the Streams for Apache Kafka Console installation files.
The Kafka resource requires the following configuration:
- A route listener to expose the cluster for console connection.
- Prometheus metrics enabled for retrieving metrics on the cluster. Add the same configuration for ZooKeeper if you are using ZooKeeper for metadata management.
- If the cluster name does not match the cluster name used in the console deployment files (console-kafka), update the deployment files that reference the name of the Kafka cluster, such as console-kafka-user1.kafkauser.yaml.
The Prometheus metrics configuration must reference the ConfigMap that provides the metrics configuration required by the console. The metrics configuration is provided in the console-cluster-metrics.configmap.yaml resource configuration file.
Example Kafka cluster configuration for console connection
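A minimal sketch of the relevant parts of that configuration, assuming a cluster named console-kafka and a metrics ConfigMap named console-cluster-metrics (match the ConfigMap name and keys to the file you apply); the numbered comments correspond to the callouts below:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: console-kafka
spec:
  kafka:
    listeners:
      - name: route1  # (1) route listener for console connection
        port: 9094
        type: route
        tls: true
        authentication:
          type: scram-sha-512
    metricsConfig:  # (2) Prometheus metrics via the JMX exporter
      type: jmxPrometheusExporter
      valueFrom:
        configMapKeyRef:
          name: console-cluster-metrics
          key: kafka-metrics-config.yml
  zookeeper:  # (3) only when using ZooKeeper for cluster management
    metricsConfig:
      type: jmxPrometheusExporter
      valueFrom:
        configMapKeyRef:
          name: console-cluster-metrics
          key: zookeeper-metrics-config.yml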
1. Listener to expose the cluster for console connection. In this example, a route listener is configured.
2. Prometheus metrics, which are enabled by referencing a ConfigMap containing configuration for the Prometheus JMX exporter.
3. Add ZooKeeper configuration only if you are using Streams for Apache Kafka with ZooKeeper for cluster management. It is not required in KRaft mode.
Checking the console deployment environment variables
If you are using your own Kafka cluster, check that the deployment configuration for the console has the required environment variables.
The following prefixes determine the scope of the environment variable values:
- KAFKA represents configuration for all Kafka clusters.
- CONSOLE_KAFKA_<UNIQUE_NAME_ID_FOR_CLUSTER> represents configuration for each specific cluster.
Example console deployment configuration
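A sketch of the environment variables in console.deployment.yaml, assuming the names used in this procedure (CLUSTER1 stands in for the unique cluster ID; verify the exact variable names against the file); the numbered comments correspond to the callouts below:
env:
  - name: KAFKA_SECURITY_PROTOCOL  # (1)
    value: SASL_SSL
  - name: KAFKA_SASL_MECHANISM  # (2)
    value: SCRAM-SHA-512
  - name: CONSOLE_KAFKA_CLUSTER1  # (3)
    value: my-project/console-kafka
  - name: CONSOLE_KAFKA_CLUSTER1_BOOTSTRAP_SERVERS  # (4)
    value: console-kafka-route1-bootstrap-my-project.<cluster_domain>:443
  - name: CONSOLE_KAFKA_CLUSTER1_SASL_JAAS_CONFIG  # (5)
    valueFrom:
      secretKeyRef:
        name: console-kafka-user1
        key: sasl.jaas.config
  - name: NEXTAUTH_SECRET  # (6)
    valueFrom:
      secretKeyRef:
        name: console-ui-secrets
        key: NEXTAUTH_SECRET
  - name: SESSION_SECRET  # (7)
    valueFrom:
      secretKeyRef:
        name: console-ui-secrets
        key: SESSION_SECRET
  - name: NEXTAUTH_URL  # (8)
    value: https://<route_hostname>
  - name: BACKEND_URL  # (9)
    value: http://localhost:8080
  - name: CONSOLE_METRICS_PROMETHEUS_URL  # (10)
    value: http://prometheus-operated.my-project.svc.cluster.local:9090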
1. The security protocol used for communication with Kafka brokers.
2. The SASL mechanism for console (client) authentication to the Kafka brokers.
3. Must match the namespace and the name specified for the cluster in its Kafka resource configuration.
4. The host and port pair of the bootstrap broker address to discover and connect to all brokers in the Kafka cluster. In this example, a route listener address is being used. The listener was configured in the Kafka resource.
5. Authentication credentials for the Kafka user representing the console, mounted as a Secret. The console-kafka-user1 secret is created automatically when the corresponding user is created. The sasl.jaas.config property within the secret contains JAAS configuration for SASL authentication.
6. Secret for authentication within the console.
7. Secret for session management within the console.
8. The URL to connect to the Streams for Apache Kafka user interface and for users to access the console.
9. The backend server that the console user interface communicates with for data retrieval.
10. The URL to connect to the Prometheus instance, which includes the namespace (my-project) of the Kafka resource.
Chapter 4. HOME: Checking connected clusters
The homepage offers a snapshot of connected Kafka clusters, providing the status of brokers and a count of associated consumer groups. As you explore topics, the homepage conveniently presents details about your recent topic views.
To find more information:
- Click on a cluster name to find cluster metrics in the Cluster overview page.
- Click on a recently viewed topic to retrieve details about that particular topic.
Chapter 5. Cluster overview page
The Cluster overview page shows the status of a Kafka cluster. Here, you can assess the readiness of Kafka brokers, identify any cluster errors or warnings, and gain crucial insights into the cluster’s health. At a glance, the page provides information on the number of topics and partitions within the cluster, along with their replication status. Explore cluster metrics through charts displaying used disk space, CPU utilization, and memory usage. Additionally, topic metrics offer a comprehensive view of total incoming and outgoing byte rates for all topics in the Kafka cluster.
5.1. Accessing cluster connection details for client access
When connecting a client to a Kafka cluster, retrieve the necessary connection details from the Cluster overview page by following these steps.
Procedure
- From the Streams for Apache Kafka Console, click the name of the Kafka cluster that you want to connect to, then click Cluster overview and Cluster connection details.
- Copy and add bootstrap address and connection properties to your Kafka client configuration to establish a connection with the Kafka cluster.
Ensure that the authentication type used by the client matches the authentication type configured for the Kafka cluster.
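For example, a sketch of client properties for a SCRAM-SHA-512 listener (the bootstrap address, user name, and password are placeholders; take the real values from the connection details panel and your Kafka user secret):
bootstrap.servers=<bootstrap_address>:443
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="<kafka_user>" \
  password="<password>";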
Chapter 6. Topics page
The Topics page shows all the topics created for a Kafka cluster. Use this page to check information on topics.
The Topics page shows the overall replication status for partitions in the topic, as well as counts for the partitions in the topic and the number of associated consumer groups. The overall storage used by the topic is also shown.
Internal topics must not be modified. You can choose to hide internal topics from the list of topics returned on the Topics page.
By clicking on a topic name, additional topic information is presented on a series of tabs:
- Messages
- Messages shows the message log for a topic.
- Partitions
- Partitions shows the replication status of each partition in a topic.
- Consumer groups
- Consumer groups lists the names and status of the consumer groups and group members connected to a topic.
- Configuration
- Configuration shows the configuration of a topic.
If a topic is shown as Managed, it means that it is managed using the Streams for Apache Kafka Topic Operator and was not created directly in the Kafka cluster.
Use the information provided on the tabs to check and modify the configuration of your topics.
6.1. Checking topic messages
Track the flow of messages for a specific topic from the Messages tab. The Messages tab presents a chronological list of messages for a topic.
Procedure
- From the Streams for Apache Kafka Console, click the name of the Kafka cluster, then click Topics.
- Click the name of the topic you want to check.
Check the information on the Messages tab.
For each message, you can see its timestamp (in UTC), offset, key, and value.
By clicking on a message, you can see the full message details.
Click the Manage columns icon (represented as two columns) to choose the information to display.
Click the search dropdown and select the advanced search options to refine your search.
Choose to display the latest messages or messages from a specified time or offset. You can display messages for all partitions or a specified partition.
When you are done, you can click the CSV icon (represented as a CSV file) to download the information on the returned messages.
Refining your search
In this example, search terms and message, retrieval, and partition options are combined:
messages=timestamp:2024-03-01T00:00:00Z retrieve=50 partition=1 Error on page load where=value
The filter searches for the text "Error on page load" in partition 1 as a message value, starting from March 1, 2024, and retrieves up to 50 messages.
- Search terms
Enter search terms as text (has the words) to find specific matches and define where in a message to look for the term. You can search anywhere in the message or narrow the search to the key, header, or value.
For example:
messages=latest retrieve=100 642-26-1594 where=key
This example searches the latest 100 messages on message key 642-26-1594.
- Message options
Set the starting point for returning messages.
- Latest to start from the latest message:
messages=latest
- Timestamp to start from an exact time and date in ISO 8601 format:
messages=timestamp:2024-03-14T00:00:00Z
- Offset to start from an offset in a partition. In some cases, you may want to specify an offset without a partition. However, the most common scenario is to search by offset within a specific partition:
messages=offset:5600253 partition=0
- Unix Timestamp to start from a time and date in Unix format:
messages=epoch:1
- Retrieval options
Set a retrieval option.
- Number of messages to return a specified number of messages:
messages=latest retrieve=50
- Continuously to return the latest messages in real time. Click the pause button (represented by two vertical lines) to pause the refresh. Unpause to continue the refresh:
retrieve=continuously
- Partition options
- Choose to run a search against all partitions or a specific partition.
6.2. Checking topic partitions
Check the partitions for a specific topic from the Partitions tab. The Partitions tab presents a list of partitions belonging to a topic.
Procedure
- From the Streams for Apache Kafka Console, click the name of the Kafka cluster, then click Topics.
- Click the name of the topic you want to check from the Topics page.
- Check the information on the Partitions tab.
For each partition, you can see its replication status, as well as information on designated partition leaders, replica brokers, and the amount of data stored by the partition.
You can view partitions by replication status:
- In-sync
All partitions in the topic are fully replicated. A partition is fully replicated when its replicas (followers) are 'in-sync' with the designated partition leader. Replicas are 'in-sync' if they have fetched records up to the log end offset of the leader partition within an allowable lag time, as determined by replica.lag.time.max.ms.
- Under-replicated
- A partition is under-replicated if some of its replicas (followers) are not in-sync. An under-replicated status signals potential issues in data replication.
- Offline
- Some or all partitions in the topic are currently unavailable. This may be due to issues such as broker failures or network problems, which need investigating and addressing.
You can also check information on the broker designated as partition leader and the brokers that contain the replicas:
- Leader
- The leader handles all produce requests. Followers on other brokers replicate the leader’s data. A follower is considered in-sync if it catches up with the leader’s latest committed message.
- Preferred leader
- When creating a new topic, Kafka’s leader election algorithm assigns a leader from the list of replicas for each partition. The algorithm aims for a balanced spread of leadership assignments. A "Yes" value indicates the current leader is the preferred leader, suggesting a balanced leadership distribution. A "No" value may suggest imbalances in the leadership assignments, requiring further investigation. If the leadership assignments of partitions are not well-balanced, it can contribute to size discrepancies. A well-balanced Kafka cluster should distribute leadership roles across brokers evenly.
- Replicas
- Followers that replicate the leader’s data. Replicas provide fault tolerance and data availability.
Discrepancies in the distribution of data across brokers may indicate balancing issues in the Kafka cluster. If certain brokers are consistently handling larger amounts of data, it may indicate that partitions are not evenly distributed across the brokers. This could lead to uneven resource utilization and potentially impact the performance of those brokers.
6.3. Checking topic consumer groups
Check the consumer groups for a specific topic from the Consumer groups tab. The Consumer groups tab presents a list of consumer groups associated with a topic.
Procedure
- From the Streams for Apache Kafka Console, click the name of the Kafka cluster, then click Topics.
- Click the name of the topic you want to check from the Topics page.
- Check the information on the Consumer groups tab.
- To check consumer group members, click the consumer group name.
For each consumer group, you can see its status, the overall consumer lag across all partitions, and the number of members. For more information on checking consumer groups, see Chapter 8, Consumer groups page.
For each group member, you see the unique (consumer) client ID assigned to the consumer within the consumer group, overall consumer lag, and the number of assigned partitions. For more information on checking consumer group members, see Section 8.1, “Checking consumer group members”.
Monitoring consumer group behavior is essential for ensuring optimal distribution of messages between consumers.
6.4. Checking topic configuration
Check the configuration of a specific topic from the Configuration tab. The Configuration tab presents a list of configuration values for the topic.
Procedure
- From the Streams for Apache Kafka Console, click the name of the Kafka cluster, then click Topics.
- Click the name of the topic you want to check from the Topics page.
- Check the information on the Configuration tab.
You can filter for the properties you wish to check, including selecting by data source:
- DEFAULT_CONFIG properties have a predefined default value. This value is used when there are no user-defined values for those properties.
- STATIC_BROKER_CONFIG properties have predefined values that apply to the entire broker and, by extension, to all topics managed by that broker. This value is used when there are no user-defined values for those properties.
- DYNAMIC_TOPIC_CONFIG property values have been configured for a specific topic and override the default configuration values.
The Streams for Apache Kafka Topic Operator simplifies the process of creating and managing Kafka topics using KafkaTopic resources.
Chapter 7. Brokers page
The Brokers page shows all the brokers created for a Kafka cluster. For each broker, you can see its status, as well as the distribution of partitions across the brokers, including the number of partition leaders and replicas.
The broker status is shown as one of the following:
- Stable
- A stable broker is operating normally without significant issues.
- Unstable
- An unstable broker may be experiencing issues, such as high resource usage or network problems.
If the broker has a rack ID, this is the ID of the rack or datacenter in which the broker resides.
Click on the right arrow (>) next to a broker name to see more information about the broker, including its hostname and disk usage.
Consider rebalancing if the distribution is uneven to ensure efficient resource utilization.
Chapter 8. Consumer groups page
The Consumer groups page shows all the consumer groups associated with a Kafka cluster. For each consumer group, you can see its status, the overall consumer lag across all partitions, and the number of members. Click on associated topics to show the topic information available from the Topics page tabs.
Consumer group status can be one of the following:
- Stable indicates normal functioning.
- Rebalancing indicates ongoing adjustments to the consumer group’s members.
- Empty suggests no active members. If in the empty state, consider adding members to the group.
Check group members by clicking on a consumer group name. For more information on checking consumer group members, see Section 8.1, “Checking consumer group members”.
8.1. Checking consumer group members
Check the members of a specific consumer group from the Consumer groups page.
Procedure
- From the Streams for Apache Kafka Console, click the name of the Kafka cluster, then click Consumer groups.
- Click the name of the consumer group you want to check from the Consumer groups page.
- Click on the right arrow (>) next to a member ID to see the topic partitions a member is associated with, as well as any possible consumer lag.
For each group member, you see the unique (consumer) client ID assigned to the consumer within the consumer group, overall consumer lag, and the number of assigned partitions.
Any consumer lag for a specific topic partition reflects the gap between the last message a consumer has picked up (committed offset position) and the latest message written by the producer (end offset position). For example, if the committed offset for a partition is 950 and its end offset is 1000, the consumer lag is 50 messages.
Appendix A. Using your subscription
Streams for Apache Kafka is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal.
Accessing Your Account
- Go to access.redhat.com.
- If you do not already have an account, create one.
- Log in to your account.
Activating a Subscription
- Go to access.redhat.com.
- Navigate to My Subscriptions.
- Navigate to Activate a subscription and enter your 16-digit activation number.
Downloading Zip and Tar Files
To access zip or tar files, use the customer portal to find the relevant files for download. If you are using RPM packages, this step is not required.
- Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads.
- Locate the Streams for Apache Kafka entries in the INTEGRATION AND AUTOMATION category.
- Select the desired Streams for Apache Kafka product. The Software Downloads page opens.
- Click the Download link for your component.
Installing packages with DNF
To install a package and all the package dependencies, use:
dnf install <package_name>
To install a previously-downloaded package from a local directory, use:
dnf install <path_to_download_package>
Revised on 2024-07-19 07:42:32 UTC