Chapter 8. Visualizing logs

8.1. About log visualization

You can visualize your log data in the OpenShift Container Platform web console or the Kibana web console, depending on your deployed log storage solution. The Kibana console can be used with Elasticsearch log stores, and the OpenShift Container Platform web console can be used with the Elasticsearch log store or the LokiStack.

Note

The Kibana web console is now deprecated and is planned to be removed in a future logging release.

8.1.1. Configuring the log visualizer

You can configure which log visualizer type your logging uses by modifying the ClusterLogging custom resource (CR).

Prerequisites

  • You have administrator permissions.
  • You have installed the OpenShift CLI (oc).
  • You have installed the Red Hat OpenShift Logging Operator.
  • You have created a ClusterLogging CR.
Important

If you want to use the OpenShift Container Platform web console for visualization, you must enable the logging Console Plugin. See the documentation about "Log visualization with the web console".

Procedure

  1. Modify the ClusterLogging CR visualization spec:

    ClusterLogging CR example

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogging
    metadata:
    # ...
    spec:
    # ...
      visualization:
        type: <visualizer_type> 1
        kibana: 2
          resources: {}
          nodeSelector: {}
          proxy: {}
          replicas: {}
          tolerations: {}
        ocpConsole: 3
          logsLimit: {}
          timeout: {}
    # ...

    1
    The type of visualizer you want to use for your logging. This can be either kibana or ocp-console. The Kibana console is only compatible with deployments that use Elasticsearch log storage, while the OpenShift Container Platform console is only compatible with LokiStack deployments.
    2
    Optional configurations for the Kibana console.
    3
    Optional configurations for the OpenShift Container Platform web console.
  2. Apply the ClusterLogging CR by running the following command:

    $ oc apply -f <filename>.yaml
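
    For example, the following filled-in CR selects the OpenShift Container Platform web console as the visualizer. This example is illustrative only: the logsLimit and timeout values are sample values, not defaults, and the CR name instance and the openshift-logging namespace follow the convention used elsewhere in this chapter.

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogging
    metadata:
      name: instance
      namespace: openshift-logging
    spec:
      visualization:
        type: ocp-console
        ocpConsole:
          logsLimit: 50
          timeout: 30s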

8.1.2. Viewing logs for a resource

Resource logs are a default feature that provides limited log viewing capability. You can view the logs for various resources, such as builds, deployments, and pods by using the OpenShift CLI (oc) and the web console.

Tip

To enhance your log retrieval and viewing experience, install the logging. The logging aggregates all the logs from your OpenShift Container Platform cluster, such as node system audit logs, application container logs, and infrastructure logs, into a dedicated log store. You can then query, discover, and visualize your log data through the Kibana console or the OpenShift Container Platform web console. Resource logs do not access the logging log store.

8.1.2.1. Viewing resource logs

You can view the log for various resources by using the OpenShift CLI (oc) and the web console. Logs are read from the tail, or end, of the log.

Prerequisites

  • Access to the OpenShift CLI (oc).

Procedure (UI)

  1. In the OpenShift Container Platform console, navigate to Workloads → Pods, or navigate to the pod through the resource you want to investigate.

    Note

    Some resources, such as builds, do not have pods to query directly. In such instances, you can locate the Logs link on the Details page for the resource.

  2. Select a project from the drop-down menu.
  3. Click the name of the pod you want to investigate.
  4. Click Logs.

Procedure (CLI)

  • View the log for a specific pod:

    $ oc logs -f <pod_name> -c <container_name>

    where:

    -f
    Optional: Specifies that the output follows what is being written into the logs.
    <pod_name>
    Specifies the name of the pod.
    <container_name>
    Optional: Specifies the name of a container. When a pod has more than one container, you must specify the container name.

    For example:

    $ oc logs ruby-58cd97df55-mww7r
    $ oc logs -f ruby-57f7f4855b-znl92 -c ruby

    The contents of log files are printed out.

  • View the log for a specific resource:

    $ oc logs <object_type>/<resource_name> 1
    1
    Specifies the resource type and name.

    For example:

    $ oc logs deployment/ruby

    The contents of log files are printed out.
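
  • View the log for a build or a build configuration. The resource names in the following commands are illustrative; the bc/ form streams the log of the latest build for that build configuration:

    $ oc logs build/ruby-sample-build-1
    $ oc logs bc/ruby-sample-build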

8.2. Log visualization with the web console

You can use the OpenShift Container Platform web console to visualize log data by configuring the logging Console Plugin. Options for configuration are available during installation of logging on the web console.

If you have already installed logging and want to configure the plugin, use one of the following procedures.

8.2.1. Enabling the logging Console Plugin after you have installed the Red Hat OpenShift Logging Operator

You can enable the logging Console Plugin as part of the Red Hat OpenShift Logging Operator installation, but you can also enable the plugin if you have already installed the Red Hat OpenShift Logging Operator with the plugin disabled.

Prerequisites

  • You have administrator permissions.
  • You have installed the Red Hat OpenShift Logging Operator and selected Disabled for the Console plugin.
  • You have access to the OpenShift Container Platform web console.

Procedure

  1. In the OpenShift Container Platform web console Administrator perspective, navigate to Operators → Installed Operators.
  2. Click Red Hat OpenShift Logging. This takes you to the Operator Details page.
  3. On the Details page, click Disabled for the Console plugin option.
  4. In the Console plugin enablement dialog, select Enable.
  5. Click Save.
  6. Verify that the Console plugin option now shows Enabled.
  7. When the changes have been applied, the web console displays a pop-up window that prompts you to reload the web console. Refresh the browser to apply the changes.
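
If you prefer the command line, you can confirm that the plugin is listed in the console Operator configuration. The following check is an optional convenience, not a required step:

    $ oc get consoles.operator.openshift.io cluster -o jsonpath='{.spec.plugins}'

The output should include logging-view-plugin.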

8.2.2. Configuring the logging Console Plugin when you have the Elasticsearch log store and LokiStack installed

In logging version 5.8 and later, if the Elasticsearch log store is your default log store but you have also installed the LokiStack, you can enable the logging Console Plugin by using the following procedure.

Prerequisites

  • You have administrator permissions.
  • You have installed the Red Hat OpenShift Logging Operator, the OpenShift Elasticsearch Operator, and the Loki Operator.
  • You have installed the OpenShift CLI (oc).
  • You have created a ClusterLogging custom resource (CR).

Procedure

  1. Ensure that the logging Console Plugin is enabled by running the following command:

    $ oc get consoles.operator.openshift.io cluster -o yaml | grep logging-view-plugin \
    || oc patch consoles.operator.openshift.io cluster --type=merge \
    --patch '{ "spec": { "plugins": ["logging-view-plugin"]}}'
  2. Add the .metadata.annotations.logging.openshift.io/ocp-console-migration-target: lokistack-dev annotation to the ClusterLogging CR by running the following command:

    $ oc patch clusterlogging instance --type=merge --patch \
    '{ "metadata": { "annotations": { "logging.openshift.io/ocp-console-migration-target": "lokistack-dev" }}}' \
    -n openshift-logging

    Example output

    clusterlogging.logging.openshift.io/instance patched

Verification

  • Verify that the annotation was added successfully by running the following command and observing the output:

    $ oc get clusterlogging instance \
    -o=jsonpath='{.metadata.annotations.logging\.openshift\.io/ocp-console-migration-target}' \
    -n openshift-logging

    Example output

    "lokistack-dev"

The logging Console Plugin pod is now deployed. You can view logging data by navigating to the OpenShift Container Platform web console and viewing the Observe → Logs page.
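
As an optional check, you can confirm that the plugin pod is running in the openshift-logging namespace, for example:

    $ oc get pods -n openshift-logging | grep logging-view-plugin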

8.3. Viewing cluster dashboards

The Logging/Elasticsearch Nodes and OpenShift Logging dashboards in the OpenShift Container Platform web console contain in-depth details about your Elasticsearch instance and the individual Elasticsearch nodes that you can use to prevent and diagnose problems.

The OpenShift Logging dashboard contains charts that show details about your Elasticsearch instance at a cluster level, including cluster resources, garbage collection, shards in the cluster, and Fluentd statistics.

The Logging/Elasticsearch Nodes dashboard contains charts that show details about your Elasticsearch instance, many at node level, including details on indexing, shards, resources, and so forth.

8.3.1. Accessing the Elasticsearch and OpenShift Logging dashboards

You can view the Logging/Elasticsearch Nodes and OpenShift Logging dashboards in the OpenShift Container Platform web console.

Procedure

To launch the dashboards:

  1. In the OpenShift Container Platform web console, click Observe → Dashboards.
  2. On the Dashboards page, select Logging/Elasticsearch Nodes or OpenShift Logging from the Dashboard menu.

    For the Logging/Elasticsearch Nodes dashboard, you can select the Elasticsearch node you want to view and set the data resolution.

    The appropriate dashboard is displayed, showing multiple charts of data.

  3. Optional: Select a different time range to display or refresh rate for the data from the Time Range and Refresh Interval menus.

For information on the dashboard charts, see About the OpenShift Logging dashboard and About the Logging/Elasticsearch Nodes dashboard.

8.3.2. About the OpenShift Logging dashboard

The OpenShift Logging dashboard contains charts that show details about your Elasticsearch instance at a cluster-level that you can use to diagnose and anticipate problems.

Table 8.1. OpenShift Logging charts
MetricDescription

Elastic Cluster Status

The current Elasticsearch status:

  • ONLINE - Indicates that the Elasticsearch instance is online.
  • OFFLINE - Indicates that the Elasticsearch instance is offline.

Elastic Nodes

The total number of Elasticsearch nodes in the Elasticsearch instance.

Elastic Shards

The total number of Elasticsearch shards in the Elasticsearch instance.

Elastic Documents

The total number of Elasticsearch documents in the Elasticsearch instance.

Total Index Size on Disk

The total disk space that is being used for the Elasticsearch indices.

Elastic Pending Tasks

The total number of Elasticsearch changes that have not been completed, such as index creation, index mapping, shard allocation, or shard failure.

Elastic JVM GC time

The amount of time that the JVM spent executing Elasticsearch garbage collection operations in the cluster.

Elastic JVM GC Rate

The total number of times that the JVM executed garbage collection activities per second.

Elastic Query/Fetch Latency Sum

  • Query latency: The average time each Elasticsearch search query takes to execute.
  • Fetch latency: The average time each Elasticsearch search query spends fetching data.

Fetch latency typically takes less time than query latency. If fetch latency is consistently increasing, it might indicate slow disks, data enrichment, or large requests with too many results.

Elastic Query Rate

The total queries executed against the Elasticsearch instance per second for each Elasticsearch node.

CPU

The amount of CPU used by Elasticsearch, Fluentd, and Kibana, shown for each component.

Elastic JVM Heap Used

The amount of JVM memory used. In a healthy cluster, the graph shows regular drops as memory is freed by JVM garbage collection.

Elasticsearch Disk Usage

The total disk space used by the Elasticsearch instance for each Elasticsearch node.

File Descriptors In Use

The total number of file descriptors used by Elasticsearch, Fluentd, and Kibana.

FluentD emit count

The total number of Fluentd messages per second for the Fluentd default output, and the retry count for the default output.

FluentD Buffer Usage

The percent of the Fluentd buffer that is being used for chunks. A full buffer might indicate that Fluentd is not able to process the number of logs received.

Elastic rx bytes

The total number of bytes that Elasticsearch has received from FluentD, the Elasticsearch nodes, and other sources.

Elastic Index Failure Rate

The total number of times per second that an Elasticsearch index fails. A high rate might indicate an issue with indexing.

FluentD Output Error Rate

The total number of times per second that FluentD is not able to output logs.

8.3.3. Charts on the Logging/Elasticsearch nodes dashboard

The Logging/Elasticsearch Nodes dashboard contains charts that show details about your Elasticsearch instance, many at node-level, for further diagnostics.

Elasticsearch status
The Logging/Elasticsearch Nodes dashboard contains the following charts about the status of your Elasticsearch instance.
Table 8.2. Elasticsearch status fields
MetricDescription

Cluster status

The cluster health status during the selected time period, using the Elasticsearch green, yellow, and red statuses:

  • 0 - Indicates that the Elasticsearch instance is in green status, which means that all shards are allocated.
  • 1 - Indicates that the Elasticsearch instance is in yellow status, which means that replica shards for at least one shard are not allocated.
  • 2 - Indicates that the Elasticsearch instance is in red status, which means that at least one primary shard and its replicas are not allocated.

Cluster nodes

The total number of Elasticsearch nodes in the cluster.

Cluster data nodes

The number of Elasticsearch data nodes in the cluster.

Cluster pending tasks

The number of cluster state changes that are not finished and are waiting in a cluster queue, for example, index creation, index deletion, or shard allocation. A growing trend indicates that the cluster is not able to keep up with changes.

Elasticsearch cluster index shard status
Each Elasticsearch index is a logical group of one or more shards, which are basic units of persisted data. There are two types of index shards: primary shards, and replica shards. When a document is indexed into an index, it is stored in one of its primary shards and copied into every replica of that shard. The number of primary shards is specified when the index is created, and the number cannot change during index lifetime. You can change the number of replica shards at any time.

The index shard can be in several states depending on its lifecycle phase or events occurring in the cluster. When the shard is able to perform search and indexing requests, the shard is active. If the shard cannot perform these requests, the shard is non-active. A shard might be non-active if the shard is initializing, reallocating, unassigned, and so forth.

Index shards consist of a number of smaller internal blocks, called index segments, which are physical representations of the data. An index segment is a relatively small, immutable Lucene index that is created when Lucene commits newly-indexed data. Lucene, a search library used by Elasticsearch, merges index segments into larger segments in the background to keep the total number of segments low. If the process of merging segments is slower than the speed at which new segments are created, it could indicate a problem.

When Lucene performs data operations, such as a search operation, Lucene performs the operation against the index segments in the relevant index. For that purpose, each segment contains specific data structures that are loaded in the memory and mapped. Index mapping can have a significant impact on the memory used by segment data structures.
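
If you want to inspect shard allocation directly rather than through the dashboard, you can query the Elasticsearch _cat API from inside an Elasticsearch pod. The following command is a sketch: it assumes the es_util helper that is included in the Red Hat Elasticsearch pods, and you must substitute a pod name from your own cluster:

    $ oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- \
      es_util --query=_cat/shards?v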

The Logging/Elasticsearch Nodes dashboard contains the following charts about the Elasticsearch index shards.

Table 8.3. Elasticsearch cluster shard status charts
MetricDescription

Cluster active shards

The number of active primary shards and the total number of shards, including replicas, in the cluster. If the number of shards grows higher, the cluster performance can start degrading.

Cluster initializing shards

The number of non-active shards in the cluster. A non-active shard is one that is initializing, being reallocated to a different node, or is unassigned. A cluster typically has non-active shards for short periods. A growing number of non-active shards over longer periods could indicate a problem.

Cluster relocating shards

The number of shards that Elasticsearch is relocating to a new node. Elasticsearch relocates shards for multiple reasons, such as high memory use on a node or after a new node is added to the cluster.

Cluster unassigned shards

The number of unassigned shards. Elasticsearch shards might be unassigned for reasons such as a new index being added or the failure of a node.

Elasticsearch node metrics
Each Elasticsearch node has a finite amount of resources that can be used to process tasks. When all the resources are being used and Elasticsearch attempts to perform a new task, Elasticsearch puts the tasks into a queue until some resources become available.
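
When the ThreadPool tasks chart described below shows a backlog, you can also inspect the queues directly from an Elasticsearch pod. This is a sketch that assumes the es_util helper included in the Red Hat Elasticsearch pods:

    $ oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- \
      es_util --query=_cat/thread_pool?v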

The Logging/Elasticsearch Nodes dashboard contains the following charts about resource usage for a selected node and the number of tasks waiting in the Elasticsearch queue.

Table 8.4. Elasticsearch node metric charts
MetricDescription

ThreadPool tasks

The number of waiting tasks in individual queues, shown by task type. A long-term accumulation of tasks in any queue could indicate node resource shortages or some other problem.

CPU usage

The amount of CPU being used by the selected Elasticsearch node as a percentage of the total CPU allocated to the host container.

Memory usage

The amount of memory being used by the selected Elasticsearch node.

Disk usage

The total disk space being used for index data and metadata on the selected Elasticsearch node.

Documents indexing rate

The rate that documents are indexed on the selected Elasticsearch node.

Indexing latency

The time taken to index the documents on the selected Elasticsearch node. Indexing latency can be affected by many factors, such as JVM Heap memory and overall load. A growing latency indicates a resource capacity shortage in the instance.

Search rate

The number of search requests run on the selected Elasticsearch node.

Search latency

The time taken to complete search requests on the selected Elasticsearch node. Search latency can be affected by many factors. A growing latency indicates a resource capacity shortage in the instance.

Documents count (with replicas)

The number of Elasticsearch documents stored on the selected Elasticsearch node, including documents stored in both the primary shards and replica shards that are allocated on the node.

Documents deleting rate

The number of Elasticsearch documents being deleted from any of the index shards that are allocated to the selected Elasticsearch node.

Documents merging rate

The number of Elasticsearch documents being merged in any of the index shards that are allocated to the selected Elasticsearch node.

Elasticsearch node fielddata
Fielddata is an Elasticsearch data structure that holds lists of terms in an index and is kept in the JVM Heap. Because fielddata building is an expensive operation, Elasticsearch caches the fielddata structures. Elasticsearch can evict a fielddata cache when the underlying index segment is deleted or merged, or if there is not enough JVM Heap memory for all the fielddata caches.

The Logging/Elasticsearch Nodes dashboard contains the following charts about Elasticsearch fielddata.

Table 8.5. Elasticsearch node fielddata charts
MetricDescription

Fielddata memory size

The amount of JVM Heap used for the fielddata cache on the selected Elasticsearch node.

Fielddata evictions

The number of fielddata structures that were deleted from the selected Elasticsearch node.

Elasticsearch node query cache
If the data stored in the index does not change, search query results are cached in a node-level query cache for reuse by Elasticsearch.

The Logging/Elasticsearch Nodes dashboard contains the following charts about the Elasticsearch node query cache.

Table 8.6. Elasticsearch node query charts
MetricDescription

Query cache size

The total amount of memory used for the query cache for all the shards allocated to the selected Elasticsearch node.

Query cache evictions

The number of query cache evictions on the selected Elasticsearch node.

Query cache hits

The number of query cache hits on the selected Elasticsearch node.

Query cache misses

The number of query cache misses on the selected Elasticsearch node.

Elasticsearch index throttling
When indexing documents, Elasticsearch stores the documents in index segments, which are physical representations of the data. At the same time, Elasticsearch periodically merges smaller segments into a larger segment as a way to optimize resource use. If indexing is faster than the ability to merge segments, the merge process does not complete quickly enough, which can lead to issues with searches and performance. To prevent this situation, Elasticsearch throttles indexing, typically by reducing the number of threads allocated to indexing down to a single thread.

The Logging/Elasticsearch Nodes dashboard contains the following charts about Elasticsearch index throttling.

Table 8.7. Index throttling charts
MetricDescription

Indexing throttling

The amount of time that Elasticsearch has been throttling the indexing operations on the selected Elasticsearch node.

Merging throttling

The amount of time that Elasticsearch has been throttling the segment merge operations on the selected Elasticsearch node.

Node JVM Heap statistics
The Logging/Elasticsearch Nodes dashboard contains the following charts about JVM Heap operations.
Table 8.8. JVM Heap statistic charts
MetricDescription

Heap used

The amount of the total allocated JVM Heap space that is used on the selected Elasticsearch node.

GC count

The number of garbage collection operations that have been run on the selected Elasticsearch node, by old and young garbage collection.

GC time

The amount of time that the JVM spent running garbage collection operations on the selected Elasticsearch node, by old and young garbage collection.

8.4. Log visualization with Kibana

If you are using the Elasticsearch log store, you can use the Kibana console to visualize collected log data.

Using Kibana, you can do the following with your data:

  • Search and browse the data using the Discover tab.
  • Chart and map the data using the Visualize tab.
  • Create and view custom dashboards using the Dashboard tab.

Use and configuration of the Kibana interface is beyond the scope of this documentation. For more information about using the interface, see the Kibana documentation.

Note

The audit logs are not stored in the internal OpenShift Container Platform Elasticsearch instance by default. To view the audit logs in Kibana, you must use the Log Forwarding API to configure a pipeline that uses the default output for audit logs.
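
For reference, a minimal ClusterLogForwarder pipeline that forwards audit logs to the default output might look like the following sketch; the pipeline name is arbitrary:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: instance
      namespace: openshift-logging
    spec:
      pipelines:
      - name: audit-to-default
        inputRefs:
        - audit
        outputRefs:
        - default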

8.4.1. Defining Kibana index patterns

An index pattern defines the Elasticsearch indices that you want to visualize. To explore and visualize data in Kibana, you must create an index pattern.

Prerequisites

  • A user must have the cluster-admin role, the cluster-reader role, or both roles to view the infra and audit indices in Kibana. The default kubeadmin user has proper permissions to view these indices.

    If you can view the pods and logs in the default, kube- and openshift- projects, you should be able to access these indices. You can use the following command to check if the current user has appropriate permissions:

    $ oc auth can-i get pods --subresource log -n <project>

    Example output

    yes

    Note

    The audit logs are not stored in the internal OpenShift Container Platform Elasticsearch instance by default. To view the audit logs in Kibana, you must use the Log Forwarding API to configure a pipeline that uses the default output for audit logs.

  • Elasticsearch documents must be indexed before you can create index patterns. This is done automatically, but it might take a few minutes in a new or updated cluster.
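
    You can optionally confirm that the indices exist before you continue by listing them from an Elasticsearch pod. The following command assumes the indices helper script that is included in the Red Hat Elasticsearch pods:

    $ oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- indices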

Procedure

To define index patterns and create visualizations in Kibana:

  1. In the OpenShift Container Platform console, click the Application Launcher app launcher and select Logging.
  2. Create your Kibana index patterns by clicking Management → Index Patterns → Create index pattern:

    • Each user must manually create index patterns when logging in to Kibana for the first time to see logs for their projects. Users must create an index pattern named app and use the @timestamp time field to view their container logs.
    • Each admin user must create index patterns for the app, infra, and audit indices, using the @timestamp time field, when logged in to Kibana for the first time.
  3. Create Kibana Visualizations from the new index patterns.

8.4.2. Viewing cluster logs in Kibana

You can view cluster logs in the Kibana web console. The methods for viewing and visualizing your data in Kibana are beyond the scope of this documentation. For more information, refer to the Kibana documentation.

Prerequisites

  • The Red Hat OpenShift Logging and Elasticsearch Operators must be installed.
  • Kibana index patterns must exist.
  • A user must have the cluster-admin role, the cluster-reader role, or both roles to view the infra and audit indices in Kibana. The default kubeadmin user has proper permissions to view these indices.

    If you can view the pods and logs in the default, kube- and openshift- projects, you should be able to access these indices. You can use the following command to check if the current user has appropriate permissions:

    $ oc auth can-i get pods --subresource log -n <project>

    Example output

    yes

    Note

    The audit logs are not stored in the internal OpenShift Container Platform Elasticsearch instance by default. To view the audit logs in Kibana, you must use the Log Forwarding API to configure a pipeline that uses the default output for audit logs.

Procedure

To view logs in Kibana:

  1. In the OpenShift Container Platform console, click the Application Launcher app launcher and select Logging.
  2. Log in using the same credentials you use to log in to the OpenShift Container Platform console.

    The Kibana interface launches.

  3. In Kibana, click Discover.
  4. Select the index pattern you created from the drop-down menu in the top-left corner: app, audit, or infra.

    The log data displays as time-stamped documents.

  5. Expand one of the time-stamped documents.
  6. Click the JSON tab to display the log entry for that document.

    Example 8.1. Sample infrastructure log entry in Kibana

    {
      "_index": "infra-000001",
      "_type": "_doc",
      "_id": "YmJmYTBlNDkZTRmLTliMGQtMjE3NmFiOGUyOWM3",
      "_version": 1,
      "_score": null,
      "_source": {
        "docker": {
          "container_id": "f85fa55bbef7bb783f041066be1e7c267a6b88c4603dfce213e32c1"
        },
        "kubernetes": {
          "container_name": "registry-server",
          "namespace_name": "openshift-marketplace",
          "pod_name": "redhat-marketplace-n64gc",
          "container_image": "registry.redhat.io/redhat/redhat-marketplace-index:v4.7",
          "container_image_id": "registry.redhat.io/redhat/redhat-marketplace-index@sha256:65fc0c45aabb95809e376feb065771ecda9e5e59cc8b3024c4545c168f",
          "pod_id": "8f594ea2-c866-4b5c-a1c8-a50756704b2a",
          "host": "ip-10-0-182-28.us-east-2.compute.internal",
          "master_url": "https://kubernetes.default.svc",
          "namespace_id": "3abab127-7669-4eb3-b9ef-44c04ad68d38",
          "namespace_labels": {
            "openshift_io/cluster-monitoring": "true"
          },
          "flat_labels": [
            "catalogsource_operators_coreos_com/update=redhat-marketplace"
          ]
        },
        "message": "time=\"2020-09-23T20:47:03Z\" level=info msg=\"serving registry\" database=/database/index.db port=50051",
        "level": "unknown",
        "hostname": "ip-10-0-182-28.internal",
        "pipeline_metadata": {
          "collector": {
            "ipaddr4": "10.0.182.28",
            "inputname": "fluent-plugin-systemd",
            "name": "fluentd",
            "received_at": "2020-09-23T20:47:15.007583+00:00",
            "version": "1.7.4 1.6.0"
          }
        },
        "@timestamp": "2020-09-23T20:47:03.422465+00:00",
        "viaq_msg_id": "YmJmYTBlNDktMDMGQtMjE3NmFiOGUyOWM3",
        "openshift": {
          "labels": {
            "logging": "infra"
          }
        }
      },
      "fields": {
        "@timestamp": [
          "2020-09-23T20:47:03.422Z"
        ],
        "pipeline_metadata.collector.received_at": [
          "2020-09-23T20:47:15.007Z"
        ]
      },
      "sort": [
        1600894023422
      ]
    }

8.4.3. Configuring Kibana

You can configure the Kibana console by modifying the ClusterLogging custom resource (CR).

8.4.3.1. Configuring CPU and memory limits

The logging components allow for adjustments to both the CPU and memory limits.

Procedure

  1. Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

    $ oc -n openshift-logging edit ClusterLogging instance
    apiVersion: "logging.openshift.io/v1"
    kind: "ClusterLogging"
    metadata:
      name: "instance"
      namespace: openshift-logging
    
    ...
    
    spec:
      managementState: "Managed"
      logStore:
        type: "elasticsearch"
        elasticsearch:
          nodeCount: 3
          resources: 1
            limits:
              memory: 16Gi
            requests:
              cpu: 200m
              memory: 16Gi
          storage:
            storageClassName: "gp2"
            size: "200G"
          redundancyPolicy: "SingleRedundancy"
      visualization:
        type: "kibana"
        kibana:
          resources: 2
            limits:
              memory: 1Gi
            requests:
              cpu: 500m
              memory: 1Gi
          proxy:
            resources: 3
              limits:
                memory: 100Mi
              requests:
                cpu: 100m
                memory: 100Mi
          replicas: 2
      collection:
        resources: 4
          limits:
            memory: 736Mi
          requests:
            cpu: 200m
            memory: 736Mi
        type: fluentd
    1
    Specify the CPU and memory limits and requests for the log store as needed. For Elasticsearch, you must adjust both the request value and the limit value.
    2 3
    Specify the CPU and memory limits and requests for the log visualizer as needed.
    4
    Specify the CPU and memory limits and requests for the log collector as needed.
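
After you save the CR, the Red Hat OpenShift Logging Operator redeploys the affected components. You can confirm the new values on a Kibana pod, for example, by using a pod name from your own cluster:

    $ oc -n openshift-logging get pods | grep kibana
    $ oc -n openshift-logging describe pod <kibana_pod_name>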

8.4.3.2. Scaling redundancy for the log visualizer nodes

You can scale the pod that hosts the log visualizer for redundancy.

Procedure

  1. Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

    $ oc edit ClusterLogging instance
    
    apiVersion: "logging.openshift.io/v1"
    kind: "ClusterLogging"
    metadata:
      name: "instance"
    
    ...
    
    spec:
        visualization:
          type: "kibana"
          kibana:
            replicas: 1 1
    1
    Specify the number of Kibana nodes.
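
You can verify the change by checking the ready replica count of the Kibana deployment, assuming the deployment is named kibana, as in a default installation:

    $ oc -n openshift-logging get deployment kibana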