
Chapter 2. Configuring the logging collector


Logging for Red Hat OpenShift collects operations and application logs from your cluster and enriches the data with Kubernetes pod and project metadata. All supported modifications to the log collector can be performed through the spec.collector stanza in the ClusterLogForwarder custom resource (CR).

2.1. Creating a LogFileMetricExporter resource

You must manually create a LogFileMetricExporter custom resource (CR) to generate metrics from the logs produced by running containers, because it is not deployed with the collector by default.

Note

If you do not create the LogFileMetricExporter CR, you might see a No datapoints found message in the OpenShift Container Platform web console dashboard for the Produced Logs field.

Prerequisites

  • You have administrator permissions.
  • You have installed the Red Hat OpenShift Logging Operator.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Create a LogFileMetricExporter CR as a YAML file:

    Example LogFileMetricExporter CR

    apiVersion: logging.openshift.io/v1alpha1
    kind: LogFileMetricExporter
    metadata:
      name: instance
      namespace: openshift-logging
    spec:
      nodeSelector: {} # 1
      resources: # 2
        limits:
          cpu: 500m
          memory: 256Mi
        requests:
          cpu: 200m
          memory: 128Mi
      tolerations: [] # 3
    # ...

    1 Optional: The nodeSelector stanza defines which nodes the pods are scheduled on.
    2 The resources stanza defines resource requirements for the LogFileMetricExporter CR.
    3 Optional: The tolerations stanza defines the tolerations that the pods accept.
  2. Apply the LogFileMetricExporter CR by running the following command:

    $ oc apply -f <filename>.yaml
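As an illustration, the optional stanzas from the example above might be populated as follows. The node label and taint key shown here are hypothetical values, not defaults:

```yaml
# Hypothetical example values for the optional stanzas; adjust to your cluster.
spec:
  nodeSelector:
    kubernetes.io/os: linux          # schedule exporter pods only on Linux nodes
  tolerations:
  - key: node-role.kubernetes.io/infra   # hypothetical taint key
    operator: Exists
    effect: NoSchedule
```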

Verification

  • Verify that the logfilesmetricexporter pods are running in the namespace where you created the LogFileMetricExporter CR by running the following command:

    $ oc get pods -l app.kubernetes.io/component=logfilesmetricexporter -n openshift-logging

    Example output

    NAME                           READY   STATUS    RESTARTS   AGE
    logfilesmetricexporter-9qbjj   1/1     Running   0          2m46s
    logfilesmetricexporter-cbc4v   1/1     Running   0          2m46s

    A logfilesmetricexporter pod runs concurrently with a collector pod on each node.

2.2. Configuring log collector CPU and memory limits

You can adjust both the CPU and memory limits for the log collector by editing the ClusterLogForwarder custom resource (CR).

Procedure

  • Edit the ClusterLogForwarder CR in the openshift-logging project:

    $ oc -n openshift-logging edit ClusterLogForwarder <clf_name>

    apiVersion: observability.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: <clf_name> # 1
      namespace: openshift-logging
    spec:
      collector:
        resources: # 2
          requests:
            memory: 736Mi
          limits:
            cpu: 100m
            memory: 736Mi
    # ...

    1
    Specify a name for the ClusterLogForwarder CR.
    2
    Specify the CPU and memory limits and requests as needed. The values shown are the default values.
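As a non-interactive alternative, the same change can be applied with a merge patch. This is a sketch, not a documented procedure; <clf_name> is a placeholder and the values mirror the defaults shown above:

```shell
# Sketch: set collector resource requests/limits without opening an editor.
# <clf_name> is a placeholder for your ClusterLogForwarder name.
oc -n openshift-logging patch clusterlogforwarder <clf_name> \
  --type merge \
  -p '{"spec":{"collector":{"resources":{"requests":{"memory":"736Mi"},"limits":{"cpu":"100m","memory":"736Mi"}}}}}'
```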

2.3. Configuring input receivers

The Red Hat OpenShift Logging Operator deploys a service for each configured input receiver so that clients can write to the collector. The service exposes the port specified for the input receiver and is named in the <clusterlogforwarder_resource_name>-<input_name> format.

You can configure your log collector to listen for HTTP connections to only receive audit logs by specifying http as a receiver input in the ClusterLogForwarder custom resource (CR).

Important

HTTP receiver input is only supported for the following scenarios:

  • Logging is installed on hosted control planes.
  • When logs originate from a Red Hat-supported product that is installed on the same cluster as the Red Hat OpenShift Logging Operator. For example:

    • OpenShift Virtualization

Prerequisites

  • You have administrator permissions.
  • You have installed the OpenShift CLI (oc).
  • You have installed the Red Hat OpenShift Logging Operator.

Procedure

  1. Modify the ClusterLogForwarder CR to add configuration for the http receiver input:

    Example ClusterLogForwarder CR

    apiVersion: observability.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: <clusterlogforwarder_name> # 1
      namespace: <namespace>
    # ...
    spec:
      serviceAccount:
        name: <service_account_name>
      inputs:
      - name: http-receiver # 2
        type: receiver
        receiver:
          type: http # 3
          port: 8443 # 4
          http:
            format: kubeAPIAudit # 5
      outputs:
      - name: <output_name>
        type: http
        http:
          url: <url>
      pipelines: # 6
        - name: http-pipeline
          inputRefs:
            - http-receiver
          outputRefs:
            - <output_name>
    # ...

    1
    Specify a name for the ClusterLogForwarder CR.
    2
    Specify a name for your input receiver.
    3
    Specify the input receiver type as http.
    4
    Optional: Specify the port that the input receiver listens on. The value must be between 1024 and 65535. The default value is 8443 if this is not specified.
    5
    Currently, only the kube-apiserver webhook format is supported for http input receivers.
    6
    Configure a pipeline for your input receiver.
  2. Apply the changes to the ClusterLogForwarder CR by running the following command:

    $ oc apply -f <filename>.yaml
  3. Verify that the collector is listening on the service that has a name in the <clusterlogforwarder_resource_name>-<input_name> format by running the following command:

    $ oc get svc

    Example output

    NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)            AGE
    collector                 ClusterIP   172.30.85.239    <none>        24231/TCP          3m6s
    collector-http-receiver   ClusterIP   172.30.205.160   <none>        8443/TCP           3m6s

    In the example, the service name is collector-http-receiver.

Verification

  1. Extract the certificate authority (CA) certificate file by running the following command:

    $ oc extract cm/openshift-service-ca.crt -n <namespace>
    Note

    If the CA in the cluster where the collectors are running changes, you must extract the CA certificate file again.

  2. As an example, send logs by running the following curl command:

    $ curl --cacert <openshift_service_ca.crt> https://collector-http-receiver.<namespace>.svc:8443 -XPOST -d '{"<prefix>":"<message>"}'

    Replace <openshift_service_ca.crt> with the extracted CA certificate file.
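    Because the receiver expects the kubeAPIAudit format, a more realistic test is to post a kube-apiserver audit-style event. The following is a minimal hypothetical sketch; all field values are illustrative, and the placeholders match the step above:

```shell
# Sketch: POST a minimal audit-style JSON event to the HTTP receiver.
# All field values are illustrative, not from a real cluster.
curl --cacert <openshift_service_ca.crt> \
  https://collector-http-receiver.<namespace>.svc:8443 \
  -XPOST -H "Content-Type: application/json" \
  -d '{
        "kind": "Event",
        "apiVersion": "audit.k8s.io/v1",
        "level": "Metadata",
        "auditID": "00000000-test",
        "stage": "ResponseComplete",
        "verb": "get",
        "requestURI": "/api/v1/namespaces",
        "stageTimestamp": "2024-01-01T00:00:00Z"
      }'
```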

You can configure your log collector to collect journal format infrastructure logs by specifying syslog as a receiver input in the ClusterLogForwarder custom resource (CR).

Important

Syslog receiver input is only supported for the following scenarios:

  • Logging is installed on hosted control planes.
  • When logs originate from a Red Hat-supported product that is installed on the same cluster as the Red Hat OpenShift Logging Operator. For example:

    • Red Hat OpenStack Services on OpenShift (RHOSO)
    • OpenShift Virtualization

Prerequisites

  • You have administrator permissions.
  • You have installed the OpenShift CLI (oc).
  • You have installed the Red Hat OpenShift Logging Operator.

Procedure

  1. Grant the collect-infrastructure-logs cluster role to the service account by running the following command:

    Example binding command

    $ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z logcollector

  2. Modify the ClusterLogForwarder CR to add configuration for the syslog receiver input:

    Example ClusterLogForwarder CR

    apiVersion: observability.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: <clusterlogforwarder_name> # 1
      namespace: <namespace>
    # ...
    spec:
      serviceAccount:
        name: <service_account_name> # 2
      inputs:
        - name: syslog-receiver # 3
          type: receiver
          receiver:
            type: syslog # 4
            port: 10514 # 5
      outputs:
      - name: <output_name>
        lokiStack:
          authentication:
            token:
              from: serviceAccount
          target:
            name: logging-loki
            namespace: openshift-logging
        tls: # 6
          ca:
            key: service-ca.crt
            configMapName: openshift-service-ca.crt
        type: lokiStack
    # ...
      pipelines: # 7
        - name: syslog-pipeline
          inputRefs:
            - syslog-receiver
          outputRefs:
            - <output_name>
    # ...

    1 2
    Use the service account that you granted the collect-infrastructure-logs permission in the previous step.
    3
    Specify a name for your input receiver.
    4
    Specify the input receiver type as syslog.
    5
    Optional: Specify the port that the input receiver listens on. The value must be between 1024 and 65535.
    6
    If TLS configuration is not set, the default certificates are used. For more information, run the oc explain clusterlogforwarders.spec.inputs.receiver.tls command.
    7
    Configure a pipeline for your input receiver.
  3. Apply the changes to the ClusterLogForwarder CR by running the following command:

    $ oc apply -f <filename>.yaml
  4. Verify that the collector is listening on the service that has a name in the <clusterlogforwarder_resource_name>-<input_name> format by running the following command:

    $ oc get svc

    Example output

    NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)            AGE
    collector                   ClusterIP   172.30.85.239    <none>        24231/TCP          33m
    collector-syslog-receiver   ClusterIP   172.30.216.142   <none>        10514/TCP          2m20s

    In this example output, the service name is collector-syslog-receiver.

Verification

  1. Extract the certificate authority (CA) certificate file by running the following command:

    $ oc extract cm/openshift-service-ca.crt -n <namespace>
    Note

    If the CA in the cluster where the collectors are running changes, you must extract the CA certificate file again.

  2. As an example, send logs by running the following curl command:

    $ curl --cacert <openshift_service_ca.crt> collector-syslog-receiver.<namespace>.svc:10514 "test message"

    Replace <openshift_service_ca.crt> with the extracted CA certificate file.
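    Alternatively, the logger utility from util-linux can send a test message to a syslog endpoint. This is a sketch assuming the receiver accepts plain TCP without client TLS; the service name and namespace placeholders match the example above:

```shell
# Sketch: send a test syslog message over TCP with util-linux logger.
# Assumes plain TCP is accepted; adjust if TLS is enforced on the receiver.
logger --server collector-syslog-receiver.<namespace>.svc \
       --port 10514 \
       --tcp \
       "test message"
```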
