
Chapter 2. Configuring the logging collector


Logging for Red Hat OpenShift collects operations and application logs from your cluster and enriches the data with Kubernetes pod and project metadata. All supported modifications to the log collector can be performed through the spec.collection stanza in the ClusterLogForwarder custom resource (CR).

2.1. Creating a LogFileMetricExporter resource

You must manually create a LogFileMetricExporter custom resource (CR) to generate metrics from the logs produced by running containers, because it is not deployed with the collector by default.

Note

If you do not create the LogFileMetricExporter CR, you might see a No datapoints found message in the OpenShift Container Platform web console dashboard for the Produced Logs field.

Prerequisites

  • You have administrator permissions.
  • You have installed the Red Hat OpenShift Logging Operator.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Create a LogFileMetricExporter CR as a YAML file:

    Example LogFileMetricExporter CR

    apiVersion: logging.openshift.io/v1alpha1
    kind: LogFileMetricExporter
    metadata:
      name: instance
      namespace: openshift-logging
    spec:
      nodeSelector: {} # 1
      resources: # 2
        limits:
          cpu: 500m
          memory: 256Mi
        requests:
          cpu: 200m
          memory: 128Mi
      tolerations: [] # 3
    # ...

    1. Optional: The nodeSelector stanza defines which nodes the pods are scheduled on.
    2. The resources stanza defines resource requirements for the LogFileMetricExporter CR.
    3. Optional: The tolerations stanza defines the tolerations that the pods accept.
  2. Apply the LogFileMetricExporter CR by running the following command:

    $ oc apply -f <filename>.yaml

Verification

  • Verify that the logfilesmetricexporter pods are running in the namespace where you created the LogFileMetricExporter CR by running the following command and observing the output:

    $ oc get pods -l app.kubernetes.io/component=logfilesmetricexporter -n openshift-logging

    Example output

    NAME                           READY   STATUS    RESTARTS   AGE
    logfilesmetricexporter-9qbjj   1/1     Running   0          2m46s
    logfilesmetricexporter-cbc4v   1/1     Running   0          2m46s

    A logfilesmetricexporter pod runs concurrently with a collector pod on each node.
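As an illustration, the empty nodeSelector and tolerations stanzas from the example CR might be populated as follows. The label and toleration values here are assumptions for the sketch, not defaults:

```yaml
apiVersion: logging.openshift.io/v1alpha1
kind: LogFileMetricExporter
metadata:
  name: instance
  namespace: openshift-logging
spec:
  nodeSelector:
    kubernetes.io/os: linux              # illustrative: schedule only on Linux nodes
  tolerations:
  - key: node-role.kubernetes.io/master  # illustrative: also tolerate the control-plane taint
    operator: Exists
    effect: NoSchedule
  resources:
    limits:
      cpu: 500m
      memory: 256Mi
    requests:
      cpu: 200m
      memory: 128Mi
```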

2.2. Configuring log collector CPU and memory limits

You can adjust both the CPU and memory limits for the log collector by editing the ClusterLogForwarder custom resource (CR).

Procedure

  • Edit the ClusterLogForwarder CR in the openshift-logging project:

    $ oc -n openshift-logging edit clusterlogforwarder.observability.openshift.io <clf_name>

    apiVersion: observability.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: <clf_name> # 1
      namespace: openshift-logging
    spec:
      collector:
        resources: # 2
          requests:
            memory: 736Mi
          limits:
            cpu: 100m
            memory: 736Mi
    # ...

    1. Specify a name for the ClusterLogForwarder CR.
    2. Specify the CPU and memory limits and requests as needed. The values shown are the default values.
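If you prefer a non-interactive change, the same stanza can be applied as a merge patch, for example with a hypothetical patch file and a command such as `oc patch clusterlogforwarder <clf_name> -n openshift-logging --type=merge --patch-file=collector-resources.yaml`. The resource values below are illustrative, not defaults:

```yaml
# collector-resources.yaml (illustrative values; the defaults are a
# 100m CPU limit and 736Mi memory request and limit)
spec:
  collector:
    resources:
      requests:
        memory: 1Gi
      limits:
        cpu: 200m
        memory: 1Gi
```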

2.3. Configuring input receivers

The Red Hat OpenShift Logging Operator deploys a service for each configured input receiver so that clients can write to the collector. This service exposes the port specified for the input receiver. For log forwarder ClusterLogForwarder CR deployments, the service name is in the <clusterlogforwarder_resource_name>-<input_name> format.

2.3.1. Configuring the collector to receive audit logs as an HTTP server

You can configure your log collector to listen for HTTP connections to only receive audit logs by specifying http as a receiver input in the ClusterLogForwarder custom resource (CR).

Important

HTTP receiver input is only supported for the following scenarios:

  • Logging is installed on hosted control planes.
  • Logs originate from a Red Hat-supported product that is installed on the same cluster as the Red Hat OpenShift Logging Operator. For example:

    • OpenShift Virtualization

Prerequisites

  • You have administrator permissions.
  • You have installed the OpenShift CLI (oc).
  • You have installed the Red Hat OpenShift Logging Operator.

Procedure

  1. Modify the ClusterLogForwarder CR to add configuration for the http receiver input:

    Example ClusterLogForwarder CR

    apiVersion: observability.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: <clusterlogforwarder_name> # 1
      namespace: <namespace>
    # ...
    spec:
      serviceAccount:
        name: <service_account_name>
      inputs:
      - name: http-receiver # 2
        type: receiver
        receiver:
          type: http # 3
          port: 8443 # 4
          http:
            format: kubeAPIAudit # 5
      outputs:
      - name: <output_name>
        type: http
        http:
          url: <url>
      pipelines: # 6
        - name: http-pipeline
          inputRefs:
            - http-receiver
          outputRefs:
            - <output_name>
    # ...

    1. Specify a name for the ClusterLogForwarder CR.
    2. Specify a name for your input receiver.
    3. Specify the input receiver type as http.
    4. Optional: Specify the port that the input receiver listens on. This must be a value between 1024 and 65535. The default value is 8443 if this is not specified.
    5. Currently, only the kube-apiserver webhook format is supported for http input receivers.
    6. Configure a pipeline for your input receiver.
  2. Apply the changes to the ClusterLogForwarder CR by running the following command:

    $ oc apply -f <filename>.yaml
  3. Verify that the collector is listening on the service that has a name in the <clusterlogforwarder_resource_name>-<input_name> format by running the following command:

    $ oc get svc

    Example output

    NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)            AGE
    collector                 ClusterIP   172.30.85.239    <none>        24231/TCP          3m6s
    collector-http-receiver   ClusterIP   172.30.205.160   <none>        8443/TCP           3m6s

    In the example, the service name is collector-http-receiver.

Verification

  1. Extract the certificate authority (CA) certificate file by running the following command:

    $ oc extract cm/openshift-service-ca.crt -n <namespace>
    Note

    If the CA in the cluster where the collectors are running changes, you must extract the CA certificate file again.

  2. As an example, use the curl command to send logs by running the following command:

    $ curl --cacert <openshift_service_ca.crt> https://collector-http-receiver.<namespace>.svc:8443 -XPOST -d '{"<prefix>":"<message>"}'

    Replace <openshift_service_ca.crt> with the extracted CA certificate file.
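Beyond the curl example, the payload can also be constructed programmatically. The following Python sketch builds a minimal event in the kube-apiserver audit format that the kubeAPIAudit receiver expects; the field values are illustrative, and sending the payload still requires the extracted service CA as in the curl command above:

```python
import json

def build_audit_event(verb, resource, user="system:admin"):
    """Return a minimal event dict in the kube-apiserver audit (audit.k8s.io/v1) format.

    The fields shown are illustrative; a real kube-apiserver event carries
    additional fields such as auditID, stage, and timestamps.
    """
    return {
        "apiVersion": "audit.k8s.io/v1",
        "kind": "Event",
        "level": "Metadata",
        "verb": verb,
        "user": {"username": user},
        "objectRef": {"resource": resource},
    }

payload = json.dumps(build_audit_event("get", "pods"))
# Hypothetical delivery, mirroring the curl verification step above:
#   curl --cacert openshift-service-ca.crt \
#     https://collector-http-receiver.<namespace>.svc:8443 -XPOST -d "$payload"
print(payload)
```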

2.3.2. Configuring the collector to listen for connections as a syslog server

You can configure your log collector to collect journal format infrastructure logs by specifying syslog as a receiver input in the ClusterLogForwarder custom resource (CR).

Important

Syslog receiver input is only supported for the following scenarios:

  • Logging is installed on hosted control planes.
  • Logs originate from a Red Hat-supported product that is installed on the same cluster as the Red Hat OpenShift Logging Operator. For example:

    • Red Hat OpenStack Services on OpenShift (RHOSO)
    • OpenShift Virtualization

Prerequisites

  • You have administrator permissions.
  • You have installed the OpenShift CLI (oc).
  • You have installed the Red Hat OpenShift Logging Operator.

Procedure

  1. Grant the collect-infrastructure-logs cluster role to the service account by running the following command:

    Example binding command

    $ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z logcollector

  2. Modify the ClusterLogForwarder CR to add configuration for the syslog receiver input:

    Example ClusterLogForwarder CR

    apiVersion: observability.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: <clusterlogforwarder_name> # 1
      namespace: <namespace>
    # ...
    spec:
      serviceAccount:
        name: <service_account_name> # 2
      inputs:
        - name: syslog-receiver # 3
          type: receiver
          receiver:
            type: syslog # 4
            port: 10514 # 5
      outputs:
      - name: <output_name>
        type: lokiStack
        lokiStack:
          authentication:
            token:
              from: serviceAccount
          target:
            name: logging-loki
            namespace: openshift-logging
        tls: # 6
          ca:
            key: service-ca.crt
            configMapName: openshift-service-ca.crt
      pipelines: # 7
        - name: syslog-pipeline
          inputRefs:
            - syslog-receiver
          outputRefs:
            - <output_name>
    # ...

    1, 2. Use the service account that you granted the collect-infrastructure-logs permission in the previous step.
    3. Specify a name for your input receiver.
    4. Specify the input receiver type as syslog.
    5. Optional: Specify the port that the input receiver listens on. This must be a value between 1024 and 65535.
    6. If TLS configuration is not set, the default certificates are used. For more information, run the command oc explain clusterlogforwarders.spec.inputs.receiver.tls.
    7. Configure a pipeline for your input receiver.
  3. Apply the changes to the ClusterLogForwarder CR by running the following command:

    $ oc apply -f <filename>.yaml
  4. Verify that the collector is listening on the service that has a name in the <clusterlogforwarder_resource_name>-<input_name> format by running the following command:

    $ oc get svc

    Example output

    NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)            AGE
    collector                   ClusterIP   172.30.85.239    <none>        24231/TCP          33m
    collector-syslog-receiver   ClusterIP   172.30.216.142   <none>        10514/TCP          2m20s

    In this example output, the service name is collector-syslog-receiver.

Verification

  1. Extract the certificate authority (CA) certificate file by running the following command:

    $ oc extract cm/openshift-service-ca.crt -n <namespace>
    Note

    If the CA in the cluster where the collectors are running changes, you must extract the CA certificate file again.

  2. As an example, use the curl command to send logs by running the following command:

    $ curl --cacert <openshift_service_ca.crt> collector-syslog-receiver.<namespace>.svc:10514 "test message"

    Replace <openshift_service_ca.crt> with the extracted CA certificate file.
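As another illustration, a test message can be formatted and sent programmatically. The following Python sketch builds a minimal RFC 5424 syslog line and sends it over plain TCP; the hostname, app name, and the assumption that the receiver accepts unframed newline-terminated messages are hypothetical, and whether TLS is required depends on the tls configuration described in the callouts above:

```python
import socket
from datetime import datetime, timezone

def format_rfc5424(msg, hostname="test-host", app="test-app", pri=14):
    """Return a minimal RFC 5424 syslog line: <PRI>1 TIMESTAMP HOST APP - - - MSG."""
    ts = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    return f"<{pri}>1 {ts} {hostname} {app} - - - {msg}"

def send_syslog(line, host="collector-syslog-receiver.<namespace>.svc", port=10514):
    """Send one newline-terminated syslog message over plain TCP (illustrative)."""
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(line.encode() + b"\n")

print(format_rfc5424("test message"))
```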

2.4. Configuring pod rollout strategy

You can configure the rollout strategy for the collector pods so that requests to the API server are minimized during upgrades and restarts.

Note

Unlike in previous releases, caching of Kubernetes API server requests is always enabled.

Prerequisites

  • You have administrator permissions.
  • You installed the OpenShift CLI (oc).
  • You installed and configured Red Hat OpenShift Logging Operator.
  • You have created a ClusterLogForwarder custom resource (CR).

Procedure

  1. Update the ClusterLogForwarder CR:

    apiVersion: observability.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: my-forwarder
      namespace: openshift-logging
    spec:
      collector:
        maxUnavailable: <value>

    Where <value> can be an absolute number or a percentage. If you do not specify a value, the default value of 100% is used for the maxUnavailable field.
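As a sketch, limiting the rollout to a tenth of the collector pods at a time would look like the following; the 10% value is illustrative, not a recommendation:

```yaml
spec:
  collector:
    # Update at most 10% of the collector pods at a time during a rollout.
    # An absolute number, for example 2, is also accepted.
    maxUnavailable: "10%"
```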

  2. Apply the ClusterLogForwarder CR by running the following command:

    $ oc apply -f <filename>.yaml

2.5. Network policies to override restrictive network policies in a cluster

Red Hat OpenShift Logging Operator optionally provides permissive NetworkPolicy resources to override any restrictive network policies present in an OpenShift Container Platform cluster.

The NetworkPolicy resource ensures that all ingress and egress traffic is allowed even when the following restrictions are in place:

  • A restrictive default NetworkPolicy resource has been defined in the cluster.
  • An AdminNetworkPolicy configuration that limits pod communication is applied.
  • Namespace-level network restrictions are defined.

Red Hat OpenShift Logging Operator provides permissive NetworkPolicy resources for the collector and the LogFileMetricExporter.

2.5.1. Collector network policy

If you specify a network policy rule set in the ClusterLogForwarder custom resource (CR), the Red Hat OpenShift Logging Operator creates a NetworkPolicy resource for the collector pods when the ClusterLogForwarder resource is deployed. NetworkPolicy resources are automatically created, updated, and removed along with the component deployment lifecycle.

You can specify the network policy rule set by defining the networkPolicy.ruleSet field in the collector specification. The collector supports the following network policy rule set types:

AllowAllIngressEgress
Allows all ingress and egress traffic.
RestrictIngressEgress
Restricts traffic to specific ports based on the configured inputs, outputs, and the metrics port.

If you do not define a spec.collector.networkPolicy field, the Operator will not create a NetworkPolicy resource for the collector. Without a NetworkPolicy resource, logging components might not function if a cluster-wide AdminNetworkPolicy restricts traffic or other restrictive network policies are in place.

2.5.2. Configuring network policy rule set for a collector

Define a network policy rule set so that Red Hat OpenShift Logging Operator creates a NetworkPolicy resource for the collector pods when the ClusterLogForwarder resource is deployed. The NetworkPolicy resource overrides network restrictions to ensure that the permitted ingress and egress traffic for the collector pods is allowed to flow.

Prerequisites

  • You have administrator permissions.
  • You have installed the OpenShift CLI (oc).
  • You have installed the Red Hat OpenShift Logging Operator.
  • You have created a ClusterLogForwarder custom resource (CR).

Procedure

  1. Update the ClusterLogForwarder CR:

    apiVersion: observability.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: my-forwarder
      namespace: openshift-logging
    spec:
      collector:
        networkPolicy:
          ruleSet: <RestrictIngressEgress_or_AllowAllIngressEgress>
      # ...

    Where the value for the ruleSet field can be either RestrictIngressEgress or AllowAllIngressEgress.

  2. Apply the configuration by running the following command:

    $ oc apply -f <filename>.yaml

2.5.3. LogFileMetricExporter network policy

If you specify a network policy rule set in the LogFileMetricExporter custom resource (CR), the Red Hat OpenShift Logging Operator creates a NetworkPolicy resource for the metric exporter pods when the LogFileMetricExporter resource is deployed. NetworkPolicy resources are automatically created, updated, and removed along with the component deployment lifecycle.

The LogFileMetricExporter resource supports the following network policy rule set types:

AllowIngressMetrics
Allows ingress traffic only on the metrics port, denies all egress traffic.
AllowAllIngressEgress
Allows all ingress and egress traffic.

If you do not define a spec.networkPolicy field, the Operator will not create a NetworkPolicy resource for the LogFileMetricExporter. Without a NetworkPolicy resource, logging components might not function if a cluster-wide AdminNetworkPolicy restricts traffic or other restrictive network policies are in place.

2.5.4. Configuring network policy rule set for LogFileMetricExporter

Define a network policy rule set so that Red Hat OpenShift Logging Operator creates a NetworkPolicy resource for the LogFileMetricExporter pods when the LogFileMetricExporter resource is deployed. The NetworkPolicy resource overrides network restrictions to ensure that the permitted ingress and egress traffic for the LogFileMetricExporter pods is allowed to flow.

Prerequisites

  • You have administrator permissions.
  • You have installed the OpenShift CLI (oc).
  • You have installed the Red Hat OpenShift Logging Operator.
  • You have created a LogFileMetricExporter CR.

Procedure

  1. Update the LogFileMetricExporter CR:

    apiVersion: logging.openshift.io/v1alpha1
    kind: LogFileMetricExporter
    metadata:
      name: my-lfme
      namespace: openshift-logging
    spec:
      networkPolicy:
        ruleSet: <AllowAllIngressEgress_or_AllowIngressMetrics>
    #...

    Where the value for the ruleSet field can be either AllowAllIngressEgress or AllowIngressMetrics.

  2. Apply the configuration by running the following command:

    $ oc apply -f <filename>.yaml

2.5.5. Creating an AdminNetworkPolicy rule that delegates to network policies

To ensure that the pods running resources managed by Red Hat OpenShift Logging Operator can communicate when an AdminNetworkPolicy configuration is blocking traffic, create an AdminNetworkPolicy rule that delegates to a NetworkPolicy definition.

OpenShift Container Platform network policy precedence:

  • AdminNetworkPolicy: Cluster-admin controlled, highest priority.
  • BaselineAdminNetworkPolicy: Default fallback rules.
  • NetworkPolicy: Namespace-level policies. This is where collector policies reside.

Prerequisites

  • You have administrator permissions.
  • You installed the OpenShift CLI (oc).

Procedure

  1. Create an AdminNetworkPolicy rule:

    apiVersion: policy.networking.k8s.io/v1alpha1
    kind: AdminNetworkPolicy
    metadata:
      name: allow-logging-collector-delegation
    spec:
      priority: 50 # Adjust based on your cluster's ANP priority scheme. Lower number means higher priority
      subject:
        pods: # Target the pods in openshift-logging namespace
          namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: openshift-logging
          podSelector:
            matchLabels:
              app.kubernetes.io/name: <name>
              app.kubernetes.io/instance: <instance>
              app.kubernetes.io/managed-by: cluster-logging-operator
              app.kubernetes.io/part-of: cluster-logging
              app.kubernetes.io/component: <component>
      ingress:
        - name: "delegate-to-collector-ingress"
          action: "Pass"
          from:
            # Select all pods in all namespaces as the source.
            - pods:
                namespaceSelector: {}
                podSelector: {}
      egress:
        - name: "delegate-to-collector-egress"
          action: "Pass"
          to:
            # Select all pods in all namespaces for internal traffic.
            - pods:
                namespaceSelector: {}
                podSelector: {}
            # Select all external destinations for egress traffic.
            - networks:
              - "0.0.0.0/0"
              - "::/0"

    To apply this network policy to all the resources in a namespace, remove the app.kubernetes.io/name, app.kubernetes.io/instance, and app.kubernetes.io/component fields. To apply the admin network policy to specific resources, define the fields as follows:

    • Replace <name> with the name of the collector or the LogFileMetricExporter resource.
    • Replace <instance> with the instance name of the collector or the LogFileMetricExporter resource.
    • Replace <component> with the value collector or logfilesmetricexporter.
  2. Apply the AdminNetworkPolicy rule by running the following command:

    $ oc apply -f <filename>.yaml