Chapter 2. Configuring the logging collector
Logging for Red Hat OpenShift collects operations and application logs from your cluster and enriches the data with Kubernetes pod and project metadata. All supported modifications to the log collector can be performed through the spec.collection stanza in the ClusterLogForwarder custom resource (CR).
2.1. Creating a LogFileMetricExporter resource
You must manually create a LogFileMetricExporter custom resource (CR) to generate metrics from the logs produced by running containers, because it is not deployed with the collector by default.
If you do not create the LogFileMetricExporter CR, you might see a No datapoints found message in the OpenShift Container Platform web console dashboard for the Produced Logs field.
Prerequisites
- You have administrator permissions.
- You have installed the Red Hat OpenShift Logging Operator.
- You have installed the OpenShift CLI (oc).
Procedure
Create a LogFileMetricExporter CR as a YAML file:

Example LogFileMetricExporter CR

apiVersion: logging.openshift.io/v1alpha1
kind: LogFileMetricExporter
metadata:
  name: instance
  namespace: openshift-logging
spec:
  nodeSelector: {}
  resources:
    limits:
      cpu: 500m
      memory: 256Mi
    requests:
      cpu: 200m
      memory: 128Mi
  tolerations: []
# ...

Apply the LogFileMetricExporter CR by running the following command:

$ oc apply -f <filename>.yaml
Verification
Verify that the logfilesmetricexporter pods are running in the namespace where you created the LogFileMetricExporter CR by running the following command and observing the output:

$ oc get pods -l app.kubernetes.io/component=logfilesmetricexporter -n openshift-logging

Example output

NAME                           READY   STATUS    RESTARTS   AGE
logfilesmetricexporter-9qbjj   1/1     Running   0          2m46s
logfilesmetricexporter-cbc4v   1/1     Running   0          2m46s

A logfilesmetricexporter pod runs concurrently with a collector pod on each node.
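Because one exporter pod runs per node, you can sanity-check the pod count. A minimal local sketch that filters the sample listing above for pods in the Running state; on a live cluster, pipe the real `oc get pods` output into the same filter instead of the sample text:

```shell
# Sample listing copied from the example output above
sample='logfilesmetricexporter-9qbjj 1/1 Running 0 2m46s
logfilesmetricexporter-cbc4v 1/1 Running 0 2m46s'

# Count pods in the Running state (third column); on a live cluster, pipe
# `oc get pods -l app.kubernetes.io/component=logfilesmetricexporter -n openshift-logging --no-headers`
# into the same awk filter instead of the sample text
printf '%s\n' "$sample" | awk '$3 == "Running"' | wc -l
```

The count should match the number of nodes in the cluster.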
2.2. Configuring log collector CPU and memory limits
You can adjust both the CPU and memory limits for the log collector by editing the ClusterLogForwarder custom resource (CR).
Procedure
Edit the ClusterLogForwarder CR in the openshift-logging project:

$ oc -n openshift-logging edit clusterlogforwarder.observability.openshift.io <clf_name>

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <clf_name>
  namespace: openshift-logging
spec:
  collector:
    resources:
      requests:
        memory: 736Mi
      limits:
        cpu: 100m
        memory: 736Mi
# ...
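Instead of an interactive edit, the same change can be applied non-interactively. A minimal sketch using a merge patch; the patch body mirrors the resources stanza above, and the local JSON check is optional:

```shell
# Hypothetical merge patch mirroring the spec.collector.resources stanza above
patch='{"spec":{"collector":{"resources":{"requests":{"memory":"736Mi"},"limits":{"cpu":"100m","memory":"736Mi"}}}}}'

# Optional local sanity check that the patch is valid JSON
printf '%s' "$patch" | python3 -m json.tool > /dev/null && echo "patch OK"

# Against a live cluster you could then run (assumption: <clf_name> exists):
# oc -n openshift-logging patch clusterlogforwarder.observability.openshift.io <clf_name> --type=merge -p "$patch"
```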
2.3. Configuring input receivers
The Red Hat OpenShift Logging Operator deploys a service for each configured input receiver so that clients can write to the collector. This service exposes the port specified for the input receiver. For ClusterLogForwarder CR deployments, the service name follows the <clusterlogforwarder_resource_name>-<input_name> format.
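As an illustration, the service name is just the CR name and the input name joined with a hyphen; a minimal sketch with hypothetical names:

```shell
# Hypothetical names used only to illustrate the naming convention
clf_name="collector"        # ClusterLogForwarder resource name
input_name="http-receiver"  # input receiver name

# The service the Operator creates is named <clf_name>-<input_name>
echo "${clf_name}-${input_name}"
```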
2.3.1. Configuring the collector to receive audit logs as an HTTP server
You can configure your log collector to listen for HTTP connections and receive only audit logs by specifying http as a receiver input in the ClusterLogForwarder custom resource (CR).
HTTP receiver input is only supported for the following scenarios:
- Logging is installed on hosted control planes.
- When logs originate from a Red Hat-supported product that is installed on the same cluster as the Red Hat OpenShift Logging Operator. For example:
  - OpenShift Virtualization
Prerequisites
- You have administrator permissions.
- You have installed the OpenShift CLI (oc).
- You have installed the Red Hat OpenShift Logging Operator.
Procedure
Modify the ClusterLogForwarder CR to add configuration for the http receiver input:

Example ClusterLogForwarder CR

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <clusterlogforwarder_name> # 1
  namespace: <namespace>
# ...
spec:
  serviceAccount:
    name: <service_account_name>
  inputs:
  - name: http-receiver # 2
    type: receiver
    receiver:
      type: http # 3
      port: 8443 # 4
      http:
        format: kubeAPIAudit # 5
  outputs:
  - name: <output_name>
    type: http
    http:
      url: <url>
  pipelines: # 6
  - name: http-pipeline
    inputRefs:
    - http-receiver
    outputRefs:
    - <output_name>
# ...

1. Specify a name for the ClusterLogForwarder CR.
2. Specify a name for your input receiver.
3. Specify the input receiver type as http.
4. Optional: Specify the port that the input receiver listens on. This must be a value between 1024 and 65535. The default value is 8443 if this is not specified.
5. Currently, only the kube-apiserver webhook format is supported for http input receivers.
6. Configure a pipeline for your input receiver.
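The receiver port must be between 1024 and 65535, with 8443 as the default. A minimal local sketch of that check (illustrative only; RECEIVER_PORT is a hypothetical variable, not part of the product):

```shell
# Illustrative check of the documented receiver port range;
# RECEIVER_PORT is a hypothetical variable, 8443 is the documented default
port="${RECEIVER_PORT:-8443}"
if [ "$port" -ge 1024 ] && [ "$port" -le 65535 ]; then
  echo "port $port is valid"
else
  echo "port $port is out of range" >&2
  exit 1
fi
```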
Apply the changes to the ClusterLogForwarder CR by running the following command:

$ oc apply -f <filename>.yaml

Verify that the collector is listening on the service that has a name in the <clusterlogforwarder_resource_name>-<input_name> format by running the following command:

$ oc get svc

Example output

NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
collector                 ClusterIP   172.30.85.239    <none>        24231/TCP   3m6s
collector-http-receiver   ClusterIP   172.30.205.160   <none>        8443/TCP    3m6s

In this example output, the service name is collector-http-receiver.
Verification
Extract the certificate authority (CA) certificate file by running the following command:

$ oc extract cm/openshift-service-ca.crt -n <namespace>

Note: If the CA in the cluster where the collectors are running changes, you must extract the CA certificate file again.

As an example, use the curl command to send logs by running the following command:

$ curl --cacert <openshift_service_ca.crt> https://collector-http-receiver.<namespace>.svc:8443 -XPOST -d '{"<prefix>":"<message>"}'

Replace <openshift_service_ca.crt> with the extracted CA certificate file.
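Because the receiver expects kube-apiserver audit-style events, you can assemble a minimal audit-style JSON payload locally before posting it. A sketch with placeholder values (these are not output from a real API server):

```shell
# Minimal audit-style Event with placeholder values (not real API server output)
payload='{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"00000000-0000-0000-0000-000000000000","stage":"ResponseComplete","requestURI":"/api/v1/namespaces","verb":"get"}'

# Local sanity check that the payload is valid JSON
printf '%s' "$payload" | python3 -m json.tool > /dev/null && echo "payload OK"

# Then post it to the receiver service, as in the curl example above:
# curl --cacert <openshift_service_ca.crt> https://collector-http-receiver.<namespace>.svc:8443 -XPOST -d "$payload"
```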
2.3.2. Configuring the collector to listen for connections as a syslog server
You can configure your log collector to collect journal format infrastructure logs by specifying syslog as a receiver input in the ClusterLogForwarder custom resource (CR).
Syslog receiver input is only supported for the following scenarios:
- Logging is installed on hosted control planes.
- When logs originate from a Red Hat-supported product that is installed on the same cluster as the Red Hat OpenShift Logging Operator. For example:
  - Red Hat OpenStack Services on OpenShift (RHOSO)
  - OpenShift Virtualization
Prerequisites
- You have administrator permissions.
- You have installed the OpenShift CLI (oc).
- You have installed the Red Hat OpenShift Logging Operator.
Procedure
Grant the collect-infrastructure-logs cluster role to the service account by running the following command:

Example binding command

$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z logcollector

Modify the ClusterLogForwarder CR to add configuration for the syslog receiver input:

Example ClusterLogForwarder CR

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <clusterlogforwarder_name> # 1
  namespace: <namespace>
# ...
spec:
  serviceAccount:
    name: <service_account_name> # 2
  inputs:
  - name: syslog-receiver # 3
    type: receiver
    receiver:
      type: syslog # 4
      port: 10514 # 5
  outputs:
  - name: <output_name>
    lokiStack:
      authentication:
        token:
          from: serviceAccount
      target:
        name: logging-loki
        namespace: openshift-logging
    tls: # 6
      ca:
        key: service-ca.crt
        configMapName: openshift-service-ca.crt
    type: lokiStack
# ...
  pipelines: # 7
  - name: syslog-pipeline
    inputRefs:
    - syslog-receiver
    outputRefs:
    - <output_name>
# ...

1, 2. Use the service account that you granted the collect-infrastructure-logs permission in the previous step.
3. Specify a name for your input receiver.
4. Specify the input receiver type as syslog.
5. Optional: Specify the port that the input receiver listens on. This must be a value between 1024 and 65535.
6. If TLS configuration is not set, the default certificates will be used. For more information, run the command oc explain clusterlogforwarders.spec.inputs.receiver.tls.
7. Configure a pipeline for your input receiver.
Apply the changes to the ClusterLogForwarder CR by running the following command:

$ oc apply -f <filename>.yaml

Verify that the collector is listening on the service that has a name in the <clusterlogforwarder_resource_name>-<input_name> format by running the following command:

$ oc get svc

Example output

NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
collector                   ClusterIP   172.30.85.239    <none>        24231/TCP   33m
collector-syslog-receiver   ClusterIP   172.30.216.142   <none>        10514/TCP   2m20s

In this example output, the service name is collector-syslog-receiver.
Verification
Extract the certificate authority (CA) certificate file by running the following command:

$ oc extract cm/openshift-service-ca.crt -n <namespace>

Note: If the CA in the cluster where the collectors are running changes, you must extract the CA certificate file again.

As an example, use the curl command to send logs by running the following command:

$ curl --cacert <openshift_service_ca.crt> collector-syslog-receiver.<namespace>.svc:10514 "test message"

Replace <openshift_service_ca.crt> with the extracted CA certificate file.
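Syslog senders commonly emit RFC 5424-formatted lines. A minimal sketch of composing one locally (illustrative only; framing details can vary by sender, and the hostname and app name here are placeholders):

```shell
# Compose a minimal RFC 5424 line; PRI 14 = facility 1 (user) * 8 + severity 6 (info)
timestamp="2024-01-01T00:00:00Z"  # fixed placeholder timestamp
msg="<14>1 ${timestamp} myhost myapp - - - test message"
echo "$msg"

# On a live cluster, the util-linux logger tool can send over TCP, for example:
# logger --server collector-syslog-receiver.<namespace>.svc --port 10514 --tcp "test message"
```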