Chapter 2. Configuring the logging collector
Logging for Red Hat OpenShift collects operations and application logs from your cluster and enriches the data with Kubernetes pod and project metadata. All supported modifications to the log collector can be performed through the spec.collection stanza in the ClusterLogForwarder custom resource (CR).
		
2.1. Creating a LogFileMetricExporter resource
				You must manually create a LogFileMetricExporter custom resource (CR) to generate metrics from the logs produced by running containers, because it is not deployed with the collector by default.
			
					If you do not create the LogFileMetricExporter CR, you might see a No datapoints found message in the OpenShift Container Platform web console dashboard for the Produced Logs field.
				
Prerequisites
- You have administrator permissions.
- You have installed the Red Hat OpenShift Logging Operator.
- You have installed the OpenShift CLI (oc).
Procedure
- Create a LogFileMetricExporter CR as a YAML file.
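A minimal sketch of a LogFileMetricExporter CR, assuming the logging.openshift.io/v1alpha1 API version and the openshift-logging namespace; the resource limits and requests shown are illustrative values, not required settings:

```yaml
apiVersion: logging.openshift.io/v1alpha1
kind: LogFileMetricExporter
metadata:
  name: instance
  namespace: openshift-logging
spec:
  # Optional: CPU and memory limits for the exporter pods (illustrative values)
  resources:
    limits:
      cpu: 500m
      memory: 256Mi
    requests:
      cpu: 200m
      memory: 128Mi
```

Verify the exact schema for your installed Operator version with `oc explain logfilemetricexporters.spec`.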
- Apply the LogFileMetricExporter CR by running the following command:

  $ oc apply -f <filename>.yaml
Verification
- Verify that the logfilesmetricexporter pods are running in the namespace where you created the LogFileMetricExporter CR, by running the following command and observing the output:

  $ oc get pods -l app.kubernetes.io/component=logfilesmetricexporter -n openshift-logging

  Example output

  NAME                           READY   STATUS    RESTARTS   AGE
  logfilesmetricexporter-9qbjj   1/1     Running   0          2m46s
  logfilesmetricexporter-cbc4v   1/1     Running   0          2m46s

  A logfilesmetricexporter pod runs concurrently with a collector pod on each node.
2.2. Configuring log collector CPU and memory limits
				You can adjust both the CPU and memory limits for the log collector by editing the ClusterLogForwarder custom resource (CR).
			
Procedure
- Edit the ClusterLogForwarder CR in the openshift-logging project by running the following command:

  $ oc -n openshift-logging edit ClusterLogForwarder instance
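A sketch of the relevant stanza, assuming the observability.openshift.io/v1 API where collector resources are set under spec.collector.resources; the field path can differ between Logging versions, so verify it with `oc explain clusterlogforwarders.spec`. The CPU and memory values are illustrative:

```yaml
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  collector:
    # Illustrative values; tune to your cluster's log volume
    resources:
      limits:
        cpu: 500m
        memory: 1Gi
      requests:
        cpu: 100m
        memory: 64Mi
```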
2.3. Configuring input receivers
The Red Hat OpenShift Logging Operator deploys a service for each configured input receiver so that clients can write to the collector. This service exposes the port specified for the input receiver. For ClusterLogForwarder CR deployments, the service name is in the <clusterlogforwarder_resource_name>-<input_name> format.
			
2.3.1. Configuring the collector to receive audit logs as an HTTP server
					You can configure your log collector to listen for HTTP connections to only receive audit logs by specifying http as a receiver input in the ClusterLogForwarder custom resource (CR).
				
HTTP receiver input is only supported for the following scenarios:
- Logging is installed on hosted control planes.
- When logs originate from a Red Hat-supported product that is installed on the same cluster as the Red Hat OpenShift Logging Operator. For example:
  - OpenShift Virtualization
 
Prerequisites
- You have administrator permissions.
- You have installed the OpenShift CLI (oc).
- You have installed the Red Hat OpenShift Logging Operator.
Procedure
- Modify the ClusterLogForwarder CR to add configuration for the http receiver input:

  Example ClusterLogForwarder CR

  1 Specify a name for the ClusterLogForwarder CR.
  2 Specify a name for your input receiver.
  3 Specify the input receiver type as http.
  4 Optional: Specify the port that the input receiver listens on. This must be a value between 1024 and 65535. The default value is 8443 if this is not specified.
  5 Currently, only the kube-apiserver webhook format is supported for http input receivers.
  6 Configure a pipeline for your input receiver.
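A minimal sketch of such a CR, assuming the observability.openshift.io/v1 API and placeholder names for the resource, service account, and output; the numbered comments correspond to the callouts above:

```yaml
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <name> # 1
  namespace: openshift-logging
spec:
  serviceAccount:
    name: <service_account_name>
  inputs:
  - name: http-receiver # 2
    type: receiver
    receiver:
      type: http # 3
      port: 8443 # 4
      http:
        format: kubeAPIAudit # 5
  pipelines: # 6
  - name: http-pipeline
    inputRefs:
    - http-receiver
    outputRefs:
    - <output_name>
```

Verify the exact field names for your version with `oc explain clusterlogforwarders.spec.inputs.receiver`.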
 
- Apply the changes to the ClusterLogForwarder CR by running the following command:

  $ oc apply -f <filename>.yaml
- Verify that the collector is listening on the service that has a name in the <clusterlogforwarder_resource_name>-<input_name> format by running the following command:

  $ oc get svc

  Example output

  NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
  collector                 ClusterIP   172.30.85.239    <none>        24231/TCP   3m6s
  collector-http-receiver   ClusterIP   172.30.205.160   <none>        8443/TCP    3m6s

  In the example, the service name is collector-http-receiver.
Verification
- Extract the certificate authority (CA) certificate file by running the following command:

  $ oc extract cm/openshift-service-ca.crt -n <namespace>

  Note: If the CA in the cluster where the collectors are running changes, you must extract the CA certificate file again.
- As an example, use the curl command to send logs by running the following command:

  $ curl --cacert <openshift_service_ca.crt> https://collector-http-receiver.<namespace>.svc:8443 -XPOST -d '{"<prefix>":"<message>"}'

  Replace <openshift_service_ca.crt> with the extracted CA certificate file.
2.3.2. Configuring the collector to listen for connections as a syslog server
					You can configure your log collector to collect journal format infrastructure logs by specifying syslog as a receiver input in the ClusterLogForwarder custom resource (CR).
				
Syslog receiver input is only supported for the following scenarios:
- Logging is installed on hosted control planes.
- When logs originate from a Red Hat-supported product that is installed on the same cluster as the Red Hat OpenShift Logging Operator. For example:
  - Red Hat OpenStack Services on OpenShift (RHOSO)
  - OpenShift Virtualization
 
Prerequisites
- You have administrator permissions.
- You have installed the OpenShift CLI (oc).
- You have installed the Red Hat OpenShift Logging Operator.
Procedure
- Grant the collect-infrastructure-logs cluster role to the service account by running the following command:

  Example binding command

  $ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z logcollector
- Modify the ClusterLogForwarder CR to add configuration for the syslog receiver input:

  Example ClusterLogForwarder CR

  1 2 Use the service account that you granted the collect-infrastructure-logs permission in the previous step.
  3 Specify a name for your input receiver.
  4 Specify the input receiver type as syslog.
  5 Optional: Specify the port that the input receiver listens on. This must be a value between 1024 and 65535.
  6 If TLS configuration is not set, the default certificates will be used. For more information, run the command oc explain clusterlogforwarders.spec.inputs.receiver.tls.
  7 Configure a pipeline for your input receiver.
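A minimal sketch of such a CR, assuming the observability.openshift.io/v1 API and placeholder names; the numbered comments correspond to the callouts above. The tls stanza (callout 6) is omitted here, so the default certificates would be used:

```yaml
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <name>
  namespace: openshift-logging
spec:
  serviceAccount:
    name: logcollector # 1 2
  inputs:
  - name: syslog-receiver # 3
    type: receiver
    receiver:
      type: syslog # 4
      port: 10514 # 5
  pipelines: # 7
  - name: syslog-pipeline
    inputRefs:
    - syslog-receiver
    outputRefs:
    - <output_name>
```

Verify the exact field names for your version with `oc explain clusterlogforwarders.spec.inputs.receiver`.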
 
- Apply the changes to the ClusterLogForwarder CR by running the following command:

  $ oc apply -f <filename>.yaml
- Verify that the collector is listening on the service that has a name in the <clusterlogforwarder_resource_name>-<input_name> format by running the following command:

  $ oc get svc

  Example output

  NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
  collector                   ClusterIP   172.30.85.239    <none>        24231/TCP   33m
  collector-syslog-receiver   ClusterIP   172.30.216.142   <none>        10514/TCP   2m20s

  In this example output, the service name is collector-syslog-receiver.
Verification
- Extract the certificate authority (CA) certificate file by running the following command:

  $ oc extract cm/openshift-service-ca.crt -n <namespace>

  Note: If the CA in the cluster where the collectors are running changes, you must extract the CA certificate file again.
- As an example, use the curl command to send logs by running the following command:

  $ curl --cacert <openshift_service_ca.crt> collector-syslog-receiver.<namespace>.svc:10514 "test message"

  Replace <openshift_service_ca.crt> with the extracted CA certificate file.