Chapter 2. Configuring the logging collector
Logging for Red Hat OpenShift collects operations and application logs from your cluster and enriches the data with Kubernetes pod and project metadata. All supported modifications to the log collector can be performed through the spec.collector stanza in the ClusterLogForwarder custom resource (CR).
2.1. Creating a LogFileMetricExporter resource
You must manually create a LogFileMetricExporter custom resource (CR) to generate metrics from the logs produced by running containers, because it is not deployed with the collector by default.
If you do not create the LogFileMetricExporter CR, you might see a No datapoints found message in the OpenShift Container Platform web console dashboard for the Produced Logs field.
Prerequisites
- You have administrator permissions.
- You have installed the Red Hat OpenShift Logging Operator.
- You have installed the OpenShift CLI (oc).
Procedure
Create a LogFileMetricExporter CR as a YAML file:

Example LogFileMetricExporter CR
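A minimal sketch of such a CR, assuming the commonly used name instance in the openshift-logging namespace; the resource values and tolerations are illustrative and optional:

apiVersion: logging.openshift.io/v1alpha1
kind: LogFileMetricExporter
metadata:
  name: instance
  namespace: openshift-logging
spec:
  # Optional: illustrative resource requests and limits for the exporter pods
  resources:
    limits:
      cpu: 500m
      memory: 256Mi
    requests:
      cpu: 200m
      memory: 128Mi
  # Optional: tolerations so the exporter also runs on control plane nodes
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule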
Apply the LogFileMetricExporter CR by running the following command:

$ oc apply -f <filename>.yaml
Verification
Verify that the logfilesmetricexporter pods are running in the namespace where you have created the LogFileMetricExporter CR by running the following command and observing the output:

$ oc get pods -l app.kubernetes.io/component=logfilesmetricexporter -n openshift-logging

Example output

NAME                           READY   STATUS    RESTARTS   AGE
logfilesmetricexporter-9qbjj   1/1     Running   0          2m46s
logfilesmetricexporter-cbc4v   1/1     Running   0          2m46s

A logfilesmetricexporter pod runs concurrently with a collector pod on each node.
2.2. Configure log collector CPU and memory limits
You can adjust both the CPU and memory limits for the log collector by editing the ClusterLogForwarder custom resource (CR).
Procedure
Edit the ClusterLogForwarder CR in the openshift-logging project:

$ oc -n openshift-logging edit clusterlogforwarder.observability.openshift.io <clf_name>
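A minimal sketch of the relevant stanza, assuming the resources field sits under spec.collector (verify the exact path with oc explain clusterlogforwarders.spec.collector); the CPU and memory values are illustrative:

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <clf_name>
  namespace: openshift-logging
spec:
  collector:
    resources:
      limits:        # illustrative values; adjust for your environment
        cpu: 6000m
        memory: 2Gi
      requests:
        cpu: 500m
        memory: 64Mi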
2.3. Configuring input receivers
The Red Hat OpenShift Logging Operator deploys a service for each configured input receiver so that clients can write to the collector. This service exposes the port specified for the input receiver. For ClusterLogForwarder CR deployments, the service name is in the <clusterlogforwarder_resource_name>-<input_name> format.
2.3.1. Configuring the collector to receive audit logs as an HTTP server
You can configure your log collector to listen for HTTP connections to only receive audit logs by specifying http as a receiver input in the ClusterLogForwarder custom resource (CR).
HTTP receiver input is only supported for the following scenarios:
- Logging is installed on hosted control planes.
- Logs originate from a Red Hat-supported product that is installed on the same cluster as the Red Hat OpenShift Logging Operator. For example:
  - OpenShift Virtualization
Prerequisites
- You have administrator permissions.
- You have installed the OpenShift CLI (oc).
- You have installed the Red Hat OpenShift Logging Operator.
Procedure
Modify the ClusterLogForwarder CR to add configuration for the http receiver input:

Example ClusterLogForwarder CR
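A minimal sketch of an http receiver configuration with the callouts described below; the placeholder names, the output reference, and the exact field layout are illustrative, so verify them with oc explain clusterlogforwarders.spec.inputs.receiver:

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <clusterlogforwarder_name>   # 1
  namespace: <namespace>
spec:
  serviceAccount:
    name: <service_account_name>
  inputs:
  - name: http-receiver              # 2
    type: receiver
    receiver:
      type: http                     # 3
      port: 8443                     # 4
      http:
        format: kubeAPIAudit         # 5
  pipelines:                         # 6
  - name: http-pipeline
    inputRefs:
    - http-receiver
    outputRefs:
    - <output_name>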
1. Specify a name for the ClusterLogForwarder CR.
2. Specify a name for your input receiver.
3. Specify the input receiver type as http.
4. Optional: Specify the port that the input receiver listens on. This must be a value between 1024 and 65535. The default value is 8443 if this is not specified.
5. Currently, only the kube-apiserver webhook format is supported for http input receivers.
6. Configure a pipeline for your input receiver.
Apply the changes to the ClusterLogForwarder CR by running the following command:

$ oc apply -f <filename>.yaml

Verify that the collector is listening on the service that has a name in the <clusterlogforwarder_resource_name>-<input_name> format by running the following command:

$ oc get svc

Example output

NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
collector                 ClusterIP   172.30.85.239    <none>        24231/TCP   3m6s
collector-http-receiver   ClusterIP   172.30.205.160   <none>        8443/TCP    3m6s

In this example, the service name is collector-http-receiver.
Verification
Extract the certificate authority (CA) certificate file by running the following command:

$ oc extract cm/openshift-service-ca.crt -n <namespace>

Note: If the CA in the cluster where the collectors are running changes, you must extract the CA certificate file again.

As an example, use the curl command to send logs by running the following command:

$ curl --cacert <openshift_service_ca.crt> https://collector-http-receiver.<namespace>.svc:8443 -XPOST -d '{"<prefix>":"<message>"}'

Replace <openshift_service_ca.crt> with the extracted CA certificate file.
2.3.2. Configuring the collector to listen for connections as a syslog server
You can configure your log collector to collect journal format infrastructure logs by specifying syslog as a receiver input in the ClusterLogForwarder custom resource (CR).
Syslog receiver input is only supported for the following scenarios:
- Logging is installed on hosted control planes.
- Logs originate from a Red Hat-supported product that is installed on the same cluster as the Red Hat OpenShift Logging Operator. For example:
  - Red Hat OpenStack Services on OpenShift (RHOSO)
  - OpenShift Virtualization
Prerequisites
- You have administrator permissions.
- You have installed the OpenShift CLI (oc).
- You have installed the Red Hat OpenShift Logging Operator.
Procedure
Grant the collect-infrastructure-logs cluster role to the service account by running the following command:

Example binding command

$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z logcollector

Modify the ClusterLogForwarder CR to add configuration for the syslog receiver input:

Example ClusterLogForwarder CR
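A minimal sketch of a syslog receiver configuration with the callouts described below; the placeholder names and the output reference are illustrative, and the TLS stanza structure is an assumption, so verify it with oc explain clusterlogforwarders.spec.inputs.receiver.tls:

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <clusterlogforwarder_name>
  namespace: openshift-logging
spec:
  serviceAccount:
    name: logcollector               # 1 2
  inputs:
  - name: syslog-receiver            # 3
    type: receiver
    receiver:
      type: syslog                   # 4
      port: 10514                    # 5
      tls:                           # 6
        ca:
          key: ca-bundle.crt
          secretName: <secret_name>
  pipelines:                         # 7
  - name: syslog-pipeline
    inputRefs:
    - syslog-receiver
    outputRefs:
    - <output_name>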
1, 2. Use the service account that you granted the collect-infrastructure-logs permission in the previous step.
3. Specify a name for your input receiver.
4. Specify the input receiver type as syslog.
5. Optional: Specify the port that the input receiver listens on. This must be a value between 1024 and 65535.
6. If TLS configuration is not set, the default certificates are used. For more information, run the command oc explain clusterlogforwarders.spec.inputs.receiver.tls.
7. Configure a pipeline for your input receiver.
Apply the changes to the ClusterLogForwarder CR by running the following command:

$ oc apply -f <filename>.yaml

Verify that the collector is listening on the service that has a name in the <clusterlogforwarder_resource_name>-<input_name> format by running the following command:

$ oc get svc

Example output

NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
collector                   ClusterIP   172.30.85.239    <none>        24231/TCP   33m
collector-syslog-receiver   ClusterIP   172.30.216.142   <none>        10514/TCP   2m20s

In this example output, the service name is collector-syslog-receiver.
Verification
Extract the certificate authority (CA) certificate file by running the following command:

$ oc extract cm/openshift-service-ca.crt -n <namespace>

Note: If the CA in the cluster where the collectors are running changes, you must extract the CA certificate file again.

As an example, use the curl command to send logs by running the following command:

$ curl --cacert <openshift_service_ca.crt> collector-syslog-receiver.<namespace>.svc:10514 "test message"

Replace <openshift_service_ca.crt> with the extracted CA certificate file.
2.4. Configuring pod rollout strategy
You can configure the collector pod rollout strategy so that requests to the API server are minimized during upgrades and restarts.
Unlike in previous releases, kube-apiserver caching is always enabled.
Prerequisites
- You have administrator permissions.
- You installed the OpenShift CLI (oc).
- You installed and configured the Red Hat OpenShift Logging Operator.
- You have created a ClusterLogForwarder custom resource (CR).
Procedure
Update the ClusterLogForwarder CR to set the maxUnavailable field. The <value> for this field can be a number or a percentage. If you do not specify a value, the default value of 100% is used.
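A minimal sketch, assuming the maxUnavailable setting sits directly under spec.collector; this placement is an assumption, so confirm the exact field path with oc explain clusterlogforwarders.spec.collector:

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <clf_name>
  namespace: openshift-logging
spec:
  collector:
    # Assumed field placement; <value> is a number, for example 2, or a percentage, for example 50%
    maxUnavailable: <value>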
Apply the ClusterLogForwarder CR by running the following command:

$ oc apply -f <filename>.yaml
2.5. Network policies to override restrictive network in a cluster
Red Hat OpenShift Logging Operator optionally provides permissive NetworkPolicy resources to override any restrictive network policies present in an OpenShift Container Platform cluster.
The NetworkPolicy resource ensures that all ingress and egress traffic is allowed even when the following restrictions are in place:
- A restrictive default NetworkPolicy resource has been defined in the cluster.
- An AdminNetworkPolicy configuration that limits pod communications is applied.
- Namespace-level network restrictions are defined.
Red Hat OpenShift Logging Operator provides permissive NetworkPolicy resources for the collector and the LogFileMetricExporter.
2.5.1. Collector network policy
Red Hat OpenShift Logging Operator creates a NetworkPolicy resource for the collector pods when a ClusterLogForwarder resource is deployed if you specify a network policy rule set in the ClusterLogForwarder custom resource (CR). NetworkPolicy resources are automatically created, updated, and removed along with the component deployment lifecycle.
You can specify the network policy rule set by defining the networkPolicy.ruleSet field in the collector specification. The collector supports the following network policy rule set types:
- AllowAllIngressEgress: Allows all ingress and egress traffic.
- RestrictIngressEgress: Restricts traffic to specific ports based on the configured inputs, outputs, and the metrics port.
If you do not define a spec.collector.networkPolicy field, the Operator will not create a NetworkPolicy resource for the collector. Without a NetworkPolicy resource, logging components might not function if a cluster-wide AdminNetworkPolicy restricts traffic or other restrictive network policies are in place.
2.5.2. Configuring network policy rule set for a collector
Define a network policy rule set so that Red Hat OpenShift Logging Operator creates a NetworkPolicy resource for the collector pods when the ClusterLogForwarder resource is deployed. The NetworkPolicy resource overrides network restrictions to ensure that the permitted ingress and egress traffic for the collector pods is allowed to flow.
Prerequisites
- You have administrator permissions.
- You have installed the OpenShift CLI (oc).
- You have installed the Red Hat OpenShift Logging Operator.
- You have created a ClusterLogForwarder custom resource (CR).
Procedure
Update the ClusterLogForwarder CR. The value for the ruleSet field can be either RestrictIngressEgress or AllowAllIngressEgress.
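A minimal sketch, using the spec.collector.networkPolicy field described earlier; the CR name is a placeholder and RestrictIngressEgress is shown as an example value:

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <clf_name>
  namespace: openshift-logging
spec:
  collector:
    networkPolicy:
      ruleSet: RestrictIngressEgress   # or AllowAllIngressEgress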
Apply the configuration by running the following command:

$ oc apply -f <filename>.yaml
2.5.3. LogFileMetricExporter network policy
Red Hat OpenShift Logging Operator creates a NetworkPolicy resource for the metric exporter pods when a LogFileMetricExporter resource is deployed if you specify a network policy rule set in the LogFileMetricExporter custom resource (CR). NetworkPolicy resources are automatically created, updated, and removed along with the component deployment lifecycle.
The LogFileMetricExporter resource supports the following network policy rule set types:
- AllowIngressMetrics: Allows ingress traffic only on the metrics port; denies all egress traffic.
- AllowAllIngressEgress: Allows all ingress and egress traffic.
If you do not define a spec.networkPolicy field, the Operator will not create a NetworkPolicy resource for the LogFileMetricExporter. Without a NetworkPolicy resource, logging components might not function if a cluster-wide AdminNetworkPolicy restricts traffic or other restrictive network policies are in place.
2.5.4. Configuring network policy rule set for LogFileMetricExporter
Define a network policy rule set so that Red Hat OpenShift Logging Operator creates a NetworkPolicy resource for the LogFileMetricExporter pods when the LogFileMetricExporter resource is deployed. The NetworkPolicy resource overrides network restrictions to ensure that the permitted ingress and egress traffic for the LogFileMetricExporter pods is allowed to flow.
Prerequisites
- You have administrator permissions.
- You have installed the OpenShift CLI (oc).
- You have installed the Red Hat OpenShift Logging Operator.
- You have created a LogFileMetricExporter CR.
Procedure
Update the LogFileMetricExporter CR. The value for the ruleSet field can be either AllowAllIngressEgress or AllowIngressMetrics.
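A minimal sketch, using the spec.networkPolicy field described earlier; the apiVersion and the name instance follow the earlier LogFileMetricExporter example, and AllowIngressMetrics is shown as an example value:

apiVersion: logging.openshift.io/v1alpha1
kind: LogFileMetricExporter
metadata:
  name: instance
  namespace: openshift-logging
spec:
  networkPolicy:
    ruleSet: AllowIngressMetrics   # or AllowAllIngressEgress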
Apply the configuration by running the following command:

$ oc apply -f <filename>.yaml
2.5.5. Creating an AdminNetworkPolicy rule for resources managed by Red Hat OpenShift Logging Operator
To ensure that the pods running resources managed by Red Hat OpenShift Logging Operator can communicate when an AdminNetworkPolicy configuration is blocking traffic, create an AdminNetworkPolicy rule that delegates to a NetworkPolicy definition.
OpenShift Container Platform network policy precedence:
- AdminNetworkPolicy: Cluster-admin controlled, highest priority.
- BaselineAdminNetworkPolicy: Default fallback rules.
- NetworkPolicy: Namespace-level policies. This is where collector policies reside.
Prerequisites
- You have administrator permissions.
- You installed the OpenShift CLI (oc).
Procedure
Create an AdminNetworkPolicy rule, as shown in the sketch after the following list. To apply this network policy to all the resources in a namespace, remove the app.kubernetes.io/name, app.kubernetes.io/instance, and app.kubernetes.io/component fields. To apply the admin network policy to specific resources, define the fields as follows:
- Replace <name> with the name of the collector or the LogFileMetricExporter resource.
- Replace <instance> with the instance name of the collector or the LogFileMetricExporter resource.
- Replace <component> with the value collector or logfilesmetricexporter.
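A minimal sketch of such a rule, assuming the openshift-logging namespace, an illustrative name and priority, and Pass actions that delegate evaluation to the namespace NetworkPolicy resources:

apiVersion: policy.networking.k8s.io/v1alpha1
kind: AdminNetworkPolicy
metadata:
  name: pass-logging-traffic        # illustrative name
spec:
  priority: 10                      # illustrative priority; lower values take precedence
  subject:
    pods:
      namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: openshift-logging
      podSelector:
        matchLabels:
          app.kubernetes.io/name: <name>
          app.kubernetes.io/instance: <instance>
          app.kubernetes.io/component: <component>
  ingress:
  - name: pass-ingress
    action: Pass                    # delegate to namespace-level NetworkPolicy resources
    from:
    - namespaces: {}
  egress:
  - name: pass-egress
    action: Pass
    to:
    - namespaces: {}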
Apply the AdminNetworkPolicy rule by running the following command:

$ oc apply -f <filename>.yaml