Chapter 4. About Logging
As a cluster administrator, you can deploy logging on a Red Hat OpenShift Service on AWS cluster, and use it to collect and aggregate node system audit logs, application container logs, and infrastructure logs. You can forward logs to your chosen log outputs, including on-cluster, Red Hat managed log storage. You can also visualize your log data in the Red Hat OpenShift Service on AWS web console, or the Kibana web console, depending on your deployed log storage solution.
The Kibana web console is now deprecated and is planned to be removed in a future logging release.
Red Hat OpenShift Service on AWS cluster administrators can deploy logging by using Operators. For information, see xref:../../observability/logging/cluster-logging-deploying.adoc#cluster-logging-deploying[Installing logging].
The Operators are responsible for deploying, upgrading, and maintaining logging. After the Operators are installed, you can create a `ClusterLogging` custom resource (CR) to schedule logging pods and other resources necessary to support logging. You can also create a `ClusterLogForwarder` CR to specify which logs are collected, how they are transformed, and where they are forwarded.
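For example, a minimal `ClusterLogging` CR might look like the following sketch. The CR is conventionally named `instance` in the `openshift-logging` namespace; the LokiStack name `logging-loki` is an assumption and must match a `LokiStack` CR that you have already created:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance                  # the Operator expects this name
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: lokistack               # or "elasticsearch" for the legacy store
    lokistack:
      name: logging-loki          # assumed name of an existing LokiStack CR
  collection:
    type: vector                  # or "fluentd" for the legacy collector
----

The Red Hat OpenShift Logging Operator reconciles this CR into the collector daemon set and the other resources that support logging.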
Because the internal Red Hat OpenShift Service on AWS Elasticsearch log store does not provide secure storage for audit logs, audit logs are not stored in the internal Elasticsearch instance by default. If you want to send the audit logs to the default internal Elasticsearch log store, for example to view the audit logs in Kibana, you must use the Log Forwarding API as described in xref:../../observability/logging/log_storage/logging-config-es-store.adoc#cluster-logging-elasticsearch-audit_logging-config-es-store[Forward audit logs to the log store].
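As an illustration, a `ClusterLogForwarder` pipeline that sends audit logs to the default log store might look like the following sketch; the pipeline name `audit-to-default` is an arbitrary choice:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  pipelines:
  - name: audit-to-default        # arbitrary pipeline name
    inputRefs:
    - audit                       # built-in input for node audit logs
    outputRefs:
    - default                     # the internal, Operator-managed log store
----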
4.1. Logging architecture
The major components of the logging are:
- Collector
The collector is a daemon set that deploys pods on each Red Hat OpenShift Service on AWS node. It collects log data from each node, transforms the data, and forwards it to configured outputs. You can use the Vector collector or the legacy Fluentd collector.
Note: Fluentd is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to Fluentd, you can use Vector instead.
- Log store
The log store stores log data for analysis and is the default output for the log forwarder. You can use the default LokiStack log store, the legacy Elasticsearch log store, or forward logs to additional external log stores.
Note: The Logging 5.9 release does not contain an updated version of the OpenShift Elasticsearch Operator. If you currently use the OpenShift Elasticsearch Operator released with Logging 5.8, it will continue to work with Logging until the EOL of Logging 5.8. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. For more information on the Logging lifecycle dates, see Platform Agnostic Operators.
- Visualization
You can use a UI component to view a visual representation of your log data. The UI provides a graphical interface to search, query, and view stored logs. The Red Hat OpenShift Service on AWS web console UI is provided by enabling the Red Hat OpenShift Service on AWS console plugin; a sketch of enabling the plugin follows this list.
Note: The Kibana web console is now deprecated and is planned to be removed in a future logging release.
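As a sketch of the visualization setup, the console plugin can be enabled through the Console operator configuration. The plugin name `logging-view-plugin` is an assumption based on recent logging releases; verify the plugin name for your version:

[source,yaml]
----
apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  name: cluster                   # the cluster-wide Console operator config
spec:
  plugins:
  - logging-view-plugin           # assumed plugin name; check your release
----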
Logging collects container logs and node logs. These are categorized into types:
- Application logs
Container logs generated by user applications running in the cluster, except infrastructure container applications.
- Infrastructure logs
Container logs generated by infrastructure namespaces: `openshift*`, `kube*`, or `default`, as well as journald messages from nodes.
- Audit logs
Logs generated by auditd, the node audit system, which are stored in the /var/log/audit/audit.log file, and logs from the `auditd`, `kube-apiserver`, and `openshift-apiserver` services, as well as the `ovn` project if enabled.
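The following `ClusterLogForwarder` sketch selects all three log types and forwards them to an external store. The output name `external-es` and the URL are hypothetical placeholders:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: external-es                             # hypothetical output name
    type: elasticsearch
    url: https://elasticsearch.example.com:9200   # placeholder URL
  pipelines:
  - name: all-log-types
    inputRefs:                    # the three built-in log types
    - application
    - infrastructure
    - audit
    outputRefs:
    - external-es
----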
Additional resources
- xref:../../observability/logging/log_visualization/log-visualization-ocp-console.adoc#log-visualization-ocp-console[Log visualization with the web console]
4.2. About deploying logging
Administrators can deploy logging by using the Red Hat OpenShift Service on AWS web console or the OpenShift CLI (`oc`) to install the logging Operators. The Operators are responsible for deploying, upgrading, and maintaining logging.
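For example, when installing through the CLI, the Red Hat OpenShift Logging Operator is typically subscribed with a `Subscription` object similar to the following sketch. It assumes the `openshift-logging` namespace and an `OperatorGroup` already exist, and the `stable` channel name is an assumption; check the channels available in your catalog:

[source,yaml]
----
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-logging
  namespace: openshift-logging   # assumes this namespace already exists
spec:
  channel: stable                # assumed channel; verify for your catalog
  name: cluster-logging
  source: redhat-operators
  sourceNamespace: openshift-marketplace
----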
Administrators and application developers can view the logs of the projects for which they have view access.
4.2.1. Logging custom resources
You can configure your logging deployment with custom resource (CR) YAML files implemented by each Operator.
Red Hat OpenShift Logging Operator:
- `ClusterLogging` (CL) - After the Operators are installed, you create a `ClusterLogging` custom resource (CR) to schedule logging pods and other resources necessary to support logging. The `ClusterLogging` CR deploys the collector and forwarder, which currently are both implemented by a daemon set running on each node. The Red Hat OpenShift Logging Operator watches the `ClusterLogging` CR and adjusts the logging deployment accordingly.
- `ClusterLogForwarder` (CLF) - Generates collector configuration to forward logs per user configuration.
Loki Operator:
- `LokiStack` - Controls the Loki cluster as log store and the web proxy with Red Hat OpenShift Service on AWS authentication integration to enforce multi-tenancy.
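A minimal `LokiStack` CR might look like the following sketch; the size, schema settings, storage secret name, and storage class are illustrative assumptions that depend on your environment:

[source,yaml]
----
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  size: 1x.small                 # assumed sizing; choose per your scale
  storage:
    schemas:
    - version: v13               # assumed schema version
      effectiveDate: "2024-01-01"
    secret:
      name: logging-loki-s3      # assumed secret with object storage credentials
      type: s3
  storageClassName: gp3-csi      # assumed storage class
  tenants:
    mode: openshift-logging      # enables OpenShift multi-tenancy
----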
OpenShift Elasticsearch Operator:
These CRs are generated and managed by the OpenShift Elasticsearch Operator. Manual changes cannot be made without being overwritten by the Operator.
- `ElasticSearch` - Configures and deploys an Elasticsearch instance as the default log store.
- `Kibana` - Configures and deploys a Kibana instance to search, query, and view logs.
4.2.2. Logging requirements
Hosting your own logging stack requires a large amount of compute resources and storage, which might depend on your cloud service quota. The compute resource requirements can start at 48 GB or more, while the storage requirement can be as large as 1600 GB or more. The logging stack runs on your worker nodes, which reduces your available workload resources. With these considerations, hosting your own logging stack increases your cluster operating costs.
For information, see xref:../../observability/logging/log_collection_forwarding/log-forwarding.adoc#about-log-collection_log-forwarding[About log collection and forwarding].
4.2.3. About JSON Red Hat OpenShift Service on AWS Logging
You can use JSON logging to configure the Log Forwarding API to parse JSON strings into a structured object. You can perform the following tasks:
- Parse JSON logs
- Configure JSON log data for Elasticsearch
- Forward JSON logs to the Elasticsearch log store
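As a sketch, JSON parsing is enabled per pipeline with the `parse: json` field. The `structuredTypeKey` shown here for the default Elasticsearch store is one documented option; the exact fields you need depend on your logging version:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputDefaults:
    elasticsearch:
      structuredTypeKey: kubernetes.namespace_name  # index structured logs per namespace
  pipelines:
  - name: parse-app-json
    inputRefs:
    - application
    outputRefs:
    - default
    parse: json                  # parse JSON log strings into structured objects
----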
4.2.4. About collecting and storing Kubernetes events
The Red Hat OpenShift Service on AWS Event Router is a pod that watches Kubernetes events and logs them for collection by Red Hat OpenShift Service on AWS Logging. You must manually deploy the Event Router.
For information, see xref:../../observability/logging/log_collection_forwarding/cluster-logging-eventrouter.adoc#cluster-logging-eventrouter[About collecting and storing Kubernetes events].
4.2.5. About troubleshooting Red Hat OpenShift Service on AWS Logging
You can troubleshoot logging issues by performing the following tasks:
- Viewing logging status
- Viewing the status of the log store
- Understanding logging alerts
- Collecting logging data for Red Hat Support
- Troubleshooting for critical alerts
4.2.6. About exporting fields
The logging system exports fields. Exported fields are present in the log records and are available for searching from Elasticsearch and Kibana.
For information, see xref:../../observability/logging/cluster-logging-exported-fields.adoc#cluster-logging-exported-fields[About exporting fields].
4.2.7. About event routing
The Event Router is a pod that watches Red Hat OpenShift Service on AWS events so they can be collected by logging. The Event Router collects events from all projects and writes them to `STDOUT`. Fluentd collects those events and forwards them into the Red Hat OpenShift Service on AWS Elasticsearch instance. Elasticsearch indexes the events to the `infra` index.
You must manually deploy the Event Router.
For information, see xref:../../observability/logging/log_collection_forwarding/cluster-logging-eventrouter.adoc#cluster-logging-eventrouter[Collecting and storing Kubernetes events].