Chapter 4. Quick start
OpenShift Logging supports two data models:
- ViaQ (General Availability)
- OpenTelemetry (Technology Preview)
You can select either of these data models based on your requirements by configuring the lokiStack.dataModel field in the ClusterLogForwarder. ViaQ is the default data model when forwarding logs to LokiStack.
In future releases of OpenShift Logging, the default data model will change from ViaQ to OpenTelemetry.
4.1. Quick start with ViaQ
To use the default ViaQ data model, follow these steps:
Prerequisites
- You have access to an OpenShift Container Platform cluster with cluster-admin permissions.
- You installed the OpenShift CLI (oc).
- You have access to a supported object store. For example, AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation.
Procedure
- Install the Red Hat OpenShift Logging Operator, Loki Operator, and Cluster Observability Operator (COO) from OperatorHub.
- Create a LokiStack custom resource (CR) in the openshift-logging namespace. An example LokiStack CR is shown after this procedure.
  Note: Ensure that the logging-loki-s3 secret is created beforehand. The contents of this secret vary depending on the object storage in use. For more information, see Secrets and TLS Configuration.
- Create a service account for the collector:
  $ oc create sa collector -n openshift-logging
- Allow the collector’s service account to write data to the LokiStack CR:
  $ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector -n openshift-logging
  Note: The ClusterRole resource is created automatically during the Cluster Logging Operator installation and does not need to be created manually.
- Allow the collector’s service account to collect logs by running the following commands:
  $ oc adm policy add-cluster-role-to-user collect-application-logs -z collector -n openshift-logging
  $ oc adm policy add-cluster-role-to-user collect-audit-logs -z collector -n openshift-logging
  $ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector -n openshift-logging
  Note: The example binds the collector to all three roles (application, infrastructure, and audit), but by default, only application and infrastructure logs are collected. To collect audit logs, update your ClusterLogForwarder configuration to include them. Assign roles based on the specific log types required for your environment.
- Create a UIPlugin CR to enable the Log section in the Observe tab. An example UIPlugin CR is shown after this procedure.
- Create a ClusterLogForwarder CR to configure log forwarding. An example ClusterLogForwarder CR is shown after this procedure.
  Note: The dataModel field is optional and left unset (dataModel: "") by default. This allows the Cluster Logging Operator (CLO) to select a data model automatically. Currently, the CLO defaults to the ViaQ model when the field is unset, but this will change in future releases. Specifying dataModel: ViaQ ensures that the configuration remains compatible if the default changes.
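The following LokiStack CR is a minimal example for the storage step of this procedure. It assumes an S3-compatible object store, and the LokiStack name (logging-loki), size, schema effective date, and storage class are illustrative values that you must adapt to your environment:

apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  managementState: Managed
  size: 1x.extra-small              # illustrative size; choose a size that matches your ingestion rate
  storage:
    schemas:
    - effectiveDate: "2024-10-01"   # illustrative date; set a date appropriate for your cluster
      version: v13
    secret:
      name: logging-loki-s3         # the object storage secret created beforehand
      type: s3                      # assumes an S3-compatible store; change for other object storage types
  storageClassName: gp3-csi         # illustrative; use a storage class that exists in your cluster
  tenants:
    mode: openshift-logging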
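The following UIPlugin CR is a minimal example for enabling the Log section in the Observe tab. The plugin name logging is illustrative, and the LokiStack name must match the LokiStack CR created earlier:

apiVersion: observability.openshift.io/v1alpha1
kind: UIPlugin
metadata:
  name: logging
spec:
  type: Logging
  logging:
    lokiStack:
      name: logging-loki            # must match the name of your LokiStack CR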
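The following ClusterLogForwarder CR is a minimal example for the log forwarding step. It forwards application and infrastructure logs to the LokiStack created earlier and authenticates with the collector service account; the output and pipeline names are illustrative, and dataModel: ViaQ is set explicitly as described in the note above:

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  serviceAccount:
    name: collector                  # service account created earlier in this procedure
  outputs:
  - name: default-lokistack          # illustrative output name
    type: lokiStack
    lokiStack:
      authentication:
        token:
          from: serviceAccount
      target:
        name: logging-loki
        namespace: openshift-logging
      dataModel: ViaQ                # optional; set explicitly so a future default change does not affect this configuration
    tls:
      ca:
        key: service-ca.crt
        configMapName: openshift-service-ca.crt
  pipelines:
  - name: default-logstore           # illustrative pipeline name
    inputRefs:
    - application
    - infrastructure
    outputRefs:
    - default-lokistack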
Verification
- Verify that logs are visible in the Log section of the Observe tab in the OpenShift Container Platform web console.
4.2. Quick start with OpenTelemetry
The OpenTelemetry Protocol (OTLP) output log forwarder is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
To configure OTLP ingestion and enable the OpenTelemetry data model, follow these steps:
Prerequisites
- Cluster administrator permissions
Procedure
- Install the Red Hat OpenShift Logging Operator, Loki Operator, and Cluster Observability Operator (COO) from OperatorHub.
- Create a LokiStack custom resource (CR) in the openshift-logging namespace. An example LokiStack CR is shown after this procedure.
  Note: Ensure that the logging-loki-s3 secret is created beforehand. The contents of this secret vary depending on the object storage in use. For more information, see "Secrets and TLS Configuration".
- Create a service account for the collector:
  $ oc create sa collector -n openshift-logging
- Allow the collector’s service account to write data to the LokiStack CR:
  $ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector
  Note: The ClusterRole resource is created automatically during the Cluster Logging Operator installation and does not need to be created manually.
- Allow the collector’s service account to collect logs:
  $ oc project openshift-logging
  $ oc adm policy add-cluster-role-to-user collect-application-logs -z collector
  $ oc adm policy add-cluster-role-to-user collect-audit-logs -z collector
  $ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector
  Note: The example binds the collector to all three roles (application, infrastructure, and audit). By default, only application and infrastructure logs are collected. To collect audit logs, update your ClusterLogForwarder configuration to include them. Assign roles based on the specific log types required for your environment.
- Create a UIPlugin CR to enable the Log section in the Observe tab. An example UIPlugin CR is shown after this procedure.
- Create a ClusterLogForwarder CR to configure log forwarding. An example ClusterLogForwarder CR with the OpenTelemetry data model is shown after this procedure.
  Note: You cannot use lokiStack.labelKeys when dataModel is Otel. To achieve similar functionality when dataModel is Otel, refer to "Configuring LokiStack for OTLP data ingestion".
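The following LokiStack CR is a minimal example for the storage step of this procedure and matches the one used in the ViaQ quick start. It assumes an S3-compatible object store; the LokiStack name, size, schema effective date, and storage class are illustrative and must be adapted to your environment:

apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  managementState: Managed
  size: 1x.extra-small              # illustrative size; choose a size that matches your ingestion rate
  storage:
    schemas:
    - effectiveDate: "2024-10-01"   # illustrative date; set a date appropriate for your cluster
      version: v13
    secret:
      name: logging-loki-s3         # the object storage secret created beforehand
      type: s3                      # assumes an S3-compatible store; change for other object storage types
  storageClassName: gp3-csi         # illustrative; use a storage class that exists in your cluster
  tenants:
    mode: openshift-logging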
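The following UIPlugin CR is a minimal example, identical to the one in the ViaQ quick start. The plugin name logging is illustrative, and the LokiStack name must match the LokiStack CR created earlier:

apiVersion: observability.openshift.io/v1alpha1
kind: UIPlugin
metadata:
  name: logging
spec:
  type: Logging
  logging:
    lokiStack:
      name: logging-loki            # must match the name of your LokiStack CR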
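The following ClusterLogForwarder CR is a minimal example that enables the OpenTelemetry data model by setting dataModel: Otel on the lokiStack output. The output and pipeline names are illustrative, and the collector service account and LokiStack name follow the earlier steps of this procedure:

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  serviceAccount:
    name: collector                  # service account created earlier in this procedure
  outputs:
  - name: default-lokistack          # illustrative output name
    type: lokiStack
    lokiStack:
      target:
        name: logging-loki
        namespace: openshift-logging
      dataModel: Otel                # selects the OpenTelemetry data model (Technology Preview)
      authentication:
        token:
          from: serviceAccount
    tls:
      ca:
        key: service-ca.crt
        configMapName: openshift-service-ca.crt
  pipelines:
  - name: default-pipeline           # illustrative pipeline name
    inputRefs:
    - application
    - infrastructure
    outputRefs:
    - default-lokistack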
Verification
- Verify that OTLP is functioning correctly by going to Observe → OpenShift Logging → LokiStack → Writes in the OpenShift web console, and checking the Distributor - Structured Metadata panel.