Chapter 10. Log storage
10.1. About log storage
				You can use an internal Loki or Elasticsearch log store on your cluster for storing logs, or you can use a ClusterLogForwarder custom resource (CR) to forward logs to an external store.
			
10.1.1. Log storage types
Loki is a horizontally scalable, highly available, multi-tenant log aggregation system offered as an alternative to Elasticsearch as a log store for the logging.
Elasticsearch indexes incoming log records completely during ingestion. Loki only indexes a few fixed labels during ingestion and defers more complex parsing until after the logs have been stored. This means Loki can collect logs more quickly.
10.1.1.1. About the Elasticsearch log store
The logging Elasticsearch instance is optimized and tested for short-term storage, approximately seven days. If you want to retain your logs over a longer term, it is recommended that you move the data to a third-party storage system.
						Elasticsearch organizes the log data from Fluentd into datastores, or indices, then subdivides each index into multiple pieces called shards, which it spreads across a set of Elasticsearch nodes in an Elasticsearch cluster. You can configure Elasticsearch to make copies of the shards, called replicas, which Elasticsearch also spreads across the Elasticsearch nodes. The ClusterLogging custom resource (CR) allows you to specify how the shards are replicated to provide data redundancy and resilience to failure. You can also specify how long the different types of logs are retained using a retention policy in the ClusterLogging CR.
					
The number of primary shards for the index templates is equal to the number of Elasticsearch data nodes.
						The Red Hat OpenShift Logging Operator and companion OpenShift Elasticsearch Operator ensure that each Elasticsearch node is deployed using a unique deployment that includes its own storage volume. You can use a ClusterLogging custom resource (CR) to increase the number of Elasticsearch nodes, as needed. See the Elasticsearch documentation for considerations involved in configuring storage.
					
A highly-available Elasticsearch environment requires at least three Elasticsearch nodes, each on a different host.
Role-based access control (RBAC) applied on the Elasticsearch indices enables controlled access to the logs for developers. Administrators can access all logs, and developers can access only the logs in their projects.
10.1.2. Querying log stores
You can query Loki by using the LogQL log query language.
10.2. Installing log storage
				You can use the OpenShift CLI (oc) or the OpenShift Container Platform web console to deploy a log store on your OpenShift Container Platform cluster.
			
The OpenShift Elasticsearch Operator is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator.
10.2.1. Deploying a Loki log store
You can use the Loki Operator to deploy an internal Loki log store on your OpenShift Container Platform cluster. After you install the Loki Operator, you must configure Loki object storage by creating a secret, and then create a LokiStack custom resource (CR).
				
10.2.1.1. Deployment Sizing
Sizing for Loki follows the format of <N>x.<size>, where the value <N> is the number of instances and <size> specifies performance capabilities.
					
1x.extra-small is for demo purposes only, and is not supported.
| | 1x.extra-small | 1x.small | 1x.medium |
|---|---|---|---|
| Data transfer | Demo use only. | 500GB/day | 2TB/day | 
| Queries per second (QPS) | Demo use only. | 25-50 QPS at 200ms | 25-75 QPS at 200ms | 
| Replication factor | None | 2 | 3 | 
| Total CPU requests | 5 vCPUs | 36 vCPUs | 54 vCPUs | 
| Total Memory requests | 7.5Gi | 63Gi | 139Gi | 
| Total Disk requests | 150Gi | 300Gi | 450Gi | 
10.2.1.1.1. Supported API Custom Resource Definitions
LokiStack development is ongoing; not all APIs are currently supported.
| CustomResourceDefinition (CRD) | ApiVersion | Support state | 
|---|---|---|
| LokiStack | lokistack.loki.grafana.com/v1 | Supported in 5.5 | 
| RulerConfig | rulerconfig.loki.grafana.com/v1beta1 | Technology Preview | 
| AlertingRule | alertingrule.loki.grafana.com/v1beta1 | Technology Preview | 
| RecordingRule | recordingrule.loki.grafana.com/v1beta1 | Technology Preview | 
Usage of the RulerConfig, AlertingRule, and RecordingRule custom resource definitions (CRDs) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
							
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
10.2.1.2. Installing the Loki Operator by using the OpenShift Container Platform web console
To install and configure logging on your OpenShift Container Platform cluster, additional Operators must be installed. This can be done from the Operator Hub within the web console.
OpenShift Container Platform Operators use custom resources (CR) to manage applications and their components. High-level configuration and settings are provided by the user within a CR. The Operator translates high-level directives into low-level actions, based on best practices embedded within the Operator’s logic. A custom resource definition (CRD) defines a CR and lists all the configurations available to users of the Operator. Installing an Operator creates the CRDs, which are then used to generate CRs.
Prerequisites
- You have access to a supported object store (AWS S3, Google Cloud Storage, Azure, Swift, Minio, OpenShift Data Foundation).
- You have administrator permissions.
- You have access to the OpenShift Container Platform web console.
Procedure
- 
In the OpenShift Container Platform web console Administrator perspective, go to Operators → OperatorHub.
- Type Loki Operator in the Filter by keyword field. Click Loki Operator in the list of available Operators, and then click Install. Important: The Community Loki Operator is not supported by Red Hat.
- Select stable or stable-x.y as the Update channel. Note: The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-x.y, where x.y represents the major and minor version of logging you have installed. For example, stable-5.7. The Loki Operator must be deployed to the global operator group namespace openshift-operators-redhat, so the Installation mode and Installed Namespace are already selected. If this namespace does not already exist, it is created for you.
- Select Enable operator-recommended cluster monitoring on this namespace. This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace.
- For Update approval select Automatic, then click Install. - If the approval strategy in the subscription is set to Automatic, the update process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual, you must manually approve pending updates. 
Verification
- 
Go to Operators → Installed Operators.
- Make sure the openshift-logging project is selected.
- In the Status column, verify that you see green checkmarks with InstallSucceeded and the text Up to date.
							An Operator might display a Failed status before the installation finishes. If the Operator install completes with an InstallSucceeded message, refresh the page.
						
10.2.1.3. Creating a secret for Loki object storage by using the web console
To configure Loki object storage, you must create a secret. You can create a secret by using the OpenShift Container Platform web console.
Prerequisites
- You have administrator permissions.
- You have access to the OpenShift Container Platform web console.
- You installed the Loki Operator.
Procedure
- 
Go to Workloads → Secrets in the Administrator perspective of the OpenShift Container Platform web console.
- From the Create drop-down list, select From YAML.
- Create a secret that uses the access_key_id and access_key_secret fields to specify your credentials and the bucketnames, endpoint, and region fields to define the object storage location. AWS is used in the following example.
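A minimal sketch of such a Secret object for AWS S3 follows; the name logging-loki-s3 is illustrative, and all values in angle brackets are placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: logging-loki-s3           # illustrative name; reference it from the LokiStack CR
  namespace: openshift-logging
stringData:
  access_key_id: <aws_access_key_id>
  access_key_secret: <aws_access_key_secret>
  bucketnames: <bucket_name>
  endpoint: https://s3.<aws_region>.amazonaws.com
  region: <aws_region>
```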
10.2.1.4. Creating a LokiStack custom resource by using the web console
						You can create a LokiStack custom resource (CR) by using the OpenShift Container Platform web console.
					
Prerequisites
- You have administrator permissions.
- You have access to the OpenShift Container Platform web console.
- You installed the Loki Operator.
Procedure
- 
Go to the Operators → Installed Operators page. Click the All instances tab.
- From the Create new drop-down list, select LokiStack.
- Select YAML view, and then use the following template to create a LokiStack CR (a sketch follows the callout descriptions below):
- Use the name logging-loki.
- 2
- Select your Loki deployment size.
- 3
- Specify the secret used for your log storage.
- 4
- Specify the corresponding storage type.
- 5
- Enter the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage. Available storage classes for your cluster can be listed by using the oc get storageclasses command.
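The following sketch shows what such a LokiStack CR might look like; the numbered comments correspond to the callouts above, and the secret name and storage class name are assumptions for illustration:

```yaml
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki                       # 1
  namespace: openshift-logging
spec:
  size: 1x.small                           # 2
  storage:
    secret:
      name: logging-loki-s3                # 3
      type: s3                             # 4
  storageClassName: <storage_class_name>   # 5
  tenants:
    mode: openshift-logging
```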
 
10.2.1.5. Installing the Loki Operator by using the CLI
To install and configure logging on your OpenShift Container Platform cluster, additional Operators must be installed. This can be done from the OpenShift Container Platform CLI.
OpenShift Container Platform Operators use custom resources (CR) to manage applications and their components. High-level configuration and settings are provided by the user within a CR. The Operator translates high-level directives into low-level actions, based on best practices embedded within the Operator’s logic. A custom resource definition (CRD) defines a CR and lists all the configurations available to users of the Operator. Installing an Operator creates the CRDs, which are then used to generate CRs.
Prerequisites
- You have administrator permissions.
- 
								You installed the OpenShift CLI (oc).
- You have access to a supported object store. For example: AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation.
Procedure
- Create a Subscription object (a sketch follows the callout descriptions below):
- You must specify the openshift-operators-redhat namespace.
- 3
- Specify stable or stable-5.<y> as the channel.
- 4
- Specify redhat-operators. If your OpenShift Container Platform cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object you created when you configured the Operator Lifecycle Manager (OLM).
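A minimal sketch of such a Subscription object follows; the field values reflect the callout descriptions above, and the sourceNamespace value is an assumption for a connected cluster:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: loki-operator
  namespace: openshift-operators-redhat   # must be the openshift-operators-redhat namespace
spec:
  channel: stable                          # stable or stable-5.<y>
  name: loki-operator
  source: redhat-operators                 # or your CatalogSource name on a disconnected cluster
  sourceNamespace: openshift-marketplace
```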
 
- Apply the Subscription object by running the following command:
  $ oc apply -f <filename>.yaml
10.2.1.6. Creating a secret for Loki object storage by using the CLI
						To configure Loki object storage, you must create a secret. You can do this by using the OpenShift CLI (oc).
					
Prerequisites
- You have administrator permissions.
- You installed the Loki Operator.
- 
								You installed the OpenShift CLI (oc).
Procedure
- Create a secret in the directory that contains your certificate and key files by running the following command:
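The exact keys depend on your object storage provider; as a sketch only, a generic secret bundling TLS files and credentials might be created as follows, where the secret name, file names, and literal keys are all assumptions:

```terminal
$ oc create secret generic -n openshift-logging <your_secret_name> \
  --from-file=tls.key=<your_key_file> \
  --from-file=tls.crt=<your_crt_file> \
  --from-file=ca-bundle.crt=<your_bundle_file> \
  --from-literal=username=<your_username> \
  --from-literal=password=<your_password>
```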
Use generic or opaque secrets for best results.
Verification
- Verify that a secret was created by running the following command:
  $ oc get secrets
10.2.1.7. Creating a LokiStack custom resource by using the CLI
						You can create a LokiStack custom resource (CR) by using the OpenShift CLI (oc).
					
Prerequisites
- You have administrator permissions.
- You installed the Loki Operator.
- 
								You installed the OpenShift CLI (oc).
Procedure
- Create a LokiStack CR (an example sketch follows the callout descriptions below):
- Supported size options for production instances of Loki are 1x.small and 1x.medium.
- 2
- Enter the name of your log store secret.
- 3
- Enter the type of your log store secret.
- 4
- Enter the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage. Available storage classes for your cluster can be listed by using oc get storageclasses.
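A sketch of such a LokiStack CR follows; the numbered comments correspond to the callouts above, and the secret name, secret type, and storage class are assumptions for illustration:

```yaml
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  size: 1x.small                           # 1
  storage:
    secret:
      name: logging-loki-s3                # 2
      type: s3                             # 3
  storageClassName: <storage_class_name>   # 4
  tenants:
    mode: openshift-logging
```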
 
- Apply the - LokiStackCR:- oc apply -f <filename>.yaml - $ oc apply -f <filename>.yaml- Copy to Clipboard Copied! - Toggle word wrap Toggle overflow 
Verification
- Verify the installation by listing the pods in the openshift-logging project by running the following command and observing the output:
  $ oc get pods -n openshift-logging
  Confirm that you see several pods for components of the logging.
10.2.2. Loki object storage
The Loki Operator supports AWS S3, as well as other S3 compatible object stores such as Minio and OpenShift Data Foundation. Azure, GCS, and Swift are also supported.
					The recommended nomenclature for Loki storage is logging-loki-<your_storage_provider>.
				
					The following table shows the type values within the LokiStack custom resource (CR) for each storage provider. For more information, see the section on your storage provider.
				
| Storage provider | Secret type value | 
|---|---|
| AWS | s3 | 
| Azure | azure | 
| Google Cloud | gcs | 
| Minio | s3 | 
| OpenShift Data Foundation | s3 | 
| Swift | swift | 
10.2.2.1. AWS storage
Prerequisites
- You installed the Loki Operator.
- 
								You installed the OpenShift CLI (oc).
- You created a bucket on AWS.
- You created an AWS IAM Policy and IAM User.
Procedure
- Create an object storage secret with the name logging-loki-aws by running the following command:
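A sketch of the command, mirroring the literal keys used for the other S3-compatible providers in this section; all values are placeholders:

```terminal
$ oc create secret generic logging-loki-aws \
  --from-literal=bucketnames="<bucket_name>" \
  --from-literal=endpoint="<aws_bucket_endpoint>" \
  --from-literal=access_key_id="<aws_access_key_id>" \
  --from-literal=access_key_secret="<aws_access_key_secret>" \
  --from-literal=region="<aws_region_of_your_bucket>"
```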
10.2.2.2. Azure storage
Prerequisites
- You installed the Loki Operator.
- 
								You installed the OpenShift CLI (oc).
- You created a bucket on Azure.
Procedure
- Create an object storage secret with the name logging-loki-azure by running the following command:
  $ oc create secret generic logging-loki-azure \
      --from-literal=container="<azure_container_name>" \
      --from-literal=environment="<azure_environment>" \
      --from-literal=account_name="<azure_account_name>" \
      --from-literal=account_key="<azure_account_key>"
- Supported environment values are AzureGlobal, AzureChinaCloud, AzureGermanCloud, or AzureUSGovernment.
 
10.2.2.3. Google Cloud Platform storage
Prerequisites
- You installed the Loki Operator.
- 
								You installed the OpenShift CLI (oc).
- You created a project on Google Cloud Platform (GCP).
- You created a bucket in the same project.
- You created a service account in the same project for GCP authentication.
Procedure
- 
								Copy the service account credentials received from GCP into a file called key.json.
- Create an object storage secret with the name logging-loki-gcs by running the following command:
  $ oc create secret generic logging-loki-gcs \
      --from-literal=bucketname="<bucket_name>" \
      --from-file=key.json="<path/to/key.json>"
10.2.2.4. Minio storage
Prerequisites
Procedure
- Create an object storage secret with the name logging-loki-minio by running the following command:
  $ oc create secret generic logging-loki-minio \
      --from-literal=bucketnames="<bucket_name>" \
      --from-literal=endpoint="<minio_bucket_endpoint>" \
      --from-literal=access_key_id="<minio_access_key_id>" \
      --from-literal=access_key_secret="<minio_access_key_secret>"
10.2.2.5. OpenShift Data Foundation storage
Prerequisites
- You installed the Loki Operator.
- 
								You installed the OpenShift CLI (oc).
- You deployed OpenShift Data Foundation.
- You configured your OpenShift Data Foundation cluster for object storage.
Procedure
- Create an ObjectBucketClaim custom resource in the openshift-logging namespace:
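A sketch of such an ObjectBucketClaim follows; the name loki-bucket-odf matches the ConfigMap and secret names used in the next steps, and the storage class name is an assumption for a default OpenShift Data Foundation installation:

```yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: loki-bucket-odf
  namespace: openshift-logging
spec:
  generateBucketName: loki-bucket-odf
  storageClassName: openshift-storage.noobaa.io   # assumption: default ODF object storage class
```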
- Get bucket properties from the associated ConfigMap object by running the following command:
  BUCKET_HOST=$(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_HOST}')
  BUCKET_NAME=$(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_NAME}')
  BUCKET_PORT=$(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_PORT}')
- Get bucket access key from the associated secret by running the following command:
  ACCESS_KEY_ID=$(oc get -n openshift-logging secret loki-bucket-odf -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d)
  SECRET_ACCESS_KEY=$(oc get -n openshift-logging secret loki-bucket-odf -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d)
- Create an object storage secret with the name logging-loki-odf by running the following command:
  $ oc create -n openshift-logging secret generic logging-loki-odf \
      --from-literal=access_key_id="<access_key_id>" \
      --from-literal=access_key_secret="<secret_access_key>" \
      --from-literal=bucketnames="<bucket_name>" \
      --from-literal=endpoint="https://<bucket_host>:<bucket_port>"
10.2.2.6. Swift storage
Prerequisites
- You installed the Loki Operator.
- 
								You installed the OpenShift CLI (oc).
- You created a bucket on Swift.
Procedure
- Create an object storage secret with the name logging-loki-swift by running the following command:
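A sketch of the command follows; the literal keys are assumptions about the Swift fields the Loki Operator expects, and all values are placeholders:

```terminal
$ oc create secret generic logging-loki-swift \
  --from-literal=auth_url="<swift_auth_url>" \
  --from-literal=username="<swift_usernameclaim>" \
  --from-literal=user_domain_name="<swift_user_domain_name>" \
  --from-literal=user_domain_id="<swift_user_domain_id>" \
  --from-literal=user_id="<swift_user_id>" \
  --from-literal=password="<swift_password>" \
  --from-literal=domain_id="<swift_domain_id>" \
  --from-literal=domain_name="<swift_domain_name>" \
  --from-literal=container_name="<swift_container_name>"
```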
- You can optionally provide project-specific data, region, or both by running the following command:
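As a sketch, the same command with the optional project and region keys appended might look like the following; the additional literal keys are assumptions:

```terminal
$ oc create secret generic logging-loki-swift \
  --from-literal=auth_url="<swift_auth_url>" \
  --from-literal=username="<swift_usernameclaim>" \
  --from-literal=password="<swift_password>" \
  --from-literal=container_name="<swift_container_name>" \
  --from-literal=project_id="<swift_project_id>" \
  --from-literal=project_name="<swift_project_name>" \
  --from-literal=project_domain_id="<swift_project_domain_id>" \
  --from-literal=project_domain_name="<swift_project_domain_name>" \
  --from-literal=region="<swift_region>"
```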
10.2.3. Deploying an Elasticsearch log store
You can use the OpenShift Elasticsearch Operator to deploy an internal Elasticsearch log store on your OpenShift Container Platform cluster.
The OpenShift Elasticsearch Operator is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator.
10.2.3.1. Storage considerations for Elasticsearch
A persistent volume is required for each Elasticsearch deployment configuration. On OpenShift Container Platform this is achieved using persistent volume claims (PVCs).
							If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: block in the LocalVolume object. Elasticsearch cannot use raw block volumes.
						
The OpenShift Elasticsearch Operator names the PVCs using the Elasticsearch resource name.
Fluentd ships any logs from systemd journal and /var/log/containers/*.log to Elasticsearch.
Elasticsearch requires sufficient memory to perform large merge operations. If it does not have enough memory, it becomes unresponsive. To avoid this problem, evaluate how much application log data you need, and allocate approximately double that amount of free storage capacity.
By default, when storage capacity is 85% full, Elasticsearch stops allocating new data to the node. At 90%, Elasticsearch attempts to relocate existing shards from that node to other nodes if possible. But if no nodes have free capacity below 85%, Elasticsearch effectively rejects creating new indices and becomes RED.
These low and high watermark values are Elasticsearch defaults in the current release. You can modify these default values. Although the alerts use the same default values, you cannot change these values in the alerts.
10.2.3.2. Installing the OpenShift Elasticsearch Operator by using the web console
The OpenShift Elasticsearch Operator creates and manages the Elasticsearch cluster used by OpenShift Logging.
Prerequisites
- Elasticsearch is a memory-intensive application. Each Elasticsearch node needs at least 16GB of memory for both memory requests and limits, unless you specify otherwise in the - ClusterLoggingcustom resource.- The initial set of OpenShift Container Platform nodes might not be large enough to support the Elasticsearch cluster. You must add additional nodes to the OpenShift Container Platform cluster to run with the recommended or higher memory, up to a maximum of 64GB for each Elasticsearch node. - Elasticsearch nodes can operate with a lower memory setting, though this is not recommended for production environments. 
- Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch node requires its own storage volume. Note- If you use a local volume for persistent storage, do not use a raw block volume, which is described with - volumeMode: blockin the- LocalVolumeobject. Elasticsearch cannot use raw block volumes.
Procedure
- 
In the OpenShift Container Platform web console, click Operators → OperatorHub.
- Click OpenShift Elasticsearch Operator from the list of available Operators, and click Install.
- Ensure that All namespaces on the cluster is selected under Installation mode.
- Ensure that openshift-operators-redhat is selected under Installed Namespace. - You must specify the openshift-operators-redhat namespace. The openshift-operators namespace might contain Community Operators, which are untrusted and could publish a metric with the same name as an OpenShift Container Platform metric, which would cause conflicts.
- Select Enable operator recommended cluster monitoring on this namespace. - This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace.
- Select stable-5.x as the Update channel.
- Select an Update approval strategy: - The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available.
- The Manual strategy requires a user with appropriate credentials to approve the Operator update.
 
- Click Install.
Verification
- 
Verify that the OpenShift Elasticsearch Operator is installed by switching to the Operators → Installed Operators page.
- Ensure that OpenShift Elasticsearch Operator is listed in all projects with a Status of Succeeded.
10.2.3.3. Installing the OpenShift Elasticsearch Operator by using the CLI
						You can use the OpenShift CLI (oc) to install the OpenShift Elasticsearch Operator.
					
Prerequisites
- Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch node requires its own storage volume. Note: If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: block in the LocalVolume object. Elasticsearch cannot use raw block volumes. Elasticsearch is a memory-intensive application. By default, OpenShift Container Platform installs three Elasticsearch nodes with memory requests and limits of 16 GB. This initial set of three OpenShift Container Platform nodes might not have enough memory to run Elasticsearch within your cluster. If you experience memory issues that are related to Elasticsearch, add more Elasticsearch nodes to your cluster rather than increasing the memory on existing nodes.
- You have administrator permissions.
- 
								You have installed the OpenShift CLI (oc).
Procedure
- Create a Namespace object as a YAML file (a sketch follows the callout descriptions below):
- You must specify the openshift-operators-redhat namespace. To prevent possible conflicts with metrics, configure the Prometheus Cluster Monitoring stack to scrape metrics from the openshift-operators-redhat namespace and not the openshift-operators namespace. The openshift-operators namespace might contain community Operators, which are untrusted and could publish a metric with the same name as an OpenShift Container Platform metric, which would cause conflicts.
- 2
- String. You must specify this label as shown to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace.
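A sketch of such a Namespace object follows; the numbered comments correspond to the callouts above, and the node-selector annotation is an assumption:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-operators-redhat            # 1
  annotations:
    openshift.io/node-selector: ""
  labels:
    openshift.io/cluster-monitoring: "true"   # 2
```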
 
- Apply the Namespace object by running the following command:
  $ oc apply -f <filename>.yaml
- Create an OperatorGroup object as a YAML file (a sketch follows the callout description below):
- You must specify the openshift-operators-redhat namespace.
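A minimal sketch of such an OperatorGroup object, with the namespace from callout 1:

```yaml
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-operators-redhat
  namespace: openshift-operators-redhat   # 1
spec: {}
```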
 
- Apply the OperatorGroup object by running the following command:
  $ oc apply -f <filename>.yaml
- Create a Subscription object to subscribe the namespace to the OpenShift Elasticsearch Operator (an example sketch follows the callout descriptions below):
- You must specify the openshift-operators-redhat namespace.
- 2
- Specify stable or stable-x.y as the channel. See the following note.
- 3
- Automatic allows the Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. Manual requires a user with appropriate credentials to approve the Operator update.
- 4
- Specify redhat-operators. If your OpenShift Container Platform cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object created when you configured the Operator Lifecycle Manager (OLM).
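A sketch of such a Subscription object follows; the numbered comments correspond to the callouts above, and the sourceNamespace value is an assumption for a connected cluster:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: elasticsearch-operator
  namespace: openshift-operators-redhat   # 1
spec:
  channel: stable-x.y                     # 2
  installPlanApproval: Automatic          # 3
  source: redhat-operators                # 4
  sourceNamespace: openshift-marketplace
  name: elasticsearch-operator
```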
Note: Specifying stable installs the current version of the latest stable release. Using stable with installPlanApproval: "Automatic" automatically upgrades your Operators to the latest stable major and minor release. Specifying stable-x.y installs the current minor version of a specific major release. Using stable-x.y with installPlanApproval: "Automatic" automatically upgrades your Operators to the latest stable minor release within the major release.
- Apply the subscription by running the following command:
  $ oc apply -f <filename>.yaml
  The OpenShift Elasticsearch Operator is installed to the openshift-operators-redhat namespace and copied to each project in the cluster.
Verification
- Run the following command:
  $ oc get csv --all-namespaces
- Observe the output and confirm that pods for the OpenShift Elasticsearch Operator exist in each namespace.
10.2.4. Configuring log storage
					You can configure which log storage type your logging uses by modifying the ClusterLogging custom resource (CR).
				
Prerequisites
- You have administrator permissions.
- 
							You have installed the OpenShift CLI (oc).
- You have installed the Red Hat OpenShift Logging Operator and an internal log store that is either the LokiStack or Elasticsearch.
- 
You have created a ClusterLogging CR.
The OpenShift Elasticsearch Operator is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator.
Procedure
- Modify the ClusterLogging CR logStore spec (an example sketch follows the callout descriptions below):
- Specify the log store type. This can be either lokistack or elasticsearch.
- 2
- Optional configuration options for the Elasticsearch log store.
- 3
- Specify the redundancy type. This value can be ZeroRedundancy, SingleRedundancy, MultipleRedundancy, or FullRedundancy.
- 4
- Optional configuration options for LokiStack.
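A sketch of the ClusterLogging CR logStore spec follows; the numbered comments correspond to the callouts above, and values in angle brackets are placeholders:

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: <log_store_type>                   # 1
    elasticsearch:                           # 2
      nodeCount: <integer>
      resources: {}
      storage: {}
      redundancyPolicy: <redundancy_type>    # 3
    lokistack:                               # 4
      name: <lokistack_name>
```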
- Example ClusterLogging CR to specify LokiStack as the log store:
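A minimal sketch, assuming a LokiStack named logging-loki:

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: lokistack
    lokistack:
      name: logging-loki
```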
- Apply the ClusterLogging CR by running the following command:
  $ oc apply -f <filename>.yaml
10.3. Configuring the LokiStack log store
In logging documentation, LokiStack refers to the logging supported combination of Loki and web proxy with OpenShift Container Platform authentication integration. LokiStack’s proxy uses OpenShift Container Platform authentication to enforce multi-tenancy. Loki refers to the log store as either the individual component or an external store.
10.3.1. Creating a new group for the cluster-admin user role
						Querying application logs for multiple namespaces as a cluster-admin user, where the sum total of characters of all of the namespaces in the cluster is greater than 5120, results in the error Parse error: input size too long (XXXX > 5120). For better control over access to logs in LokiStack, make the cluster-admin user a member of the cluster-admin group. If the cluster-admin group does not exist, create it and add the desired users to it.
					
					Use the following procedure to create a new group for users with cluster-admin permissions.
				
Procedure
- Enter the following command to create a new group:
  $ oc adm groups new cluster-admin
- Enter the following command to add the desired user to the cluster-admin group:
  $ oc adm groups add-users cluster-admin <username>
- Enter the following command to add the cluster-admin user role to the group:
  $ oc adm policy add-cluster-role-to-group cluster-admin cluster-admin
10.3.2. Enabling stream-based retention with Loki
With Logging version 5.6 and higher, you can configure retention policies based on log streams. Rules for these may be set globally, per tenant, or both. If you configure both, tenant rules apply before global rules.
- To enable stream-based retention, create a LokiStack custom resource (CR). An example of global stream-based retention follows the callout descriptions below.
- Sets retention policy for all log streams. Note: This field does not impact the retention period for stored logs in object storage.
- 2
- Retention is enabled in the cluster when this block is added to the CR.
- 3
- Contains the LogQL query used to define the log stream.
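A sketch of global stream-based retention follows; the numbered comments correspond to the callouts above, and the size, secret, storage class, and selectors are illustrative assumptions:

```yaml
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  limits:
    global:                                                   # 1
      retention:                                              # 2
        days: 20
        streams:
        - days: 4
          priority: 1
          selector: '{kubernetes_namespace_name=~"test.+"}'   # 3
        - days: 1
          priority: 1
          selector: '{log_type="infrastructure"}'
  managementState: Managed
  size: 1x.small
  storage:
    secret:
      name: logging-loki-s3
      type: s3
  storageClassName: <storage_class_name>
  tenants:
    mode: openshift-logging
```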
- Example per-tenant stream-based retention (a sketch follows the callout descriptions below):
- Sets retention policy by tenant. Valid tenant types are application, audit, and infrastructure.
- 2
- Contains the LogQL query used to define the log stream.
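A sketch showing only the per-tenant retention limits follows; the numbered comments correspond to the callouts above, the selectors and day values are illustrative, and the remaining LokiStack fields (size, storage, and so on) are omitted:

```yaml
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  limits:
    tenants:                                                      # 1
      application:
        retention:
          days: 1
          streams:
          - days: 4
            selector: '{kubernetes_namespace_name=~"test.+"}'     # 2
      infrastructure:
        retention:
          days: 5
          streams:
          - days: 1
            selector: '{kubernetes_namespace_name=~"openshift-cluster.+"}'
```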
 
- Apply the - LokiStackCR:- oc apply -f <filename>.yaml - $ oc apply -f <filename>.yaml- Copy to Clipboard Copied! - Toggle word wrap Toggle overflow 
This is not for managing the retention of stored logs. Global retention periods for stored logs, up to a supported maximum of 30 days, are configured with your object storage.
10.3.3. Troubleshooting Loki rate limit errors
					If the Log Forwarder API forwards a large block of messages that exceeds the rate limit to Loki, Loki generates rate limit (429) errors.
				
These errors can occur during normal operation. For example, when adding the logging to a cluster that already has some logs, rate limit errors might occur while the logging tries to ingest all of the existing log entries. In this case, if the rate of addition of new logs is less than the total rate limit, the historical data is eventually ingested, and the rate limit errors are resolved without requiring user intervention.
					In cases where the rate limit errors continue to occur, you can fix the issue by modifying the LokiStack custom resource (CR).
				
						The LokiStack CR is not available on Grafana-hosted Loki. This topic does not apply to Grafana-hosted Loki servers.
					
Conditions
- The Log Forwarder API is configured to forward logs to Loki.
- Your system sends a block of messages that is larger than 2 MB to Loki. For example:
  "values":[["1630410392689800468","{\"kind\":\"Event\",\"apiVersion\":\ \"received_at\":\"2021-08-31T11:46:32.800278+00:00\",\"version\":\"1.7.4 1.6.0\"}},\"@timestamp\":\"2021-08-31T11:46:32.799692+00:00\",\"viaq_index_name\":\"audit-write\",\"viaq_msg_id\":\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\",\"log_type\":\"audit\"}"]]}]}
- After you enter oc logs -n openshift-logging -l component=collector, the collector logs in your cluster show a line containing one of the following error messages:
  429 Too Many Requests Ingestion rate limit exceeded
  Example Vector error message:
  2023-08-25T16:08:49.301780Z WARN sink{component_kind="sink" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true
  Example Fluentd error message:
  2023-08-30 14:52:15 +0000 [warn]: [default_loki_infra] failed to flush the buffer. retry_times=2 next_retry_time=2023-08-30 14:52:19 +0000 chunk="604251225bf5378ed1567231a1c03b8b" error_class=Fluent::Plugin::LokiOutput::LogPostError error="429 Too Many Requests Ingestion rate limit exceeded for user infrastructure (limit: 4194304 bytes/sec) while attempting to ingest '4082' lines totaling '7820025' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased\n"
  The error is also visible on the receiving end. For example, in the LokiStack ingester pod:
  Example Loki ingester error message:
  level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err="rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream
Procedure
- Update the ingestionBurstSize and ingestionRate fields in the LokiStack CR (a sketch follows the callout descriptions below):
- The ingestionBurstSize field defines the maximum local rate-limited sample size per distributor replica in MB. This value is a hard limit. Set this value to at least the maximum logs size expected in a single push request. Single requests that are larger than the ingestionBurstSize value are not permitted.
- 2
- The ingestionRate field is a soft limit on the maximum amount of ingested samples per second in MB. Rate limit errors occur if the rate of logs exceeds the limit, but the collector retries sending the logs. As long as the total average is lower than the limit, the system recovers and errors are resolved without user intervention.
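A sketch showing only the relevant limits block follows; the numbered comments correspond to the callouts above, and the 16 MB burst size and 8 MB rate are illustrative values:

```yaml
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  limits:
    global:
      ingestion:
        ingestionBurstSize: 16   # 1
        ingestionRate: 8         # 2
```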
 
10.4. Configuring the Elasticsearch log store
You can use Elasticsearch 6 to store and organize log data.
You can make modifications to your log store, including:
- Storage for your Elasticsearch cluster
- Shard replication across data nodes in the cluster, from full replication to no replication
- External access to Elasticsearch data
10.4.1. Configuring log storage
					You can configure which log storage type your logging uses by modifying the ClusterLogging custom resource (CR).
				
Prerequisites
- You have administrator permissions.
- 
							You have installed the OpenShift CLI (oc).
- You have installed the Red Hat OpenShift Logging Operator and an internal log store that is either the LokiStack or Elasticsearch.
- 
You have created a ClusterLogging CR.
The OpenShift Elasticsearch Operator is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator.
Procedure
- Modify the ClusterLogging CR logStore spec (an example sketch follows the callout descriptions below):
- Specify the log store type. This can be either lokistack or elasticsearch.
- 2
- Optional configuration options for the Elasticsearch log store.
- 3
- Specify the redundancy type. This value can be ZeroRedundancy, SingleRedundancy, MultipleRedundancy, or FullRedundancy.
- 4
- Optional configuration options for LokiStack.
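A sketch of the ClusterLogging CR logStore spec follows; the numbered comments correspond to the callouts above, and values in angle brackets are placeholders:

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: <log_store_type>                   # 1
    elasticsearch:                           # 2
      nodeCount: <integer>
      resources: {}
      storage: {}
      redundancyPolicy: <redundancy_type>    # 3
    lokistack:                               # 4
      name: <lokistack_name>
```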
- Example ClusterLogging CR to specify LokiStack as the log store:
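A minimal sketch, assuming a LokiStack named logging-loki:

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: lokistack
    lokistack:
      name: logging-loki
```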
- Apply the ClusterLogging CR by running the following command:
  $ oc apply -f <filename>.yaml
10.4.2. Forwarding audit logs to the log store
By default, OpenShift Logging does not store audit logs in the internal OpenShift Container Platform Elasticsearch log store. You can send audit logs to this log store so, for example, you can view them in Kibana.
To send the audit logs to the default internal Elasticsearch log store, for example to view the audit logs in Kibana, you must use the Log Forwarding API.
The internal OpenShift Container Platform Elasticsearch log store does not provide secure storage for audit logs. Verify that the system to which you forward audit logs complies with your organizational and governmental regulations and is properly secured. Logging does not comply with those regulations.
Procedure
To use the Log Forwarding API to forward audit logs to the internal Elasticsearch instance:
- Create or edit a YAML file that defines the ClusterLogForwarder CR object: - Create a CR to send all log types to the internal Elasticsearch instance. You can use the following example without making any changes (a sketch follows the callout description below):
- A pipeline defines the type of logs to forward using the specified output. The default output forwards logs to the internal Elasticsearch instance.
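A sketch of such a ClusterLogForwarder CR follows; the numbered comment corresponds to the callout above, and the pipeline name is illustrative:

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  pipelines:                # 1
  - name: all-to-default
    inputRefs:
    - infrastructure
    - application
    - audit
    outputRefs:
    - default
```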
 Note- You must specify all three types of logs in the pipeline: application, infrastructure, and audit. If you do not specify a log type, those logs are not stored and will be lost. 
- If you have an existing ClusterLogForwarder CR, add a pipeline to the default output for the audit logs. You do not need to define the default output. For example (a sketch follows the callout description below):
- This pipeline sends the audit logs to the internal Elasticsearch instance in addition to an external instance.
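A sketch follows; the external Elasticsearch output name and URL are illustrative assumptions, and the numbered comment corresponds to the callout above:

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: elasticsearch-insecure
    type: elasticsearch
    url: http://elasticsearch-insecure.messaging.svc.cluster.local
  pipelines:
  - name: application-audit-logs
    inputRefs:
    - application
    - audit
    outputRefs:
    - elasticsearch-insecure
    - default                  # 1
```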
 
 
10.4.3. Configuring log retention time
You can configure a retention policy that specifies how long the default Elasticsearch log store keeps indices for each of the three log sources: infrastructure logs, application logs, and audit logs.
					To configure the retention policy, you set a maxAge parameter for each log source in the ClusterLogging custom resource (CR). The CR applies these values to the Elasticsearch rollover schedule, which determines when Elasticsearch deletes the rolled-over indices.
				
Elasticsearch rolls over an index, moving the current index and creating a new index, when an index matches any of the following conditions:
- 
The index is older than the rollover.maxAge value in the Elasticsearch CR.
- The index size is greater than 40 GB × the number of primary shards.
- The index doc count is greater than 40960 KB × the number of primary shards.
Elasticsearch deletes the rolled-over indices based on the retention policy you configure. If you do not create a retention policy for any log sources, logs are deleted after seven days by default.
Prerequisites
- The Red Hat OpenShift Logging Operator and the OpenShift Elasticsearch Operator must be installed.
Procedure
To configure the log retention time:
- Edit the ClusterLogging CR to add or modify the retentionPolicy parameter (a sketch follows the callout description below):
- Specify the time that Elasticsearch should retain each log source. Enter an integer and a time designation: weeks(w), hours(h/H), minutes(m) and seconds(s). For example, 1d for one day. Logs older than the maxAge are deleted. By default, logs are retained for seven days.
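A sketch of the retentionPolicy stanza in the ClusterLogging CR follows; the maxAge values and node count are illustrative:

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: elasticsearch
    retentionPolicy:        # 1
      application:
        maxAge: 1d
      infra:
        maxAge: 7d
      audit:
        maxAge: 7d
    elasticsearch:
      nodeCount: 3
```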
 
- You can verify the settings in the Elasticsearch custom resource (CR). - For example, the Red Hat OpenShift Logging Operator updated the following Elasticsearch CR to configure a retention policy that includes settings to roll over active indices for the infrastructure logs every eight hours, and the rolled-over indices are deleted seven days after rollover. OpenShift Container Platform checks every 15 minutes to determine if the indices need to be rolled over. (A sketch follows the callout descriptions below.)
- For each log source, the retention policy indicates when to delete and roll over logs for that source.
- 2
- When OpenShift Container Platform deletes the rolled-over indices. This setting is the maxAge you set in the ClusterLogging CR.
- 3
- The index age for OpenShift Container Platform to consider when rolling over the indices. This value is determined from the maxAge you set in the ClusterLogging CR.
- 4
- When OpenShift Container Platform checks if the indices should be rolled over. This setting is the default and cannot be changed.
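A sketch of the relevant indexManagement section of such an Elasticsearch CR follows; the numbered comments correspond to the callouts above, and the policy name is illustrative:

```yaml
apiVersion: logging.openshift.io/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
spec:
  indexManagement:
    policies:               # 1
    - name: infra-policy
      phases:
        delete:
          minAge: 7d        # 2
        hot:
          actions:
            rollover:
              maxAge: 8h    # 3
      pollInterval: 15m     # 4
```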
Note: Modifying the Elasticsearch CR is not supported. All changes to the retention policies must be made in the ClusterLogging CR. The OpenShift Elasticsearch Operator deploys a cron job to roll over indices for each mapping using the defined policy, scheduled using the pollInterval.
  $ oc get cronjob
  Example output:
  NAME                     SCHEDULE       SUSPEND   ACTIVE   LAST SCHEDULE   AGE
  elasticsearch-im-app     */15 * * * *   False     0        <none>          4s
  elasticsearch-im-audit   */15 * * * *   False     0        <none>          4s
  elasticsearch-im-infra   */15 * * * *   False     0        <none>          4s
10.4.4. Configuring CPU and memory requests for the log store
Each component specification allows for adjustments to both the CPU and memory requests. You should not have to manually adjust these values as the OpenShift Elasticsearch Operator sets values sufficient for your environment.
In large-scale clusters, the default memory limit for the Elasticsearch proxy container might not be sufficient, causing the proxy container to be OOMKilled. If you experience this issue, increase the memory requests and limits for the Elasticsearch proxy.
Each Elasticsearch node can operate with a lower memory setting though this is not recommended for production deployments. For production use, you should have no less than the default 16Gi allocated to each pod. Preferably you should allocate as much as possible, up to 64Gi per pod.
Prerequisites
- The Red Hat OpenShift Logging and Elasticsearch Operators must be installed.
Procedure
- Edit the ClusterLogging custom resource (CR) in the openshift-logging project (a sketch of the relevant spec follows the callout descriptions below):
  $ oc edit ClusterLogging instance
- Specify the CPU and memory requests for Elasticsearch as needed. If you leave these values blank, the OpenShift Elasticsearch Operator sets default values that should be sufficient for most deployments. The default values are 16Gi for the memory request and 1 for the CPU request.
- 2
- The maximum amount of resources a pod can use.
- 3
- The minimum resources required to schedule a pod.
- 4
- Specify the CPU and memory requests for the Elasticsearch proxy as needed. If you leave these values blank, the OpenShift Elasticsearch Operator sets default values that are sufficient for most deployments. The default values are 256Mi for the memory request and 100m for the CPU request.
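A sketch of the relevant resource settings follows; the numbered comments correspond to the callouts above, and the memory and CPU figures are illustrative:

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  logStore:
    type: elasticsearch
    elasticsearch:           # 1
      resources:
        limits:              # 2
          memory: "32Gi"
        requests:            # 3
          cpu: "1"
          memory: "16Gi"
      proxy:                 # 4
        resources:
          limits:
            memory: 256Mi
          requests:
            memory: 256Mi
```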
 
					When adjusting the amount of Elasticsearch memory, the same value should be used for both requests and limits.
				
For example:
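A minimal sketch of a resources stanza where the memory request equals the memory limit, assuming a 32Gi allocation:

```yaml
resources:
  limits:
    memory: "32Gi"
  requests:
    cpu: "8"
    memory: "32Gi"
```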
Kubernetes generally adheres to the node configuration and does not allow Elasticsearch to use the specified limits. Setting the same value for the requests and limits ensures that Elasticsearch can use the memory you want, assuming the node has the memory available.
				
10.4.5. Configuring replication policy for the log store
You can define how Elasticsearch shards are replicated across data nodes in the cluster.
Prerequisites
- The Red Hat OpenShift Logging and Elasticsearch Operators must be installed.
Procedure
- Edit the ClusterLogging custom resource (CR) in the openshift-logging project (a sketch follows the callout descriptions below):
  $ oc edit clusterlogging instance
- Specify a redundancy policy for the shards. The change is applied upon saving the changes.- FullRedundancy. Elasticsearch fully replicates the primary shards for each index to every data node. This provides the highest safety, but at the cost of the highest amount of disk required and the poorest performance.
- MultipleRedundancy. Elasticsearch fully replicates the primary shards for each index to half of the data nodes. This provides a good tradeoff between safety and performance.
- SingleRedundancy. Elasticsearch makes one copy of the primary shards for each index. Logs are always available and recoverable as long as at least two data nodes exist. Better performance than MultipleRedundancy when using 5 or more nodes. You cannot apply this policy on deployments of a single Elasticsearch node.
- ZeroRedundancy. Elasticsearch does not make copies of the primary shards. Logs might be unavailable or lost in the event a node is down or fails. Use this mode when you are more concerned with performance than safety, or have implemented your own disk/PVC backup/restore strategy.
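A sketch of the relevant setting in the ClusterLogging CR follows; the numbered comment corresponds to callout 1 above, and SingleRedundancy is shown only as an illustrative choice:

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  logStore:
    type: elasticsearch
    elasticsearch:
      redundancyPolicy: SingleRedundancy   # 1
```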
 
 
The number of primary shards for the index templates is equal to the number of Elasticsearch data nodes.
10.4.6. Scaling down Elasticsearch pods
Reducing the number of Elasticsearch pods in your cluster can result in data loss or Elasticsearch performance degradation.
					If you scale down, you should scale down by one pod at a time and allow the cluster to re-balance the shards and replicas. After the Elasticsearch health status returns to green, you can scale down by another pod.
				
						If your Elasticsearch cluster is set to ZeroRedundancy, you should not scale down your Elasticsearch pods.
					
10.4.7. Configuring persistent storage for the log store
Elasticsearch requires persistent storage. The faster the storage, the faster the Elasticsearch performance.
Using NFS storage as a volume or a persistent volume (or via NAS such as Gluster) is not supported for Elasticsearch storage, as Lucene relies on file system behavior that NFS does not supply. Data corruption and other problems can occur.
Prerequisites
- The Red Hat OpenShift Logging and Elasticsearch Operators must be installed.
Procedure
- Edit the ClusterLogging CR to specify that each data node in the cluster is bound to a Persistent Volume Claim.
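A sketch matching the description that follows (a three-node cluster requesting 200G of gp2 storage per node; the node count is illustrative):

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  logStore:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3
      storage:
        storageClassName: "gp2"
        size: "200G"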
This example specifies each data node in the cluster is bound to a Persistent Volume Claim that requests "200G" of AWS General Purpose SSD (gp2) storage.
						If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: block in the LocalVolume object. Elasticsearch cannot use raw block volumes.
					
10.4.8. Configuring the log store for emptyDir storage
You can use emptyDir with your log store, which creates an ephemeral deployment in which all of a pod’s data is lost upon restart.
When using emptyDir, if log storage is restarted or redeployed, you will lose data.
Prerequisites
- The Red Hat OpenShift Logging and Elasticsearch Operators must be installed.
Procedure
- Edit the ClusterLogging CR to specify emptyDir:
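A sketch follows; leaving the storage stanza empty causes the log store to use emptyDir, and the node count shown is illustrative:

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  logStore:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3
      storage: {}
```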
10.4.9. Performing an Elasticsearch rolling cluster restart
					Perform a rolling restart when you change the elasticsearch config map or any of the elasticsearch-* deployment configurations.
				
Also, a rolling restart is recommended if the nodes on which an Elasticsearch pod runs require a reboot.
Prerequisites
- The Red Hat OpenShift Logging and Elasticsearch Operators must be installed.
Procedure
To perform a rolling cluster restart:
- Change to the openshift-logging project:
  $ oc project openshift-logging
- Get the names of the Elasticsearch pods:
  $ oc get pods -l component=elasticsearch
- Scale down the collector pods so they stop sending new logs to Elasticsearch:
  $ oc -n openshift-logging patch daemonset/collector -p '{"spec":{"template":{"spec":{"nodeSelector":{"logging-infra-collector": "false"}}}}}'
- Perform a shard synced flush using the OpenShift Container Platform es_util tool to ensure there are no pending operations waiting to be written to disk prior to shutting down:
  $ oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query="_flush/synced" -XPOST
  For example:
  $ oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query="_flush/synced" -XPOST
  Example output:
  {"_shards":{"total":4,"successful":4,"failed":0},".security":{"total":2,"successful":2,"failed":0},".kibana_1":{"total":2,"successful":2,"failed":0}}
- Prevent shard balancing when purposely bringing down nodes using the OpenShift Container Platform es_util tool:
  $ oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query="_cluster/settings" -XPUT -d '{ "persistent": { "cluster.routing.allocation.enable" : "primaries" } }'
  For example:
  $ oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query="_cluster/settings" -XPUT -d '{ "persistent": { "cluster.routing.allocation.enable" : "primaries" } }'
  Example output:
  {"acknowledged":true,"persistent":{"cluster":{"routing":{"allocation":{"enable":"primaries"}}}},"transient":
- After the command is complete, for each deployment you have for an ES cluster:

  By default, the OpenShift Container Platform Elasticsearch cluster blocks rollouts to its nodes. Use the following command to allow rollouts and allow the pod to pick up the changes:

  $ oc rollout resume deployment/<deployment-name>

  For example:

  $ oc rollout resume deployment/elasticsearch-cdm-0-1

  Example output

  deployment.extensions/elasticsearch-cdm-0-1 resumed

  A new pod is deployed. After the pod has a ready container, you can move on to the next deployment.

  $ oc get pods -l component=elasticsearch

  Example output

  NAME                                            READY   STATUS    RESTARTS   AGE
  elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6k    2/2     Running   0          22h
  elasticsearch-cdm-5ceex6ts-2-f799564cb-l9mj7    2/2     Running   0          22h
  elasticsearch-cdm-5ceex6ts-3-585968dc68-k7kjr   2/2     Running   0          22h
- After the deployments are complete, reset the pod to disallow rollouts:

  $ oc rollout pause deployment/<deployment-name>

  For example:

  $ oc rollout pause deployment/elasticsearch-cdm-0-1

  Example output

  deployment.extensions/elasticsearch-cdm-0-1 paused
- Check that the Elasticsearch cluster is in a green or yellow state:

  $ oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=_cluster/health?pretty=true

  Note: If you performed a rollout on the Elasticsearch pod you used in the previous commands, the pod no longer exists and you need a new pod name here.

  For example:

  $ oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=_cluster/health?pretty=true

  Make sure the status value in the response is green or yellow before proceeding.
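  The example output for the health query is not preserved in this copy of the document. A pretty-printed _cluster/health response has roughly the following shape, with the status field being the value to check; the numbers shown below are placeholders rather than real output:

  {
    "cluster_name" : "elasticsearch",
    "status" : "green",
    "timed_out" : false,
    "number_of_nodes" : 3,
    "number_of_data_nodes" : 3,
    "active_primary_shards" : 8,
    "active_shards" : 16,
    "relocating_shards" : 0,
    "initializing_shards" : 0,
    "unassigned_shards" : 0
  }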
 
 
- If you changed the Elasticsearch configuration map, repeat these steps for each Elasticsearch pod.
- After all the deployments for the cluster have been rolled out, re-enable shard balancing:

  $ oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query="_cluster/settings" -XPUT -d '{ "persistent": { "cluster.routing.allocation.enable" : "all" } }'

  For example:

  $ oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query="_cluster/settings" -XPUT -d '{ "persistent": { "cluster.routing.allocation.enable" : "all" } }'
- Scale up the collector pods so they send new logs to Elasticsearch:

  $ oc -n openshift-logging patch daemonset/collector -p '{"spec":{"template":{"spec":{"nodeSelector":{"logging-infra-collector": "true"}}}}}'
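  If you want to confirm that the collector pods are running again before relying on log collection, a check similar to the following can be used. The component=collector label matches the selector used later in this chapter and is assumed to apply to your collector daemon set:

  $ oc get pods -l component=collector -n openshift-logging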
10.4.10. Exposing the log store service as a route
By default, the log store that is deployed with logging is not accessible from outside the logging cluster. You can enable a route with re-encryption termination for external access to the log store service for those tools that access its data.
Externally, you can access the log store by creating a reencrypt route and using your OpenShift Container Platform token and the installed log store CA certificate. Then, access the log store service with a cURL request that contains:
- The Authorization: Bearer ${token} header
- The Elasticsearch reencrypt route and an Elasticsearch API request.
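For example, a request of roughly the following shape, where the route host comes from the reencrypt route that you create later in this procedure and the _cat/health API path is only illustrative:

$ curl -tlsv1.2 --insecure -H "Authorization: Bearer ${token}" "https://<elasticsearch_route_host>/_cat/health"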
Internally, you can access the log store service by using the log store cluster IP, which you can get by using either of the following commands:

$ oc get service elasticsearch -o jsonpath={.spec.clusterIP} -n openshift-logging

Example output

172.30.183.229

$ oc get service elasticsearch -n openshift-logging

Example output

NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
elasticsearch   ClusterIP   172.30.183.229   <none>        9200/TCP   22h

You can check the cluster IP address with a command similar to the following:

$ oc exec elasticsearch-cdm-oplnhinv-1-5746475887-fj2f8 -n openshift-logging -- curl -tlsv1.2 --insecure -H "Authorization: Bearer ${token}" "https://172.30.183.229:9200/_cat/health"

Example output

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    29  100    29    0     0    108      0 --:--:-- --:--:-- --:--:--   108

Prerequisites
- The Red Hat OpenShift Logging and Elasticsearch Operators must be installed.
- You must have access to the project to be able to access the logs.
Procedure
To expose the log store externally:
- Change to the openshift-logging project:

  $ oc project openshift-logging
- Extract the CA certificate from the log store and write it to the admin-ca file:

  $ oc extract secret/elasticsearch --to=. --keys=admin-ca

  Example output

  admin-ca
- Create the route for the log store service as a YAML file:

  Create a YAML file for the route. A sketch of such a file is shown after this step.

  1: Add the log store CA certificate or use the command in the next step. You do not have to set the spec.tls.key, spec.tls.certificate, and spec.tls.caCertificate parameters required by some reencrypt routes.
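  The route definition itself is not preserved in this copy of the document. A minimal sketch of such a reencrypt route, assuming the service is named elasticsearch in the openshift-logging namespace, looks similar to the following; the destinationCACertificate field is where the CA certificate from callout 1 is added:

  apiVersion: route.openshift.io/v1
  kind: Route
  metadata:
    name: elasticsearch
    namespace: openshift-logging
  spec:
    to:
      kind: Service
      name: elasticsearch
    tls:
      termination: reencrypt
      destinationCACertificate: |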
 
- Run the following command to add the log store CA certificate to the route YAML you created in the previous step:

  $ cat ./admin-ca | sed -e "s/^/ /" >> <file-name>.yaml
- Create the route:

  $ oc create -f <file-name>.yaml

  Example output

  route.route.openshift.io/elasticsearch created
 
- Check that the Elasticsearch service is exposed:

  Get the token of this service account to be used in the request:

  $ token=$(oc whoami -t)
- Set the elasticsearch route you created as an environment variable:

  $ routeES=`oc get route elasticsearch -o jsonpath={.spec.host}`
- To verify the route was successfully created, run the following command that accesses Elasticsearch through the exposed route:

  $ curl -tlsv1.2 --insecure -H "Authorization: Bearer ${token}" "https://${routeES}"
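  The example response is not preserved in this copy of the document. Querying the Elasticsearch root endpoint normally returns the cluster information JSON, roughly of the following shape; the node name, cluster UUID, and version values shown here are placeholders, not real output:

  {
    "name" : "<elasticsearch_node_name>",
    "cluster_name" : "elasticsearch",
    "cluster_uuid" : "<cluster_uuid>",
    "version" : {
      "number" : "<elasticsearch_version>"
    },
    "tagline" : "You Know, for Search"
  }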
 
10.4.11. Removing unused components if you do not use the default Elasticsearch log store
As an administrator, in the rare case that you forward logs to a third-party log store and do not use the default Elasticsearch log store, you can remove several unused components from your logging cluster.
					In other words, if you do not use the default Elasticsearch log store, you can remove the internal Elasticsearch logStore and Kibana visualization components from the ClusterLogging custom resource (CR). Removing these components is optional but saves resources.
				
Prerequisites
- Verify that your log forwarder does not send log data to the default internal Elasticsearch cluster. Inspect the ClusterLogForwarder CR YAML file that you used to configure log forwarding. Verify that it does not have an outputRefs element that specifies default. For example:

  outputRefs:
  - default
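  For context, a forwarder that sends logs only to an external store, and therefore has no default entry in outputRefs, might look roughly like the following sketch; the output name, type, and URL are illustrative placeholders, not values from this document:

  apiVersion: logging.openshift.io/v1
  kind: ClusterLogForwarder
  metadata:
    name: instance
    namespace: openshift-logging
  spec:
    outputs:
    - name: external-store
      type: elasticsearch
      url: https://external-elasticsearch.example.com:9200
    pipelines:
    - name: forward-to-external
      inputRefs:
      - application
      - infrastructure
      outputRefs:
      - external-store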
						Suppose the ClusterLogForwarder CR forwards log data to the internal Elasticsearch cluster, and you remove the logStore component from the ClusterLogging CR. In that case, the internal Elasticsearch cluster will not be present to store the log data. This absence can cause data loss.
					
Procedure
- Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

  $ oc edit ClusterLogging instance
- If they are present, remove the logStore and visualization stanzas from the ClusterLogging CR.
- Preserve the collection stanza of the ClusterLogging CR. The result should look similar to the following example.
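  The original example is not preserved in this copy of the document. A minimal sketch of a ClusterLogging CR that keeps only the collection stanza, assuming a Fluentd-based collector, looks similar to the following; the exact collection fields depend on your logging version and collector type:

  apiVersion: logging.openshift.io/v1
  kind: ClusterLogging
  metadata:
    name: instance
    namespace: openshift-logging
  spec:
    managementState: Managed
    collection:
      type: fluentd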
- Verify that the collector pods are redeployed:

  $ oc get pods -l component=collector -n openshift-logging