3.3. Configuring the log store
OpenShift Container Platform uses Elasticsearch 6 (ES) to store and organize the log data.
You can make modifications to your log store, including:
- storage for your Elasticsearch cluster
- shard replication across data nodes in the cluster, from full replication to no replication
- external access to Elasticsearch data
Elasticsearch is a memory-intensive application. Each Elasticsearch node needs 16G of memory for both memory requests and limits, unless you specify otherwise in the ClusterLogging custom resource. The initial set of OpenShift Container Platform nodes might not be large enough to support the Elasticsearch cluster. You must add additional nodes to the OpenShift Container Platform cluster to run with the recommended or higher memory.
Each Elasticsearch node can operate with a lower memory setting, though this is not recommended for production environments.
3.3.1. Forward audit logs to the log store
Because the internal OpenShift Container Platform Elasticsearch log store does not provide secure storage for audit logs, by default audit logs are not stored in the internal Elasticsearch instance.
If you want to send the audit logs to the internal log store, for example to view the audit logs in Kibana, you must use the Log Forwarding API. The Log Forwarding API is currently a Technology Preview feature.
The internal OpenShift Container Platform Elasticsearch log store does not provide secure storage for audit logs. We recommend you ensure that the system to which you forward audit logs is compliant with your organizational and governmental regulations and is properly secured. OpenShift Container Platform cluster logging does not comply with those regulations.
Procedure
To use the Log Forwarding API to forward audit logs to the internal Elasticsearch instance:
If the Log Forwarding API is not enabled:
Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

$ oc edit ClusterLogging instance

Add the clusterlogging.openshift.io/logforwardingtechpreview annotation and set it to enabled:
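A minimal sketch of the annotated CR, assuming the default instance name in the openshift-logging project; only the relevant metadata fields are shown:

apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  annotations:
    clusterlogging.openshift.io/logforwardingtechpreview: "enabled"   # enables the Tech Preview Log Forwarding API
  name: "instance"
  namespace: "openshift-logging"
spec:
  ...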
Create a LogForwarding CR YAML file or edit your existing CR:

Note: You must configure a pipeline and output for all three types of logs: application, infrastructure, and audit. If you do not specify a pipeline and output for a log type, those logs are not stored and will be lost.

Create a CR to send all log types to the internal Elasticsearch instance. You can use the following example without making any changes:
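A sketch of such a CR, assuming the Technology Preview v1alpha1 LogForwarding API, the default internal Elasticsearch endpoint, and the fluentd secret:

apiVersion: logging.openshift.io/v1alpha1
kind: LogForwarding
metadata:
  name: instance
  namespace: openshift-logging
spec:
  disableDefaultForwarding: true
  outputs:
    - name: clo-es                                           # the internal Elasticsearch instance
      type: elasticsearch
      endpoint: 'elasticsearch.openshift-logging.svc:9200'
      secret:
        name: fluentd
  pipelines:
    - name: app-pipeline                                     # application logs
      inputSource: logs.app
      outputRefs:
        - clo-es
    - name: infra-pipeline                                   # infrastructure logs
      inputSource: logs.infra
      outputRefs:
        - clo-es
    - name: audit-pipeline                                   # audit logs
      inputSource: logs.audit
      outputRefs:
        - clo-es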
If you have an existing LogForwarding CR, add an output for the internal Elasticsearch instance and a pipeline to that output for the audit logs. For example:
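A sketch of the stanzas to add, using the same assumed output name and endpoint as the previous example:

  outputs:
    - name: clo-es                                           # output for the internal Elasticsearch instance
      type: elasticsearch
      endpoint: 'elasticsearch.openshift-logging.svc:9200'
      secret:
        name: fluentd
  pipelines:
    - name: audit-pipeline                                   # pipeline that sends audit logs to that output
      inputSource: logs.audit
      outputRefs:
        - clo-es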
Additional resources
For more information on the Log Forwarding API, see Forwarding logs using the Log Forwarding API.
3.3.2. Configuring log retention time
You can specify how long the default Elasticsearch log store keeps indices using a separate retention policy for each of the three log sources: infrastructure logs, application logs, and audit logs. The retention policy, which you configure using the maxAge parameter in the Cluster Logging Custom Resource (CR), is considered for the Elasticsearch roll over schedule and determines when Elasticsearch deletes the rolled-over indices.
Elasticsearch rolls over an index, moving the current index and creating a new index, when an index matches any of the following conditions:
- The index is older than the rollover.maxAge value in the Elasticsearch CR.
- The index size is greater than 40 GB × the number of primary shards.
- The index doc count is greater than 40960 KB × the number of primary shards.
Elasticsearch deletes the rolled-over indices based on the retention policy you configure.
If you do not create a retention policy for any of the log sources, logs are deleted after seven days by default.
If you do not specify a retention policy for all three log sources, only logs from the sources with a retention policy are stored. For example, if you set a retention policy for the infrastructure and application logs, but do not set a retention policy for audit logs, the audit logs are not retained and there is no audit index in Elasticsearch or Kibana.
Prerequisites
- Cluster logging and Elasticsearch must be installed.
Procedure
To configure the log retention time:
Edit the ClusterLogging CR to add or modify the retentionPolicy parameter:
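A sketch of the relevant stanza, assuming the default instance CR; the retentionPolicy block, marked with the numbered comment, is the part to add or modify:

apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    retentionPolicy:       # 1
      application:
        maxAge: 1d
      infra:
        maxAge: 7d
      audit:
        maxAge: 7d
    elasticsearch:
      nodeCount: 3
      ...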
1. Specify the time that Elasticsearch should retain each log source. Enter an integer and a time designation: weeks(w), hours(h/H), minutes(m) and seconds(s). For example, 1d for one day. Logs older than the maxAge are deleted. By default, logs are retained for seven days.
You can verify the settings in the Elasticsearch custom resource (CR).

For example, the Cluster Logging Operator updated the following Elasticsearch CR to configure a retention policy that includes settings to roll over active indices for the infrastructure logs every eight hours, and the rolled-over indices are deleted seven days after rollover. OpenShift Container Platform checks every 15 minutes to determine if the indices need to be rolled over.
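A sketch of the resulting Elasticsearch CR, showing only the index management settings for the infrastructure logs; the numbered comments correspond to the callouts below:

apiVersion: "logging.openshift.io/v1"
kind: "Elasticsearch"
metadata:
  name: "elasticsearch"
  namespace: "openshift-logging"
spec:
  ...
  indexManagement:
    policies:                # 1
      - name: infra-policy
        phases:
          delete:
            minAge: 7d       # 2
          hot:
            actions:
              rollover:
                maxAge: 8h   # 3
        pollInterval: 15m    # 4
  ...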
1. For each log source, the retention policy indicates when to delete and roll over logs for that source.
2. When OpenShift Container Platform deletes the rolled-over indices. This setting is the maxAge you set in the ClusterLogging CR.
3. The index age for OpenShift Container Platform to consider when rolling over the indices. This value is determined from the maxAge you set in the ClusterLogging CR.
4. When OpenShift Container Platform checks if the indices should be rolled over. This setting is the default and cannot be changed.
Note: Modifying the Elasticsearch CR is not supported. All changes to the retention policies must be made in the ClusterLogging CR.

The Elasticsearch Operator deploys a cron job to roll over indices for each mapping using the defined policy, scheduled using the pollInterval.

$ oc get cronjobs

Example output
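Output similar to the following; the cron job names, schedules, and ages shown here are illustrative:

NAME                     SCHEDULE       SUSPEND   ACTIVE   LAST SCHEDULE   AGE
curator                  */10 * * * *   False     0        <none>          5s
elasticsearch-im-app     */15 * * * *   False     0        <none>          56s
elasticsearch-im-audit   */15 * * * *   False     0        <none>          56s
elasticsearch-im-infra   */15 * * * *   False     0        <none>          56s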
3.3.3. Configuring CPU and memory requests for the log store

Each component specification allows for adjustments to both the CPU and memory requests. You should not have to manually adjust these values as the Elasticsearch Operator sets values sufficient for your environment.
In large-scale clusters, the default memory limit for the Elasticsearch proxy container might not be sufficient, causing the proxy container to be OOMKilled. If you experience this issue, increase the memory requests and limits for the Elasticsearch proxy.
Each Elasticsearch node can operate with a lower memory setting, though this is not recommended for production deployments. For production use, you should have no less than the default 16Gi allocated to each pod. Preferably you should allocate as much as possible, up to 64Gi per pod.
Prerequisites
- Cluster logging and Elasticsearch must be installed.
Procedure
Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

$ oc edit ClusterLogging instance
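A sketch of the logStore stanza with explicit resource settings, assuming the default instance CR; the numbered comments correspond to the callouts below:

apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  logStore:
    type: "elasticsearch"
    elasticsearch:
      resources:           # 1
        limits:
          memory: "16Gi"
        requests:
          cpu: "1"
          memory: "16Gi"
      proxy:               # 2
        resources:
          limits:
            memory: 256Mi
          requests:
            memory: 256Mi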
1. Specify the CPU and memory requests for Elasticsearch as needed. If you leave these values blank, the Elasticsearch Operator sets default values that should be sufficient for most deployments. The default values are 16Gi for the memory request and 1 for the CPU request.
2. Specify the CPU and memory requests for the Elasticsearch proxy as needed. If you leave these values blank, the Elasticsearch Operator sets default values that should be sufficient for most deployments. The default values are 256Mi for the memory request and 100m for the CPU request.
If you adjust the amount of Elasticsearch memory, you must change both the request value and the limit value.
For example:
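A sketch of matched request and limit values, using illustrative sizes:

      resources:
        limits:
          memory: "32Gi"     # limit and request are set to the same value
        requests:
          cpu: "8"
          memory: "32Gi"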
Kubernetes generally adheres to the node configuration and does not allow Elasticsearch to use the specified limits. Setting the same value for the requests and limits ensures that Elasticsearch can use the CPU and memory you want, assuming the node has the CPU and memory available.
3.3.4. Configuring replication policy for the log store
You can define how Elasticsearch shards are replicated across data nodes in the cluster.
Prerequisites
- Cluster logging and Elasticsearch must be installed.
Procedure
Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

$ oc edit clusterlogging instance
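A sketch of the stanza that sets the policy, assuming the default instance CR; the redundancyPolicy value, marked with the numbered comment, is the field to change:

apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  logStore:
    type: "elasticsearch"
    elasticsearch:
      redundancyPolicy: "SingleRedundancy"   # 1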
1. Specify a redundancy policy for the shards. The change is applied upon saving the changes.
- FullRedundancy. Elasticsearch fully replicates the primary shards for each index to every data node. This provides the highest safety, but at the cost of the highest amount of disk required and the poorest performance.
- MultipleRedundancy. Elasticsearch fully replicates the primary shards for each index to half of the data nodes. This provides a good tradeoff between safety and performance.
- SingleRedundancy. Elasticsearch makes one copy of the primary shards for each index. Logs are always available and recoverable as long as at least two data nodes exist. Better performance than MultipleRedundancy when using 5 or more nodes. You cannot apply this policy to deployments with a single Elasticsearch node.
- ZeroRedundancy. Elasticsearch does not make copies of the primary shards. Logs might be unavailable or lost in the event a node is down or fails. Use this mode when you are more concerned with performance than safety, or have implemented your own disk/PVC backup/restore strategy.
The number of primary shards for the index templates is equal to the number of Elasticsearch data nodes.
3.3.5. Scaling down Elasticsearch pods
Reducing the number of Elasticsearch pods in your cluster can result in data loss or Elasticsearch performance degradation.
If you scale down, you should scale down by one pod at a time and allow the cluster to re-balance the shards and replicas. After the Elasticsearch health status returns to green, you can scale down by another pod.
If your Elasticsearch cluster is set to ZeroRedundancy, you should not scale down your Elasticsearch pods.
3.3.6. Configuring persistent storage for the log store
Elasticsearch requires persistent storage. The faster the storage, the faster the Elasticsearch performance.
Using NFS storage as a volume or a persistent volume (or via NAS such as Gluster) is not supported for Elasticsearch storage, as Lucene relies on file system behavior that NFS does not supply. Data corruption and other problems can occur.
Prerequisites
- Cluster logging and Elasticsearch must be installed.
Procedure
Edit the ClusterLogging CR to specify that each data node in the cluster is bound to a Persistent Volume Claim.
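A sketch that requests gp2-backed storage for each data node, matching the description below:

apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  logStore:
    type: "elasticsearch"
    elasticsearch:
      nodeCount: 3
      storage:
        storageClassName: "gp2"   # AWS General Purpose SSD storage class
        size: "200G"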
This example specifies each data node in the cluster is bound to a Persistent Volume Claim that requests "200G" of AWS General Purpose SSD (gp2) storage.
If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: block in the LocalVolume object. Elasticsearch cannot use raw block volumes.
3.3.7. Configuring the log store for emptyDir storage
You can use emptyDir with your log store, which creates an ephemeral deployment in which all of a pod’s data is lost upon restart.
When using emptyDir, if log storage is restarted or redeployed, you will lose data.
Prerequisites
- Cluster logging and Elasticsearch must be installed.
Procedure
Edit the ClusterLogging CR to specify emptyDir:
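A sketch that uses an empty storage stanza, which results in ephemeral emptyDir storage:

spec:
  logStore:
    type: "elasticsearch"
    elasticsearch:
      nodeCount: 3
      storage: {}   # empty storage stanza results in emptyDir storage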
3.3.8. Performing an Elasticsearch rolling cluster restart

Perform a rolling restart when you change the elasticsearch config map or any of the elasticsearch-* deployment configurations.

Also, a rolling restart is recommended if the nodes on which an Elasticsearch pod runs require a reboot.
Prerequisites
- Cluster logging and Elasticsearch must be installed.
- Install the OpenShift Container Platform es_util tool
Procedure
To perform a rolling cluster restart:
Change to the openshift-logging project:

$ oc project openshift-logging

Get the names of the Elasticsearch pods:

$ oc get pods | grep elasticsearch-

Perform a shard synced flush using the OpenShift Container Platform es_util tool to ensure there are no pending operations waiting to be written to disk prior to shutting down:

$ oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query="_flush/synced" -XPOST

For example:

$ oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query="_flush/synced" -XPOST

Example output

{"_shards":{"total":4,"successful":4,"failed":0},".security":{"total":2,"successful":2,"failed":0},".kibana_1":{"total":2,"successful":2,"failed":0}}

Prevent shard balancing when purposely bringing down nodes using the OpenShift Container Platform es_util tool:

$ oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query="_cluster/settings" -XPUT -d '{ "persistent": { "cluster.routing.allocation.enable" : "primaries" } }'

For example:

$ oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query="_cluster/settings" -XPUT -d '{ "persistent": { "cluster.routing.allocation.enable" : "primaries" } }'

Example output

{"acknowledged":true,"persistent":{"cluster":{"routing":{"allocation":{"enable":"primaries"}}}},"transient":{}}

After the command is complete, for each deployment you have for an ES cluster:
By default, the OpenShift Container Platform Elasticsearch cluster blocks rollouts to its nodes. Use the following command to allow rollouts and allow the pod to pick up the changes:

$ oc rollout resume deployment/<deployment-name>

For example:

$ oc rollout resume deployment/elasticsearch-cdm-0-1

Example output

deployment.extensions/elasticsearch-cdm-0-1 resumed

A new pod is deployed. After the pod has a ready container, you can move on to the next deployment.

$ oc get pods | grep elasticsearch-

Example output

NAME                                            READY   STATUS    RESTARTS   AGE
elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6k    2/2     Running   0          22h
elasticsearch-cdm-5ceex6ts-2-f799564cb-l9mj7    2/2     Running   0          22h
elasticsearch-cdm-5ceex6ts-3-585968dc68-k7kjr   2/2     Running   0          22h

After the deployments are complete, reset the pod to disallow rollouts:

$ oc rollout pause deployment/<deployment-name>

For example:

$ oc rollout pause deployment/elasticsearch-cdm-0-1

Example output

deployment.extensions/elasticsearch-cdm-0-1 paused

Check that the Elasticsearch cluster is in a green or yellow state:

$ oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=_cluster/health?pretty=true

Note: If you performed a rollout on the Elasticsearch pod you used in the previous commands, the pod no longer exists and you need a new pod name here.
For example:

$ oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=_cluster/health?pretty=true
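Output similar to the following; the values shown are illustrative, and the status field is the value referenced by callout 1 below:

{
  "cluster_name" : "elasticsearch",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 8,
  "active_shards" : 16,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}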
1. Make sure this parameter value is green or yellow before proceeding.
- If you changed the Elasticsearch configuration map, repeat these steps for each Elasticsearch pod.
After all the deployments for the cluster have been rolled out, re-enable shard balancing:
$ oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query="_cluster/settings" -XPUT -d '{ "persistent": { "cluster.routing.allocation.enable" : "all" } }'

For example:

$ oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query="_cluster/settings" -XPUT -d '{ "persistent": { "cluster.routing.allocation.enable" : "all" } }'

Example output
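An acknowledgement similar to the following, modeled on the earlier allocation command and shown here for illustration:

{"acknowledged":true,"persistent":{"cluster":{"routing":{"allocation":{"enable":"all"}}}},"transient":{}}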
3.3.9. Exposing the log store service as a route
By default, the log store that is deployed with cluster logging is not accessible from outside the logging cluster. You can enable a route with re-encryption termination for external access to the log store service for those tools that access its data.
Externally, you can access the log store by creating a reencrypt route, using your OpenShift Container Platform token and the installed log store CA certificate. Then, access a node that hosts the log store service with a cURL request that contains:

- The Authorization: Bearer ${token} header
- The Elasticsearch reencrypt route and an Elasticsearch API request.
Internally, you can access the log store service using the log store cluster IP, which you can get by using either of the following commands:
$ oc get service elasticsearch -o jsonpath={.spec.clusterIP} -n openshift-logging

Example output

172.30.183.229
$ oc get service elasticsearch -n openshift-logging

Example output

NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
elasticsearch   ClusterIP   172.30.183.229   <none>        9200/TCP   22h
You can check the cluster IP address with a command similar to the following:
$ oc exec elasticsearch-cdm-oplnhinv-1-5746475887-fj2f8 -n openshift-logging -- curl --tlsv1.2 --insecure -H "Authorization: Bearer ${token}" "https://172.30.183.229:9200/_cat/health"

Example output

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    29  100    29    0     0    108      0 --:--:-- --:--:-- --:--:--   108
Prerequisites
- Cluster logging and Elasticsearch must be installed.
- You must have access to the project in order to be able to access the logs.
Procedure
To expose the log store externally:
Change to the openshift-logging project:

$ oc project openshift-logging

Extract the CA certificate from the log store and write to the admin-ca file:

$ oc extract secret/elasticsearch --to=. --keys=admin-ca

Example output

admin-ca

Create the route for the log store service as a YAML file:
Create a YAML file with the following:
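A sketch of a reencrypt route for the elasticsearch service; the numbered comment marks where the CA certificate is added:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: elasticsearch
  namespace: openshift-logging
spec:
  host:
  to:
    kind: Service
    name: elasticsearch
  tls:
    termination: reencrypt
    destinationCACertificate: |   # 1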
1. Add the log store CA certificate or use the command in the next step. You do not have to set the spec.tls.key, spec.tls.certificate, and spec.tls.caCertificate parameters required by some reencrypt routes.
Run the following command to add the log store CA certificate to the route YAML you created:
$ cat ./admin-ca | sed -e "s/^/      /" >> <file-name>.yaml

Create the route:

$ oc create -f <file-name>.yaml

Example output

route.route.openshift.io/elasticsearch created
Check that the Elasticsearch service is exposed:
Get the token of this service account to be used in the request:
$ token=$(oc whoami -t)

Set the elasticsearch route you created as an environment variable.

$ routeES=`oc get route elasticsearch -o jsonpath={.spec.host}`

To verify the route was successfully created, run the following command that accesses Elasticsearch through the exposed route:

$ curl --tlsv1.2 --insecure -H "Authorization: Bearer ${token}" "https://${routeES}"

The response appears similar to the following:
Example output
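A sketch of a typical Elasticsearch root-endpoint response; the node name, cluster UUID, and version numbers are illustrative:

{
  "name" : "elasticsearch-cdm-i40ktba0-1",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "0eY-tJzcR3KOdpgeMJo-MQ",
  "version" : {
    "number" : "6.8.1",
    "build_flavor" : "oss",
    "build_type" : "zip",
    "build_hash" : "1fad4e1",
    "build_date" : "2019-06-18T13:16:52.517138Z",
    "build_snapshot" : false,
    "lucene_version" : "7.7.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}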