Chapter 3. Logging 5.6
3.1. Logging 5.6 Release Notes
The logging subsystem for Red Hat OpenShift is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility.
The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-X, where X is the version of logging you have installed.
3.1.1. Logging 5.6.11
This release includes OpenShift Logging Bug Fix Release 5.6.11.
3.1.1.1. Bug fixes
- Before this update, the LokiStack gateway cached authorized requests very broadly, which caused incorrect authorization results. With this update, the LokiStack gateway caches on a more fine-grained basis, which resolves this issue. (LOG-4435)
3.1.1.2. CVEs
3.1.2. Logging 5.6.8
This release includes OpenShift Logging Bug Fix Release 5.6.8.
3.1.2.1. Bug fixes
- Before this update, the Vector collector terminated unexpectedly when input match label values contained a / character within the ClusterLogForwarder. This update resolves the issue by quoting the match label, enabling the collector to start and collect logs. (LOG-4091)
- Before this update, when viewing logs within the OpenShift Container Platform web console, clicking the more data available option loaded more log entries only the first time it was clicked. With this update, more entries are loaded with each click. (OU-187)
- Before this update, when viewing logs within the OpenShift Container Platform web console, clicking the streaming option would only display the streaming logs message without showing the actual logs. With this update, both the message and the log stream are displayed correctly. (OU-189)
- Before this update, the Loki Operator reset errors in a way that made identifying configuration problems difficult to troubleshoot. With this update, errors persist until the configuration error is resolved. (LOG-4158)
- Before this update, clusters with more than 8,000 namespaces caused Elasticsearch to reject queries because the list of namespaces was larger than the http.max_header_size setting. With this update, the default value for header size has been increased, resolving the issue. (LOG-4278)
3.1.2.2. CVEs
3.1.3. Logging 5.6.7
This release includes OpenShift Logging Bug Fix Release 5.6.7.
3.1.3.1. Bug fixes
- Before this update, the LokiStack gateway returned label values for namespaces without applying the access rights of a user. With this update, the LokiStack gateway applies permissions to label value requests, resolving the issue. (LOG-3728)
- Before this update, the time field of log messages did not parse as structured.time by default in Fluentd when the messages included a timestamp. With this update, parsed log messages will include a structured.time field if the output destination supports it. (LOG-4090)
- Before this update, the LokiStack route configuration caused queries running longer than 30 seconds to time out. With this update, the LokiStack global and per-tenant queryTimeout settings affect the route timeout settings, resolving the issue. (LOG-4130)
- Before this update, LokiStack CRs with values defined for tenant limits but not global limits caused the Loki Operator to crash. With this update, the Operator is able to process LokiStack CRs with only tenant limits defined, resolving the issue. (LOG-4199)
- Before this update, the OpenShift Container Platform web console generated errors after an upgrade due to cached files of the prior version retained by the web browser. With this update, these files are no longer cached, resolving the issue. (LOG-4099)
- Before this update, Vector generated certificate errors when forwarding to the default Loki instance. With this update, logs can be forwarded without errors to Loki by using Vector. (LOG-4184)
- Before this update, the Cluster Logging Operator API required a certificate to be provided by a secret when the tls.insecureSkipVerify option was set to true. With this update, the Cluster Logging Operator API no longer requires a certificate to be provided by a secret in such cases. The following configuration has been added to the Operator’s CR:
  tls.verify_certificate = false
  tls.verify_hostname = false
  (LOG-4146)
3.1.3.2. CVEs
- CVE-2021-26341
- CVE-2021-33655
- CVE-2021-33656
- CVE-2022-1462
- CVE-2022-1679
- CVE-2022-1789
- CVE-2022-2196
- CVE-2022-2663
- CVE-2022-3028
- CVE-2022-3239
- CVE-2022-3522
- CVE-2022-3524
- CVE-2022-3564
- CVE-2022-3566
- CVE-2022-3567
- CVE-2022-3619
- CVE-2022-3623
- CVE-2022-3625
- CVE-2022-3627
- CVE-2022-3628
- CVE-2022-3707
- CVE-2022-3970
- CVE-2022-4129
- CVE-2022-20141
- CVE-2022-25147
- CVE-2022-25265
- CVE-2022-30594
- CVE-2022-36227
- CVE-2022-39188
- CVE-2022-39189
- CVE-2022-41218
- CVE-2022-41674
- CVE-2022-42703
- CVE-2022-42720
- CVE-2022-42721
- CVE-2022-42722
- CVE-2022-43750
- CVE-2022-47929
- CVE-2023-0394
- CVE-2023-0461
- CVE-2023-1195
- CVE-2023-1582
- CVE-2023-2491
- CVE-2023-22490
- CVE-2023-23454
- CVE-2023-23946
- CVE-2023-25652
- CVE-2023-25815
- CVE-2023-27535
- CVE-2023-29007
3.1.4. Logging 5.6.6
This release includes OpenShift Logging Bug Fix Release 5.6.6.
3.1.4.1. Bug fixes
- Before this update, dropping of messages occurred when configuring the ClusterLogForwarder custom resource to write to a Kafka output topic that matched a key in the payload, due to an error. With this update, the issue is resolved by prefixing Fluentd’s buffer name with an underscore. (LOG-3458)
- Before this update, premature closure of watches occurred in Fluentd when inodes were reused and there were multiple entries with the same inode. With this update, the issue of premature closure of watches in the Fluentd position file is resolved. (LOG-3629)
- Before this update, the detection of JavaScript client multi-line exceptions by Fluentd failed, resulting in printing them as multiple lines. With this update, exceptions are output as a single line, resolving the issue. (LOG-3761)
- Before this update, direct upgrades from the Red Hat OpenShift Logging Operator version 4.6 to version 5.6 were allowed, resulting in functionality issues. With this update, upgrades must be within two versions, resolving the issue. (LOG-3837)
- Before this update, metrics were not displayed for Splunk or Google Logging outputs. With this update, the issue is resolved by sending metrics for HTTP endpoints. (LOG-3932)
- Before this update, when the ClusterLogForwarder custom resource was deleted, collector pods remained running. With this update, collector pods do not run when log forwarding is not enabled. (LOG-4030)
- Before this update, a time range could not be selected in the OpenShift Container Platform web console by clicking and dragging over the logs histogram. With this update, clicking and dragging can be used to successfully select a time range. (LOG-4101)
- Before this update, Fluentd hash values for watch files were generated using the paths to log files, resulting in a non-unique hash upon log rotation. With this update, hash values for watch files are created with inode numbers, resolving the issue. (LOG-3633)
- Before this update, clicking on the Show Resources link in the OpenShift Container Platform web console did not produce any effect. With this update, the issue is resolved by fixing the functionality of the Show Resources link to toggle the display of resources for each log entry. (LOG-4118)
3.1.4.2. CVEs
3.1.5. Logging 5.6.5
This release includes OpenShift Logging Bug Fix Release 5.6.5.
3.1.5.1. Bug fixes
- Before this update, the template definitions prevented Elasticsearch from indexing some labels and namespace_labels, causing issues with data ingestion. With this update, the fix replaces dots and slashes in labels to ensure proper ingestion, effectively resolving the issue. (LOG-3419)
- Before this update, if the Logs page of the OpenShift Web Console failed to connect to the LokiStack, a generic error message was displayed, providing no additional context or troubleshooting suggestions. With this update, the error message has been enhanced to include more specific details and recommendations for troubleshooting. (LOG-3750)
- Before this update, time range formats were not validated, leading to errors selecting a custom date range. With this update, time formats are now validated, enabling users to select a valid range. If an invalid time range format is selected, an error message is displayed to the user. (LOG-3583)
- Before this update, when searching logs in Loki, even if the length of an expression did not exceed 5120 characters, the query would fail in many cases. With this update, query authorization label matchers have been optimized, resolving the issue. (LOG-3480)
- Before this update, the Loki Operator failed to produce a memberlist configuration that was sufficient for locating all the components when using a memberlist for private IPs. With this update, the fix ensures that the generated configuration includes the advertised port, allowing for successful lookup of all components. (LOG-4008)
3.1.5.2. CVEs
3.1.6. Logging 5.6.4
This release includes OpenShift Logging Bug Fix Release 5.6.4.
3.1.6.1. Bug fixes
- Before this update, when LokiStack was deployed as the log store, the logs generated by Loki pods were collected and sent to LokiStack. With this update, the logs generated by Loki are excluded from collection and will not be stored. (LOG-3280)
- Before this update, when the query editor on the Logs page of the OpenShift Web Console was empty, the drop-down menus did not populate. With this update, if an empty query is attempted, an error message is displayed and the drop-down menus now populate as expected. (LOG-3454)
- Before this update, when the tls.insecureSkipVerify option was set to true, the Cluster Logging Operator would generate incorrect configuration. As a result, the operator would fail to send data to Elasticsearch when attempting to skip certificate validation. With this update, the Cluster Logging Operator generates the correct TLS configuration even when tls.insecureSkipVerify is enabled. As a result, data can be sent successfully to Elasticsearch even when attempting to skip certificate validation. (LOG-3475)
- Before this update, when structured parsing was enabled and messages were forwarded to multiple destinations, they were not deep copied. This resulted in some of the received logs including the structured message, while others did not. With this update, the configuration generation has been modified to deep copy messages before JSON parsing. As a result, all received messages now have structured messages included, even when they are forwarded to multiple destinations. (LOG-3640)
- Before this update, if the collection field contained {}, it could result in the Operator crashing. With this update, the Operator will ignore this value, allowing the operator to continue running smoothly without interruption. (LOG-3733)
- Before this update, the nodeSelector attribute for the Gateway component of LokiStack did not have any effect. With this update, the nodeSelector attribute functions as expected. (LOG-3783)
- Before this update, the static LokiStack memberlist configuration relied solely on private IP networks. As a result, when the OpenShift Container Platform cluster pod network was configured with a public IP range, the LokiStack pods would crashloop. With this update, the LokiStack administrator now has the option to use the pod network for the memberlist configuration. This resolves the issue and prevents the LokiStack pods from entering a crashloop state when the OpenShift Container Platform cluster pod network is configured with a public IP range. (LOG-3814)
- Before this update, if the tls.insecureSkipVerify field was set to true, the Cluster Logging Operator would generate an incorrect configuration. As a result, the Operator would fail to send data to Elasticsearch when attempting to skip certificate validation. With this update, the Operator generates the correct TLS configuration even when tls.insecureSkipVerify is enabled. As a result, data can be sent successfully to Elasticsearch even when attempting to skip certificate validation. (LOG-3838)
- Before this update, if the Cluster Logging Operator (CLO) was installed without the Elasticsearch Operator, the CLO pod would continuously display an error message related to the deletion of Elasticsearch. With this update, the CLO now performs additional checks before displaying any error messages. As a result, error messages related to Elasticsearch deletion are no longer displayed in the absence of the Elasticsearch Operator. (LOG-3763)
3.1.6.2. CVEs
3.1.7. Logging 5.6.3
This release includes OpenShift Logging Bug Fix Release 5.6.3.
3.1.7.1. Bug fixes
- Before this update, the operator stored gateway tenant secret information in a config map. With this update, the operator stores this information in a secret. (LOG-3717)
- Before this update, the Fluentd collector did not capture OAuth login events stored in /var/log/auth-server/audit.log. With this update, Fluentd captures these OAuth login events, resolving the issue. (LOG-3729)
3.1.7.2. CVEs
3.1.8. Logging 5.6.2
This release includes OpenShift Logging Bug Fix Release 5.6.2.
3.1.8.1. Bug fixes
- Before this update, the collector did not set level fields correctly based on priority for systemd logs. With this update, level fields are set correctly. (LOG-3429)
- Before this update, the Operator incorrectly generated incompatibility warnings on OpenShift Container Platform 4.12 or later. With this update, the Operator max OpenShift Container Platform version value has been corrected, resolving the issue. (LOG-3584)
- Before this update, creating a ClusterLogForwarder custom resource (CR) with an output value of default did not generate any errors. With this update, an error warning that this value is invalid generates appropriately. (LOG-3437)
- Before this update, when the ClusterLogForwarder custom resource (CR) had multiple pipelines configured with one output set as default, the collector pods restarted. With this update, the logic for output validation has been corrected, resolving the issue. (LOG-3559)
- Before this update, collector pods restarted after being created. With this update, the deployed collector does not restart on its own. (LOG-3608)
- Before this update, patch releases removed previous versions of the Operators from the catalog. This made installing the old versions impossible. This update changes bundle configurations so that previous releases of the same minor version stay in the catalog. (LOG-3635)
3.1.8.2. CVEs
3.1.9. Logging 5.6.1
This release includes OpenShift Logging Bug Fix Release 5.6.1.
3.1.9.1. Bug fixes
- Before this update, the compactor would report TLS certificate errors from communications with the querier when retention was active. With this update, the compactor and querier no longer communicate erroneously over HTTP. (LOG-3494)
- Before this update, the Loki Operator would not retry setting the status of the LokiStack CR, which caused stale status information. With this update, the Operator retries status information updates on conflict. (LOG-3496)
- Before this update, the Loki Operator Webhook server caused TLS errors when the kube-apiserver-operator Operator checked the webhook validity. With this update, the Loki Operator Webhook PKI is managed by the Operator Lifecycle Manager (OLM), resolving the issue. (LOG-3510)
- Before this update, the LokiStack Gateway Labels Enforcer generated parsing errors for valid LogQL queries when using combined label filters with boolean expressions. With this update, the LokiStack LogQL implementation supports label filters with boolean expressions, resolving the issue. (LOG-3441), (LOG-3397)
- Before this update, records written to Elasticsearch would fail if multiple label keys had the same prefix and some keys included dots. With this update, underscores replace dots in label keys, resolving the issue. (LOG-3463)
- Before this update, the Red Hat OpenShift Logging Operator was not available for OpenShift Container Platform 4.10 clusters because of an incompatibility between the OpenShift Container Platform console and the logging-view-plugin. With this update, the plugin is properly integrated with the OpenShift Container Platform 4.10 admin console. (LOG-3447)
- Before this update, the reconciliation of the ClusterLogForwarder custom resource would incorrectly report a degraded status for pipelines that reference the default log store. With this update, the pipeline validates properly. (LOG-3477)
3.1.9.2. CVEs
3.1.10. Logging 5.6.0
This release includes OpenShift Logging Release 5.6.
3.1.10.1. Deprecation notice
In logging version 5.6, Fluentd is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to Fluentd, you can use Vector instead.
3.1.10.2. Enhancements
- With this update, Logging is compliant with OpenShift Container Platform cluster-wide cryptographic policies. (LOG-895)
- With this update, you can declare per-tenant, per-stream, and global retention policies through the LokiStack custom resource, ordered by priority. (LOG-2695)
- With this update, Splunk is an available output option for log forwarding. (LOG-2913)
- With this update, Vector replaces Fluentd as the default Collector. (LOG-2222)
- With this update, the Developer role can access the per-project workload logs they are assigned to within the Log Console Plugin on clusters running OpenShift Container Platform 4.11 and higher. (LOG-3388)
- With this update, logs from any source contain a field openshift.cluster_id, the unique identifier of the cluster in which the Operator is deployed. You can view the clusterID value with the following command. (LOG-2715)
$ oc get clusterversion/version -o jsonpath='{.spec.clusterID}{"\n"}'
3.1.10.3. Known Issues
- Before this update, Elasticsearch would reject logs if multiple label keys had the same prefix and some keys included the . character. The fix addresses this limitation of Elasticsearch by replacing . in the label keys with _. As a workaround for this issue, remove the labels that cause errors, or add a namespace to the label. (LOG-3463)
3.1.10.4. Bug fixes
- Before this update, if you deleted the Kibana Custom Resource, the OpenShift Container Platform web console continued displaying a link to Kibana. With this update, removing the Kibana Custom Resource also removes that link. (LOG-2993)
- Before this update, a user was not able to view the application logs of namespaces they have access to. With this update, the Loki Operator automatically creates a cluster role and cluster role binding allowing users to read application logs. (LOG-3072)
- Before this update, the Operator removed any custom outputs defined in the ClusterLogForwarder custom resource when using LokiStack as the default log storage. With this update, the Operator merges custom outputs with the default outputs when processing the ClusterLogForwarder custom resource. (LOG-3090)
- Before this update, the CA key was used as the volume name for mounting the CA into Loki, causing error states when the CA key included non-conforming characters, such as dots. With this update, the volume name is standardized to an internal string, which resolves the issue. (LOG-3331)
- Before this update, a default value set within the LokiStack Custom Resource Definition caused an inability to create a LokiStack instance without a ReplicationFactor of 1. With this update, the operator sets the actual value for the size used. (LOG-3296)
- Before this update, Vector parsed the message field when JSON parsing was enabled without also defining structuredTypeKey or structuredTypeName values. With this update, a value is required for either structuredTypeKey or structuredTypeName when writing structured logs to Elasticsearch. (LOG-3195)
- Before this update, the secret creation component of the Elasticsearch Operator modified internal secrets constantly. With this update, the existing secret is properly handled. (LOG-3161)
- Before this update, the Operator could enter a loop of removing and recreating the collector daemonset while the Elasticsearch or Kibana deployments changed their status. With this update, a fix in the status handling of the Operator resolves the issue. (LOG-3157)
- Before this update, Kibana had a fixed 24h OAuth cookie expiration time, which resulted in 401 errors in Kibana whenever the accessTokenInactivityTimeout field was set to a value lower than 24h. With this update, Kibana’s OAuth cookie expiration time synchronizes to the accessTokenInactivityTimeout, with a default value of 24h. (LOG-3129)
- Before this update, the Operators’ general pattern for reconciling resources was to try and create before attempting to get or update, which would lead to constant HTTP 409 responses after creation. With this update, Operators first attempt to retrieve an object and only create or update it if it is either missing or not as specified. (LOG-2919)
- Before this update, the .level and .structure.level fields in Fluentd could contain different values. With this update, the values are the same for each field. (LOG-2819)
- Before this update, the Operator did not wait for the population of the trusted CA bundle and deployed the collector a second time once the bundle updated. With this update, the Operator waits briefly to see if the bundle has been populated before it continues the collector deployment. (LOG-2789)
- Before this update, logging telemetry info appeared twice when reviewing metrics. With this update, logging telemetry info displays as expected. (LOG-2315)
- Before this update, Fluentd pod logs contained a warning message after enabling the JSON parsing addition. With this update, that warning message does not appear. (LOG-1806)
- Before this update, the must-gather script did not complete because oc needs a folder with write permission to build its cache. With this update, oc has write permissions to a folder, and the must-gather script completes successfully. (LOG-3446)
- Before this update, the log collector SCC could be superseded by other SCCs on the cluster, rendering the collector unusable. This update sets the priority of the log collector SCC so that it takes precedence over the others. (LOG-3235)
- Before this update, Vector was missing the field sequence, which was added to Fluentd as a way to deal with a lack of actual nanosecond precision. With this update, the field openshift.sequence has been added to the event logs. (LOG-3106)
3.1.10.5. CVEs
3.2. Getting started with logging 5.6
This overview of the logging deployment process is provided for ease of reference. It is not a substitute for full documentation. For new installations, Vector and LokiStack are recommended.
As of logging version 5.5, you have the option of choosing from Fluentd or Vector collector implementations, and Elasticsearch or LokiStack as log stores. Documentation for logging is in the process of being updated to reflect these underlying component changes.
The logging subsystem for Red Hat OpenShift is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility.
Prerequisites
- LogStore preference: Elasticsearch or LokiStack
- Collector implementation preference: Fluentd or Vector
- Credentials for your log forwarding outputs
As of logging version 5.4.3 the Elasticsearch Operator is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to using the Elasticsearch Operator to manage the default log storage, you can use the Loki Operator.
Install the Operator for the log store that you want to use.

- For Elasticsearch, install the OpenShift Elasticsearch Operator.
- For LokiStack, install the Loki Operator. Then, create a LokiStack custom resource (CR) instance.
- Install the Red Hat OpenShift Logging Operator.
- Create a ClusterLogging custom resource (CR) instance, and select your collector implementation.
  Note: As of logging version 5.6, Fluentd is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to Fluentd, you can use Vector instead.
- Create a ClusterLogForwarder custom resource (CR) instance (see the sketch after this list).
- Create a secret for the selected output pipeline.
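The last two steps fit together as follows. This is a minimal sketch, assuming forwarding to an external Elasticsearch instance; the secret name, output name, URL, and credential values are placeholders to replace with your own:

apiVersion: v1
kind: Secret
metadata:
  name: es-secret                  # placeholder secret name
  namespace: openshift-logging
stringData:
  username: <username>
  password: <password>
---
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: es-external              # placeholder output name
    type: elasticsearch
    url: https://elasticsearch.example.com:9200   # placeholder endpoint
    secret:
      name: es-secret
  pipelines:
  - name: app-to-external
    inputRefs:
    - application
    outputRefs:
    - es-external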
3.3. Understanding logging
The logging subsystem consists of these logical components:
- Collector - Reads container log data from each node and forwards log data to configured outputs.
- Store - Stores log data for analysis; the default output for the forwarder.
- Visualization - Graphical interface for searching, querying, and viewing stored logs.
These components are managed by Operators and Custom Resource (CR) YAML files.
The logging subsystem for Red Hat OpenShift collects container logs and node logs. These are categorized into types:
- application - Container logs generated by non-infrastructure containers.
- infrastructure - Container logs from namespaces kube-* and openshift-*, and node logs from journald.
- audit - Logs from auditd, kube-apiserver, openshift-apiserver, and ovn if enabled.
The logging collector is a daemonset that deploys pods to each OpenShift Container Platform node. System and infrastructure logs are generated by journald log messages from the operating system, the container runtime, and OpenShift Container Platform.
Container logs are generated by containers running in pods on the cluster. Each container generates a separate log stream. The collector collects the logs from these sources and forwards them internally or externally as configured in the ClusterLogForwarder custom resource.
3.4. Administering your logging deployment
3.4.1. Deploying Red Hat OpenShift Logging Operator using the web console
You can use the OpenShift Container Platform web console to deploy the Red Hat OpenShift Logging Operator.
The logging subsystem for Red Hat OpenShift is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility.
Procedure
To deploy the Red Hat OpenShift Logging Operator using the OpenShift Container Platform web console:
Install the Red Hat OpenShift Logging Operator:
- In the OpenShift Container Platform web console, click Operators → OperatorHub.
- Type Logging in the Filter by keyword field.
- Choose Red Hat OpenShift Logging from the list of available Operators, and click Install.
- Select stable or stable-5.y as the Update Channel.
  Note: The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-X, where X is the version of logging you have installed.
- Ensure that A specific namespace on the cluster is selected under Installation Mode.
- Ensure that Operator recommended namespace is openshift-logging under Installed Namespace.
- Select Enable Operator recommended cluster monitoring on this Namespace.
Select an option for Update approval.
- The Automatic option allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available.
- The Manual option requires a user with appropriate credentials to approve the Operator update.
- Select Enable or Disable for the Console plugin.
- Click Install.
- Verify that the Red Hat OpenShift Logging Operator is installed by switching to the Operators → Installed Operators page.
- Ensure that Red Hat OpenShift Logging is listed in the openshift-logging project with a Status of Succeeded.
- Create a ClusterLogging instance.
  Note: The form view of the web console does not include all available options. The YAML view is recommended for completing your setup; a sketch follows this procedure.
- In the collection section, select a Collector Implementation.
  Note: As of logging version 5.6, Fluentd is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to Fluentd, you can use Vector instead.
- In the logStore section, select a type.
  Note: As of logging version 5.4.3, the Elasticsearch Operator is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to using the Elasticsearch Operator to manage the default log storage, you can use the Loki Operator.
- Click Create.
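For reference, a minimal ClusterLogging CR for the YAML view might look like the following sketch, assuming the Vector collector and LokiStack log store choices described above; adjust the type values to match your selections:

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: lokistack        # or: elasticsearch
    lokistack:
      name: logging-loki
  collection:
    type: vector           # or: fluentd (deprecated)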
3.4.2. Deploying the Loki Operator using the web console
You can use the OpenShift Container Platform web console to install the Loki Operator.
Prerequisites
- Supported Log Store (AWS S3, Google Cloud Storage, Azure, Swift, Minio, OpenShift Data Foundation)
Procedure
To install the Loki Operator using the OpenShift Container Platform web console:
- In the OpenShift Container Platform web console, click Operators → OperatorHub.
- Type Loki in the Filter by keyword field.
- Choose Loki Operator from the list of available Operators, and click Install.
- Select stable or stable-5.y as the Update Channel.
  Note: The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-X, where X is the version of logging you have installed.
- Ensure that All namespaces on the cluster is selected under Installation Mode.
- Ensure that openshift-operators-redhat is selected under Installed Namespace.
- Select Enable Operator recommended cluster monitoring on this Namespace.
  This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace.
- Select an option for Update approval.
- The Automatic option allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available.
- The Manual option requires a user with appropriate credentials to approve the Operator update.
- Click Install.
- Verify that the Loki Operator is installed by switching to the Operators → Installed Operators page.
- Ensure that Loki Operator is listed with Status as Succeeded in all the projects.
- Create a Secret YAML file that uses the access_key_id and access_key_secret fields to specify your credentials and bucketnames, endpoint, and region to define the object storage location. AWS is used in the following example:

apiVersion: v1
kind: Secret
metadata:
  name: logging-loki-s3
  namespace: openshift-logging
stringData:
  access_key_id: AKIAIOSFODNN7EXAMPLE
  access_key_secret: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
  bucketnames: s3-bucket-name
  endpoint: https://s3.eu-central-1.amazonaws.com
  region: eu-central-1
- Select Create instance under LokiStack on the Details tab. Then select YAML view. Paste in the following template, substituting values where appropriate:
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki 1
  namespace: openshift-logging
spec:
  size: 1x.small 2
  storage:
    schemas:
    - version: v12
      effectiveDate: '2022-06-01'
    secret:
      name: logging-loki-s3 3
      type: s3 4
  storageClassName: <storage_class_name> 5
  tenants:
    mode: openshift-logging
1. Name should be logging-loki.
2. Select your Loki deployment size.
3. Define the secret used for your log storage.
4. Define the corresponding storage type.
5. Enter the name of an existing storage class for temporary storage. For best performance, specify a storage class that allocates block storage. Available storage classes for your cluster can be listed by using oc get storageclasses.

- Apply the configuration:
oc apply -f logging-loki.yaml
- Create or edit a ClusterLogging CR:

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: lokistack
    lokistack:
      name: logging-loki
  collection:
    type: vector
Apply the configuration:
oc apply -f cr-lokistack.yaml
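As an optional sanity check (pod names vary by cluster), you can confirm that the collector and LokiStack component pods reach the Running state:

oc get pods -n openshift-logging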
3.4.3. Installing from OperatorHub using the CLI
Instead of using the OpenShift Container Platform web console, you can install an Operator from OperatorHub using the CLI. Use the oc command to create or update a Subscription object.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
- The oc command installed on your local system.
Procedure
- View the list of Operators available to the cluster from OperatorHub:
$ oc get packagemanifests -n openshift-marketplace
Example output
NAME                                CATALOG               AGE
3scale-operator                     Red Hat Operators     91m
advanced-cluster-management         Red Hat Operators     91m
amq7-cert-manager                   Red Hat Operators     91m
...
couchbase-enterprise-certified      Certified Operators   91m
crunchy-postgres-operator           Certified Operators   91m
mongodb-enterprise                  Certified Operators   91m
...
etcd                                Community Operators   91m
jaeger                              Community Operators   91m
kubefed                             Community Operators   91m
...
Note the catalog for your desired Operator.
- Inspect your desired Operator to verify its supported install modes and available channels:
$ oc describe packagemanifests <operator_name> -n openshift-marketplace
An Operator group, defined by an OperatorGroup object, selects target namespaces in which to generate required RBAC access for all Operators in the same namespace as the Operator group.

The namespace to which you subscribe the Operator must have an Operator group that matches the install mode of the Operator, either the AllNamespaces or SingleNamespace mode. If the Operator you intend to install uses the AllNamespaces mode, then the openshift-operators namespace already has an appropriate Operator group in place.

However, if the Operator uses the SingleNamespace mode and you do not already have an appropriate Operator group in place, you must create one.

Note: The web console version of this procedure handles the creation of the OperatorGroup and Subscription objects automatically behind the scenes for you when choosing SingleNamespace mode.

- Create an OperatorGroup object YAML file, for example operatorgroup.yaml:

Example OperatorGroup object

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: <operatorgroup_name>
  namespace: <namespace>
spec:
  targetNamespaces:
  - <namespace>
- Create the OperatorGroup object:

$ oc apply -f operatorgroup.yaml
- Create a Subscription object YAML file to subscribe a namespace to an Operator, for example sub.yaml:

Example Subscription object

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: <subscription_name>
  namespace: openshift-operators 1
spec:
  channel: <channel_name> 2
  name: <operator_name> 3
  source: redhat-operators 4
  sourceNamespace: openshift-marketplace 5
  config:
    env: 6
    - name: ARGS
      value: "-v=10"
    envFrom: 7
    - secretRef:
        name: license-secret
    volumes: 8
    - name: <volume_name>
      configMap:
        name: <configmap_name>
    volumeMounts: 9
    - mountPath: <directory_name>
      name: <volume_name>
    tolerations: 10
    - operator: "Exists"
    resources: 11
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
    nodeSelector: 12
      foo: bar
1. For AllNamespaces install mode usage, specify the openshift-operators namespace. Otherwise, specify the relevant single namespace for SingleNamespace install mode usage.
2. Name of the channel to subscribe to.
3. Name of the Operator to subscribe to.
4. Name of the catalog source that provides the Operator.
5. Namespace of the catalog source. Use openshift-marketplace for the default OperatorHub catalog sources.
6. The env parameter defines a list of Environment Variables that must exist in all containers in the pod created by OLM.
7. The envFrom parameter defines a list of sources to populate Environment Variables in the container.
8. The volumes parameter defines a list of Volumes that must exist on the pod created by OLM.
9. The volumeMounts parameter defines a list of VolumeMounts that must exist in all containers in the pod created by OLM. If a volumeMount references a volume that does not exist, OLM fails to deploy the Operator.
10. The tolerations parameter defines a list of Tolerations for the pod created by OLM.
11. The resources parameter defines resource constraints for all the containers in the pod created by OLM.
12. The nodeSelector parameter defines a NodeSelector for the pod created by OLM.
- Create the Subscription object:

$ oc apply -f sub.yaml
At this point, OLM is now aware of the selected Operator. A cluster service version (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation.
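As a generic check, not tied to any particular Operator, you can list the CSVs in the target namespace and confirm that the new entry reaches the Succeeded phase:

$ oc get csv -n <namespace>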
3.4.4. Deleting Operators from a cluster using the web console
Cluster administrators can delete installed Operators from a selected namespace by using the web console.
Prerequisites
- Access to an OpenShift Container Platform cluster web console using an account with cluster-admin permissions.
Procedure
- Navigate to the Operators → Installed Operators page.
- Scroll or enter a keyword into the Filter by name field to find the Operator that you want to remove. Then, click on it.
- On the right side of the Operator Details page, select Uninstall Operator from the Actions list.
  An Uninstall Operator? dialog box is displayed.
- Select Uninstall to remove the Operator, Operator deployments, and pods. Following this action, the Operator stops running and no longer receives updates.
Note: This action does not remove resources managed by the Operator, including custom resource definitions (CRDs) and custom resources (CRs). Dashboards and navigation items enabled by the web console and off-cluster resources that continue to run might need manual clean up. To remove these after uninstalling the Operator, you might need to manually delete the Operator CRDs.
3.4.5. Deleting Operators from a cluster using the CLI
Cluster administrators can delete installed Operators from a selected namespace by using the CLI.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
- The oc command installed on your workstation.
Procedure
- Check the current version of the subscribed Operator (for example, jaeger) in the currentCSV field:

$ oc get subscription jaeger -n openshift-operators -o yaml | grep currentCSV
Example output
currentCSV: jaeger-operator.v1.8.2
- Delete the subscription (for example, jaeger):

$ oc delete subscription jaeger -n openshift-operators
Example output
subscription.operators.coreos.com "jaeger" deleted
- Delete the CSV for the Operator in the target namespace using the currentCSV value from the previous step:

$ oc delete clusterserviceversion jaeger-operator.v1.8.2 -n openshift-operators
Example output
clusterserviceversion.operators.coreos.com "jaeger-operator.v1.8.2" deleted
3.5. Logging References
3.5.1. Collector features
Output | Protocol | Tested with | Fluentd | Vector |
---|---|---|---|---|
Cloudwatch | REST over HTTP(S) | | ✓ | ✓ |
Elasticsearch v6 | | v6.8.1 | ✓ | ✓ |
Elasticsearch v7 | | v7.12.2, 7.17.7 | ✓ | ✓ |
Elasticsearch v8 | | v8.4.3 | | ✓ |
Fluent Forward | Fluentd forward v1 | Fluentd 1.14.6, Logstash 7.10.1 | ✓ | |
Google Cloud Logging | | | | ✓ |
HTTP | HTTP 1.1 | Fluentd 1.14.6, Vector 0.21 | | |
Kafka | Kafka 0.11 | Kafka 2.4.1, 2.7.0, 3.3.1 | ✓ | ✓ |
Loki | REST over HTTP(S) | Loki 2.3.0, 2.7 | ✓ | ✓ |
Splunk | HEC | v8.2.9, 9.0.0 | | ✓ |
Syslog | RFC3164, RFC5424 | Rsyslog 8.37.0-9.el7 | ✓ | |
Feature | Fluentd | Vector |
---|---|---|
App container logs | ✓ | ✓ |
App-specific routing | ✓ | ✓ |
App-specific routing by namespace | ✓ | ✓ |
Infra container logs | ✓ | ✓ |
Infra journal logs | ✓ | ✓ |
Kube API audit logs | ✓ | ✓ |
OpenShift API audit logs | ✓ | ✓ |
Open Virtual Network (OVN) audit logs | ✓ | ✓ |
Feature | Fluentd | Vector |
---|---|---|
Elasticsearch certificates | ✓ | ✓ |
Elasticsearch username / password | ✓ | ✓ |
Cloudwatch keys | ✓ | ✓ |
Cloudwatch STS | ✓ | ✓ |
Kafka certificates | ✓ | ✓ |
Kafka username / password | ✓ | ✓ |
Kafka SASL | ✓ | ✓ |
Loki bearer token | ✓ | ✓ |
Feature | Fluentd | Vector |
---|---|---|
Viaq data model - app | ✓ | ✓ |
Viaq data model - infra | ✓ | ✓ |
Viaq data model - infra(journal) | ✓ | ✓ |
Viaq data model - Linux audit | ✓ | ✓ |
Viaq data model - kube-apiserver audit | ✓ | ✓ |
Viaq data model - OpenShift API audit | ✓ | ✓ |
Viaq data model - OVN | ✓ | ✓ |
Loglevel Normalization | ✓ | ✓ |
JSON parsing | ✓ | ✓ |
Structured Index | ✓ | ✓ |
Multiline error detection | ✓ | |
Multicontainer / split indices | ✓ | ✓ |
Flatten labels | ✓ | ✓ |
CLF static labels | ✓ | ✓ |
Feature | Fluentd | Vector |
---|---|---|
Fluentd readlinelimit | ✓ | |
Fluentd buffer | ✓ | |
- chunklimitsize | ✓ | |
- totallimitsize | ✓ | |
- overflowaction | ✓ | |
- flushthreadcount | ✓ | |
- flushmode | ✓ | |
- flushinterval | ✓ | |
- retrywait | ✓ | |
- retrytype | ✓ | |
- retrymaxinterval | ✓ | |
- retrytimeout | ✓ |
Feature | Fluentd | Vector |
---|---|---|
Metrics | ✓ | ✓ |
Dashboard | ✓ | ✓ |
Alerts | ✓ |
Feature | Fluentd | Vector |
---|---|---|
Global proxy support | ✓ | ✓ |
x86 support | ✓ | ✓ |
ARM support | ✓ | ✓ |
IBM Power support | ✓ | ✓ |
IBM Z support | ✓ | ✓ |
IPv6 support | ✓ | ✓ |
Log event buffering | ✓ | |
Disconnected Cluster | ✓ | ✓ |
3.5.2. Logging 5.6 API reference
3.5.2.1. ClusterLogForwarder
ClusterLogForwarder is an API to configure forwarding logs.
You configure forwarding by specifying a list of pipelines, which forward from a set of named inputs to a set of named outputs.
There are built-in input names for common log categories, and you can define custom inputs to do additional filtering.
There is a built-in output name for the default openshift log store, but you can define your own outputs with a URL and other connection information to forward logs to other stores or processors, inside or outside the cluster.
For more details see the documentation on the API fields.
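To make the relationship between inputs, outputs, and pipelines concrete, here is a minimal sketch that forwards the three built-in input categories to the built-in default output; the pipeline name is illustrative:

apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  pipelines:
  - name: all-to-default       # illustrative pipeline name
    inputRefs:                 # built-in input names
    - application
    - infrastructure
    - audit
    outputRefs:
    - default                  # built-in output: the default log store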
Property | Type | Description |
---|---|---|
spec | object | Specification of the desired behavior of ClusterLogForwarder |
status | object | Status of the ClusterLogForwarder |
3.5.2.1.1. .spec
3.5.2.1.1.1. Description
ClusterLogForwarderSpec defines how logs should be forwarded to remote targets.
3.5.2.1.1.1.1. Type
- object
Property | Type | Description |
---|---|---|
inputs | array | (optional) Inputs are named filters for log messages to be forwarded. |
outputDefaults | object | (optional) DEPRECATED OutputDefaults specify forwarder config explicitly for the default store. |
outputs | array | (optional) Outputs are named destinations for log messages. |
pipelines | array | Pipelines forward the messages selected by a set of inputs to a set of outputs. |
3.5.2.1.2. .spec.inputs[]
3.5.2.1.2.1. Description
InputSpec defines a selector of log messages.
3.5.2.1.2.1.1. Type
- array
Property | Type | Description |
---|---|---|
application | object | (optional) Application, if present, enables a named set of application logs. |
name | string | Name used to refer to the input of a pipeline. |
3.5.2.1.3. .spec.inputs[].application
3.5.2.1.3.1. Description
Application log selector. All conditions in the selector must be satisfied (logical AND) to select logs.
3.5.2.1.3.1.1. Type
- object
Property | Type | Description |
---|---|---|
namespaces | array | (optional) Namespaces from which to collect application logs. |
selector | object | (optional) Selector for logs from pods with matching labels. |
3.5.2.1.4. .spec.inputs[].application.namespaces[]
3.5.2.1.4.1. Description
3.5.2.1.4.1.1. Type
- array
3.5.2.1.5. .spec.inputs[].application.selector
3.5.2.1.5.1. Description
A label selector is a label query over a set of resources.
3.5.2.1.5.1.1. Type
- object
Property | Type | Description |
---|---|---|
matchLabels | object | (optional) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions. |
3.5.2.1.6. .spec.inputs[].application.selector.matchLabels
3.5.2.1.6.1. Description
3.5.2.1.6.1.1. Type
- object
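For illustration, a custom input that combines a namespace list with a label selector might look like the following ClusterLogForwarder spec fragment; the input name, namespace, and label are hypothetical:

spec:
  inputs:
  - name: my-app-logs          # hypothetical input name
    application:
      namespaces:
      - my-project             # hypothetical namespace
      selector:
        matchLabels:
          app: my-app          # hypothetical label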
3.5.2.1.7. .spec.outputDefaults
3.5.2.1.7.1. Description
3.5.2.1.7.1.1. Type
- object
Property | Type | Description |
---|---|---|
elasticsearch | object | (optional) Elasticsearch OutputSpec default values |
3.5.2.1.8. .spec.outputDefaults.elasticsearch
3.5.2.1.8.1. Description
ElasticsearchStructuredSpec is spec related to structured log changes to determine the elasticsearch index
3.5.2.1.8.1.1. Type
- object
Property | Type | Description |
---|---|---|
enableStructuredContainerLogs | bool | (optional) EnableStructuredContainerLogs enables multi-container structured logs to allow |
structuredTypeKey | string | (optional) StructuredTypeKey specifies the metadata key to be used as name of elasticsearch index |
structuredTypeName | string | (optional) StructuredTypeName specifies the name of elasticsearch schema |
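A hedged fragment showing these fields together; the structuredTypeKey value and the structuredTypeName fallback are illustrative choices, not required values:

spec:
  outputDefaults:
    elasticsearch:
      structuredTypeKey: kubernetes.labels.logFormat   # illustrative metadata key
      structuredTypeName: nologformat                  # illustrative fallback name
      enableStructuredContainerLogs: true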
3.5.2.1.9. .spec.outputs[]
3.5.2.1.9.1. Description
Output defines a destination for log messages.
3.5.2.1.9.1.1. Type
- array
Property | Type | Description |
---|---|---|
syslog | object | (optional) |
fluentdForward | object | (optional) |
elasticsearch | object | (optional) |
kafka | object | (optional) |
cloudwatch | object | (optional) |
loki | object | (optional) |
googleCloudLogging | object | (optional) |
splunk | object | (optional) |
name | string | Name used to refer to the output from a pipeline. |
secret | object | (optional) Secret for authentication. |
tls | object | TLS contains settings for controlling options on TLS client connections. |
type | string | Type of output plugin. |
url | string | (optional) URL to send log records to. |
3.5.2.1.10. .spec.outputs[].secret
3.5.2.1.10.1. Description
OutputSecretSpec is a secret reference containing name only, no namespace.
3.5.2.1.10.1.1. Type
- object
Property | Type | Description |
---|---|---|
name | string | Name of a secret in the namespace configured for log forwarder secrets. |
3.5.2.1.11. .spec.outputs[].tls
3.5.2.1.11.1. Description
OutputTLSSpec contains options for TLS connections that are agnostic to the output type.
3.5.2.1.11.1.1. Type
- object
Property | Type | Description |
---|---|---|
insecureSkipVerify | bool | If InsecureSkipVerify is true, then the TLS client will be configured to ignore errors with certificates. |
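For example, a hypothetical output that skips certificate validation could be declared as follows; the output name and URL are placeholders, and skipping verification is best limited to test environments:

spec:
  outputs:
  - name: insecure-forward           # placeholder output name
    type: elasticsearch
    url: https://example.com:9200    # placeholder endpoint
    tls:
      insecureSkipVerify: true       # ignore certificate errors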
3.5.2.1.12. .spec.pipelines[]
3.5.2.1.12.1. Description
PipelinesSpec link a set of inputs to a set of outputs.
3.5.2.1.12.1.1. Type
- array
Property | Type | Description |
---|---|---|
detectMultilineErrors | bool | (optional) DetectMultilineErrors enables multiline error detection of container logs |
inputRefs | array | InputRefs lists the names (name) of inputs to this pipeline. |
labels | object | (optional) Labels applied to log records passing through this pipeline. |
name | string | (optional) Name is optional, but must be unique in the pipelines list if provided. |
outputRefs | array | OutputRefs lists the names (name) of outputs from this pipeline. |
parse | string | (optional) Parse enables parsing of log entries into structured logs |
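A pipeline using several of these optional fields might look like the following sketch; the pipeline name and static label are illustrative:

spec:
  pipelines:
  - name: app-pipeline           # optional, must be unique if set
    inputRefs:
    - application
    outputRefs:
    - default
    labels:
      team: payments             # illustrative static label
    parse: json                  # parse entries into structured logs
    detectMultilineErrors: true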
3.5.2.1.13. .spec.pipelines[].inputRefs[]
3.5.2.1.13.1. Description
3.5.2.1.13.1.1. Type
- array
3.5.2.1.14. .spec.pipelines[].labels
3.5.2.1.14.1. Description
3.5.2.1.14.1.1. Type
- object
3.5.2.1.15. .spec.pipelines[].outputRefs[]
3.5.2.1.15.1. Description
3.5.2.1.15.1.1. Type
- array
3.5.2.1.16. .status
3.5.2.1.16.1. Description
ClusterLogForwarderStatus defines the observed state of ClusterLogForwarder
3.5.2.1.16.1.1. Type
- object
Property | Type | Description |
---|---|---|
conditions | object | Conditions of the log forwarder. |
inputs | Conditions | Inputs maps input name to condition of the input. |
outputs | Conditions | Outputs maps output name to condition of the output. |
pipelines | Conditions | Pipelines maps pipeline name to condition of the pipeline. |
3.5.2.1.17. .status.conditions
3.5.2.1.17.1. Description
3.5.2.1.17.1.1. Type
- object
3.5.2.1.18. .status.inputs
3.5.2.1.18.1. Description
3.5.2.1.18.1.1. Type
- Conditions
3.5.2.1.19. .status.outputs
3.5.2.1.19.1. Description
3.5.2.1.19.1.1. Type
- Conditions
3.5.2.1.20. .status.pipelines
3.5.2.1.20.1. Description
3.5.2.1.20.1.1. Type
- Conditions

ClusterLogging

A Red Hat OpenShift Logging instance. ClusterLogging is the Schema for the clusterloggings API.
Property | Type | Description |
---|---|---|
spec | object | Specification of the desired behavior of ClusterLogging |
status | object | Status defines the observed state of ClusterLogging |
3.5.2.1.21. .spec
3.5.2.1.21.1. Description
ClusterLoggingSpec defines the desired state of ClusterLogging
3.5.2.1.21.1.1. Type
- object
Property | Type | Description |
---|---|---|
collection | object | Specification of the Collection component for the cluster |
curation | object | (DEPRECATED) (optional) Deprecated. Specification of the Curation component for the cluster |
forwarder | object | (DEPRECATED) (optional) Deprecated. Specification for Forwarder component for the cluster |
logStore | object | (optional) Specification of the Log Storage component for the cluster |
managementState | string | (optional) Indicator if the resource is 'Managed' or 'Unmanaged' by the operator |
visualization | object | (optional) Specification of the Visualization component for the cluster |
3.5.2.1.22. .spec.collection
3.5.2.1.22.1. Description
This is the struct that will contain information pertinent to Log and event collection
3.5.2.1.22.1.1. Type
- object
Property | Type | Description |
---|---|---|
resources | object | (optional) The resource requirements for the collector |
nodeSelector | object | (optional) Define which Nodes the Pods are scheduled on. |
tolerations | array | (optional) Define the tolerations the Pods will accept |
fluentd | object | (optional) Fluentd represents the configuration for forwarders of type fluentd. |
logs | object | (DEPRECATED) (optional) Deprecated. Specification of Log Collection for the cluster |
type | string | (optional) The type of Log Collection to configure |
3.5.2.1.23. .spec.collection.fluentd
3.5.2.1.23.1. Description
FluentdForwarderSpec represents the configuration for forwarders of type fluentd.
3.5.2.1.23.1.1. Type
- object
Property | Type | Description |
---|---|---|
buffer | object | |
inFile | object |
3.5.2.1.24. .spec.collection.fluentd.buffer
3.5.2.1.24.1. Description
FluentdBufferSpec represents a subset of fluentd buffer parameters to tune the buffer configuration for all fluentd outputs. It supports a subset of parameters to configure buffer and queue sizing, flush operations and retry flushing.
For general parameters refer to: https://docs.fluentd.org/configuration/buffer-section#buffering-parameters
For flush parameters refer to: https://docs.fluentd.org/configuration/buffer-section#flushing-parameters
For retry parameters refer to: https://docs.fluentd.org/configuration/buffer-section#retries-parameters
3.5.2.1.24.1.1. Type
- object
Property | Type | Description |
---|---|---|
chunkLimitSize | string | (optional) ChunkLimitSize represents the maximum size of each chunk. Events will be |
flushInterval | string | (optional) FlushInterval represents the time duration to wait between two consecutive flush |
flushMode | string | (optional) FlushMode represents the mode of the flushing thread to write chunks. The mode |
flushThreadCount | int | (optional) FlushThreadCount represents the number of threads used by the fluentd buffer |
overflowAction | string | (optional) OverflowAction represents the action for the fluentd buffer plugin to |
retryMaxInterval | string | (optional) RetryMaxInterval represents the maximum time interval for exponential backoff |
retryTimeout | string | (optional) RetryTimeout represents the maximum time interval to attempt retries before giving up |
retryType | string | (optional) RetryType represents the type of retrying flush operations. Flush operations can |
retryWait | string | (optional) RetryWait represents the time duration between two consecutive retries to flush |
totalLimitSize | string | (optional) TotalLimitSize represents the threshold of node space allowed per fluentd |
3.5.2.1.25. .spec.collection.fluentd.inFile
3.5.2.1.25.1. Description
FluentdInFileSpec represents a subset of fluentd in-tail plugin parameters to tune the configuration for all fluentd in-tail inputs.
For general parameters refer to: https://docs.fluentd.org/input/tail#parameters
3.5.2.1.25.1.1. Type
- object
Property | Type | Description |
---|---|---|
readLinesLimit | int | (optional) ReadLinesLimit represents the number of lines to read with each I/O operation |
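Bringing the buffer and in-file parameters together, a tuning sketch might look like the following; every value is an illustrative starting point under the assumption of a Fluentd collector, not a recommendation:

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  collection:
    type: fluentd
    fluentd:
      buffer:
        chunkLimitSize: 8m              # illustrative values throughout
        totalLimitSize: 4g
        flushMode: interval
        flushInterval: 5s
        flushThreadCount: 2
        overflowAction: throw_exception
        retryType: exponential_backoff
        retryWait: 1s
        retryMaxInterval: 300s
      inFile:
        readLinesLimit: 1000            # lines read per I/O operation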
3.5.2.1.26. .spec.collection.logs
3.5.2.1.26.1. Description
3.5.2.1.26.1.1. Type
- object
Property | Type | Description |
---|---|---|
fluentd | object | Specification of the Fluentd Log Collection component |
type | string | The type of Log Collection to configure |
3.5.2.1.27. .spec.collection.logs.fluentd
3.5.2.1.27.1. Description
CollectorSpec is spec to define scheduling and resources for a collector
3.5.2.1.27.1.1. Type
- object
Property | Type | Description |
---|---|---|
nodeSelector | object | (optional) Define which Nodes the Pods are scheduled on. |
resources | object | (optional) The resource requirements for the collector |
tolerations | array | (optional) Define the tolerations the Pods will accept |
3.5.2.1.28. .spec.collection.logs.fluentd.nodeSelector
3.5.2.1.28.1. Description
3.5.2.1.28.1.1. Type
- object
3.5.2.1.29. .spec.collection.logs.fluentd.resources
3.5.2.1.29.1. Description
3.5.2.1.29.1.1. Type
- object
Property | Type | Description |
---|---|---|
limits | object | (optional) Limits describes the maximum amount of compute resources allowed. |
requests | object | (optional) Requests describes the minimum amount of compute resources required. |
3.5.2.1.30. .spec.collection.logs.fluentd.resources.limits
3.5.2.1.30.1. Description
3.5.2.1.30.1.1. Type
- object
3.5.2.1.31. .spec.collection.logs.fluentd.resources.requests
3.5.2.1.31.1. Description
3.5.2.1.31.1.1. Type
- object
3.5.2.1.32. .spec.collection.logs.fluentd.tolerations[]
3.5.2.1.32.1. Description
3.5.2.1.32.1.1. Type
- array
Property | Type | Description |
---|---|---|
effect | string | (optional) Effect indicates the taint effect to match. Empty means match all taint effects. |
key | string | (optional) Key is the taint key that the toleration applies to. Empty means match all taint keys. |
operator | string | (optional) Operator represents a key's relationship to the value. |
tolerationSeconds | int | (optional) TolerationSeconds represents the period of time the toleration (which must be |
value | string | (optional) Value is the taint value the toleration matches to. |
3.5.2.1.33. .spec.collection.logs.fluentd.tolerations[].tolerationSeconds
3.5.2.1.33.1. Description
3.5.2.1.33.1.1. Type
- int
3.5.2.1.34. .spec.curation
3.5.2.1.34.1. Description
This is the struct that will contain information pertinent to Log curation (Curator)
3.5.2.1.34.1.1. Type
- object
Property | Type | Description |
---|---|---|
curator | object | The specification of curation to configure |
type | string | The kind of curation to configure |
3.5.2.1.35. .spec.curation.curator
3.5.2.1.35.1. Description
3.5.2.1.35.1.1. Type
- object
Property | Type | Description |
---|---|---|
nodeSelector | object | Define which Nodes the Pods are scheduled on. |
resources | object | (optional) The resource requirements for Curator |
schedule | string | The cron schedule on which the Curator job runs. Defaults to "30 3 * * *" |
tolerations | array |
3.5.2.1.36. .spec.curation.curator.nodeSelector
3.5.2.1.36.1. Description
3.5.2.1.36.1.1. Type
- object
3.5.2.1.37. .spec.curation.curator.resources
3.5.2.1.37.1. Description
3.5.2.1.37.1.1. Type
- object
Property | Type | Description |
---|---|---|
limits | object | (optional) Limits describes the maximum amount of compute resources allowed. |
requests | object | (optional) Requests describes the minimum amount of compute resources required. |
3.5.2.1.38. .spec.curation.curator.resources.limits
3.5.2.1.38.1. Description
3.5.2.1.38.1.1. Type
- object
3.5.2.1.39. .spec.curation.curator.resources.requests
3.5.2.1.39.1. Description
3.5.2.1.39.1.1. Type
- object
3.5.2.1.40. .spec.curation.curator.tolerations[]
3.5.2.1.40.1. Description
3.5.2.1.40.1.1. Type
- array
Property | Type | Description |
---|---|---|
effect | string | (optional) Effect indicates the taint effect to match. Empty means match all taint effects. |
key | string | (optional) Key is the taint key that the toleration applies to. Empty means match all taint keys. |
operator | string | (optional) Operator represents a key's relationship to the value. |
tolerationSeconds | int | (optional) TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. |
value | string | (optional) Value is the taint value the toleration matches to. |
3.5.2.1.41. .spec.curation.curator.tolerations[].tolerationSeconds
3.5.2.1.41.1. Description
3.5.2.1.41.1.1. Type
- int
3.5.2.1.42. .spec.forwarder
3.5.2.1.42.1. Description
ForwarderSpec contains global tuning parameters for specific forwarder implementations. This field is not required for general use; it allows performance tuning by users familiar with the underlying forwarder technology. Currently supported: fluentd.
3.5.2.1.42.1.1. Type
- object
Property | Type | Description |
---|---|---|
fluentd | object |
3.5.2.1.43. .spec.forwarder.fluentd
3.5.2.1.43.1. Description
FluentdForwarderSpec represents the configuration for forwarders of type fluentd.
3.5.2.1.43.1.1. Type
- object
Property | Type | Description |
---|---|---|
buffer | object | |
inFile | object |
3.5.2.1.44. .spec.forwarder.fluentd.buffer
3.5.2.1.44.1. Description
FluentdBufferSpec represents a subset of fluentd buffer parameters to tune the buffer configuration for all fluentd outputs. It supports a subset of parameters to configure buffer and queue sizing, flush operations and retry flushing.
For general parameters refer to: https://docs.fluentd.org/configuration/buffer-section#buffering-parameters
For flush parameters refer to: https://docs.fluentd.org/configuration/buffer-section#flushing-parameters
For retry parameters refer to: https://docs.fluentd.org/configuration/buffer-section#retries-parameters
3.5.2.1.44.1.1. Type
- object
Property | Type | Description |
---|---|---|
chunkLimitSize | string | (optional) ChunkLimitSize represents the maximum size of each chunk. Events will be written into chunks until the size of the chunk reaches this size. |
flushInterval | string | (optional) FlushInterval represents the time duration to wait between two consecutive flush operations. Takes effect only when used together with flushMode: interval. |
flushMode | string | (optional) FlushMode represents the mode of the flushing thread to write chunks. The mode allows lazy, per-interval, or immediate flushing. |
flushThreadCount | int | (optional) FlushThreadCount represents the number of threads used by the fluentd buffer plugin to flush chunks in parallel. |
overflowAction | string | (optional) OverflowAction represents the action for the fluentd buffer plugin to execute when the buffer queue is full. |
retryMaxInterval | string | (optional) RetryMaxInterval represents the maximum time interval for exponential backoff between retries. |
retryTimeout | string | (optional) RetryTimeout represents the maximum time interval to attempt retries before giving up. |
retryType | string | (optional) RetryType represents the type of retrying flush operations. Flush operations can be retried either periodically or by applying exponential backoff. |
retryWait | string | (optional) RetryWait represents the time duration between two consecutive retries to flush buffers. |
totalLimitSize | string | (optional) TotalLimitSize represents the threshold of node space allowed per fluentd buffer to allocate. Once this threshold is reached, all append operations fail with an error, and data can be lost. |
3.5.2.1.45. .spec.forwarder.fluentd.inFile
3.5.2.1.45.1. Description
FluentdInFileSpec represents a subset of fluentd in-tail plugin parameters to tune the configuration for all fluentd in-tail inputs.
For general parameters refer to: https://docs.fluentd.org/input/tail#parameters
3.5.2.1.45.1.1. Type
- object
Property | Type | Description |
---|---|---|
readLinesLimit | int | (optional) ReadLinesLimit represents the number of lines to read with each I/O operation |
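Taken together, a hedged tuning sketch for .spec.forwarder.fluentd that combines the buffer and in-file parameters above; every value is an illustrative assumption, not a tuned recommendation:

```yaml
spec:
  forwarder:
    fluentd:
      buffer:
        chunkLimitSize: 8m       # assumed chunk size
        totalLimitSize: 32m
        flushMode: interval
        flushInterval: 5s        # only takes effect with flushMode: interval
        flushThreadCount: 2
        overflowAction: block
        retryType: periodic
        retryWait: 1s
        retryMaxInterval: 300s
      inFile:
        readLinesLimit: 250      # lines read per I/O operation
```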
3.5.2.1.46. .spec.logStore
3.5.2.1.46.1. Description
The LogStoreSpec contains information about how logs are stored.
3.5.2.1.46.1.1. Type
- object
Property | Type | Description |
---|---|---|
elasticsearch | object | Specification of the Elasticsearch Log Store component |
lokistack | object | LokiStack contains information about which LokiStack to use for log storage if Type is set to LogStoreTypeLokiStack. |
retentionPolicy | object | (optional) Retention policy defines the maximum age for an index after which it should be deleted |
type | string | The Type of Log Storage to configure. The operator currently supports either using ElasticSearch managed by elasticsearch-operator or LokiStack managed by loki-operator as the log store. |
3.5.2.1.47. .spec.logStore.elasticsearch
3.5.2.1.47.1. Description
3.5.2.1.47.1.1. Type
- object
Property | Type | Description |
---|---|---|
nodeCount | int | Number of nodes to deploy for Elasticsearch |
nodeSelector | object | Define which Nodes the Pods are scheduled on. |
proxy | object | Specification of the Elasticsearch Proxy component |
redundancyPolicy | string | (optional) |
resources | object | (optional) The resource requirements for Elasticsearch |
storage | object | (optional) The storage specification for Elasticsearch data nodes |
tolerations | array |
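A sketch of an Elasticsearch log store using the fields above; the node count, redundancy policy, sizing, and storage class are assumptions for illustration:

```yaml
spec:
  logStore:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3
      redundancyPolicy: SingleRedundancy
      resources:
        requests:
          memory: 16Gi           # assumed sizing
      storage:
        storageClassName: gp2    # assumed storage class
        size: 200G
```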
3.5.2.1.48. .spec.logStore.elasticsearch.nodeSelector
3.5.2.1.48.1. Description
3.5.2.1.48.1.1. Type
- object
3.5.2.1.49. .spec.logStore.elasticsearch.proxy
3.5.2.1.49.1. Description
3.5.2.1.49.1.1. Type
- object
Property | Type | Description |
---|---|---|
resources | object |
3.5.2.1.50. .spec.logStore.elasticsearch.proxy.resources
3.5.2.1.50.1. Description
3.5.2.1.50.1.1. Type
- object
Property | Type | Description |
---|---|---|
limits | object | (optional) Limits describes the maximum amount of compute resources allowed. |
requests | object | (optional) Requests describes the minimum amount of compute resources required. |
3.5.2.1.51. .spec.logStore.elasticsearch.proxy.resources.limits
3.5.2.1.51.1. Description
3.5.2.1.51.1.1. Type
- object
3.5.2.1.52. .spec.logStore.elasticsearch.proxy.resources.requests
3.5.2.1.52.1. Description
3.5.2.1.52.1.1. Type
- object
3.5.2.1.53. .spec.logStore.elasticsearch.resources
3.5.2.1.53.1. Description
3.5.2.1.53.1.1. Type
- object
Property | Type | Description |
---|---|---|
limits | object | (optional) Limits describes the maximum amount of compute resources allowed. |
requests | object | (optional) Requests describes the minimum amount of compute resources required. |
3.5.2.1.54. .spec.logStore.elasticsearch.resources.limits
3.5.2.1.54.1. Description
3.5.2.1.54.1.1. Type
- object
3.5.2.1.55. .spec.logStore.elasticsearch.resources.requests
3.5.2.1.55.1. Description
3.5.2.1.55.1.1. Type
- object
3.5.2.1.56. .spec.logStore.elasticsearch.storage
3.5.2.1.56.1. Description
3.5.2.1.56.1.1. Type
- object
Property | Type | Description |
---|---|---|
size | object | The max storage capacity for the node to provision. |
storageClassName | string | (optional) The name of the storage class to use when creating the node's PVC. |
3.5.2.1.57. .spec.logStore.elasticsearch.storage.size
3.5.2.1.57.1. Description
3.5.2.1.57.1.1. Type
- object
Property | Type | Description |
---|---|---|
Format | string | Change Format at will. See the comment for Canonicalize for more details. |
d | object | d is the quantity in inf.Dec form if d.Dec != nil |
i | int | i is the quantity in int64 scaled form, if d.Dec == nil |
s | string | s is the generated value of this quantity to avoid recalculation |
3.5.2.1.58. .spec.logStore.elasticsearch.storage.size.d
3.5.2.1.58.1. Description
3.5.2.1.58.1.1. Type
- object
Property | Type | Description |
---|---|---|
Dec | object |
3.5.2.1.59. .spec.logStore.elasticsearch.storage.size.d.Dec
3.5.2.1.59.1. Description
3.5.2.1.59.1.1. Type
- object
Property | Type | Description |
---|---|---|
scale | int | |
unscaled | object |
3.5.2.1.60. .spec.logStore.elasticsearch.storage.size.d.Dec.unscaled
3.5.2.1.60.1. Description
3.5.2.1.60.1.1. Type
- object
Property | Type | Description |
---|---|---|
abs | Word | The absolute value of the integer. |
neg | bool | The sign; true when the value is negative. |
3.5.2.1.61. .spec.logStore.elasticsearch.storage.size.d.Dec.unscaled.abs
3.5.2.1.61.1. Description
3.5.2.1.61.1.1. Type
- Word
3.5.2.1.62. .spec.logStore.elasticsearch.storage.size.i
3.5.2.1.62.1. Description
3.5.2.1.62.1.1. Type
- int
Property | Type | Description |
---|---|---|
scale | int | |
value | int |
3.5.2.1.63. .spec.logStore.elasticsearch.tolerations[]
3.5.2.1.63.1. Description
3.5.2.1.63.1.1. Type
- array
Property | Type | Description |
---|---|---|
effect | string | (optional) Effect indicates the taint effect to match. Empty means match all taint effects. |
key | string | (optional) Key is the taint key that the toleration applies to. Empty means match all taint keys. |
operator | string | (optional) Operator represents a key's relationship to the value. |
tolerationSeconds | int | (optional) TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. |
value | string | (optional) Value is the taint value the toleration matches to. |
3.5.2.1.64. .spec.logStore.elasticsearch.tolerations[].tolerationSeconds
3.5.2.1.64.1. Description
3.5.2.1.64.1.1. Type
- int
3.5.2.1.65. .spec.logStore.lokistack
3.5.2.1.65.1. Description
LokiStackStoreSpec is used to set up cluster-logging to use a LokiStack as logging storage. It points to an existing LokiStack in the same namespace.
3.5.2.1.65.1.1. Type
- object
Property | Type | Description |
---|---|---|
name | string | Name of the LokiStack resource. |
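A minimal sketch that points the log store at an existing LokiStack in the same namespace; the name logging-loki is an assumption:

```yaml
spec:
  logStore:
    type: lokistack
    lokistack:
      name: logging-loki   # name of an existing LokiStack resource
```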
3.5.2.1.66. .spec.logStore.retentionPolicy
3.5.2.1.66.1. Description
3.5.2.1.66.1.1. Type
- object
Property | Type | Description |
---|---|---|
application | object | |
audit | object | |
infra | object |
3.5.2.1.67. .spec.logStore.retentionPolicy.application
3.5.2.1.67.1. Description
3.5.2.1.67.1.1. Type
- object
Property | Type | Description |
---|---|---|
diskThresholdPercent | int | (optional) The threshold percentage of ES disk usage that, when reached, causes old indices to be deleted (e.g. 75) |
maxAge | string | (optional) |
namespaceSpec | array | (optional) The per namespace specification to delete documents older than a given minimum age |
pruneNamespacesInterval | string | (optional) How often to run a new prune-namespaces job |
3.5.2.1.68. .spec.logStore.retentionPolicy.application.namespaceSpec[]
3.5.2.1.68.1. Description
3.5.2.1.68.1.1. Type
- array
Property | Type | Description |
---|---|---|
minAge | string | (optional) Delete records in the matching namespaces that are older than this MinAge (e.g. 1d) |
namespace | string | Target Namespace to delete logs older than MinAge (defaults to 7d) |
3.5.2.1.69. .spec.logStore.retentionPolicy.audit
3.5.2.1.69.1. Description
3.5.2.1.69.1.1. Type
- object
Property | Type | Description |
---|---|---|
diskThresholdPercent | int | (optional) The threshold percentage of ES disk usage that, when reached, causes old indices to be deleted (e.g. 75) |
maxAge | string | (optional) |
namespaceSpec | array | (optional) The per namespace specification to delete documents older than a given minimum age |
pruneNamespacesInterval | string | (optional) How often to run a new prune-namespaces job |
3.5.2.1.70. .spec.logStore.retentionPolicy.audit.namespaceSpec[]
3.5.2.1.70.1. Description
3.5.2.1.70.1.1. Type
- array
Property | Type | Description |
---|---|---|
minAge | string | (optional) Delete records in the matching namespaces that are older than this MinAge (e.g. 1d) |
namespace | string | Target Namespace to delete logs older than MinAge (defaults to 7d) |
3.5.2.1.71. .spec.logStore.retentionPolicy.infra
3.5.2.1.71.1. Description
3.5.2.1.71.1.1. Type
- object
Property | Type | Description |
---|---|---|
diskThresholdPercent | int | (optional) The threshold percentage of ES disk usage that, when reached, causes old indices to be deleted (e.g. 75) |
maxAge | string | (optional) |
namespaceSpec | array | (optional) The per namespace specification to delete documents older than a given minimum age |
pruneNamespacesInterval | string | (optional) How often to run a new prune-namespaces job |
3.5.2.1.72. .spec.logStore.retentionPolicy.infra.namespaceSpec[]
3.5.2.1.72.1. Description
3.5.2.1.72.1.1. Type
- array
Property | Type | Description |
---|---|---|
minAge | string | (optional) Delete records in the matching namespaces that are older than this MinAge (e.g. 1d) |
namespace | string | Target Namespace to delete logs older than MinAge (defaults to 7d) |
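Combining the three retention tables above, a hedged sketch of a retention policy; the ages, prune interval, and namespace name are illustrative assumptions:

```yaml
spec:
  logStore:
    retentionPolicy:
      application:
        maxAge: 1d
        pruneNamespacesInterval: 15m
        namespaceSpec:
        - namespace: my-dev-project   # hypothetical namespace
          minAge: 1h
      infra:
        maxAge: 7d
      audit:
        maxAge: 7d
```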
3.5.2.1.73. .spec.visualization
3.5.2.1.73.1. Description
This is the struct that will contain information pertinent to Log visualization (Kibana)
3.5.2.1.73.1.1. Type
- object
Property | Type | Description |
---|---|---|
kibana | object | Specification of the Kibana Visualization component |
type | string | The type of Visualization to configure |
3.5.2.1.74. .spec.visualization.kibana
3.5.2.1.74.1. Description
3.5.2.1.74.1.1. Type
- object
Property | Type | Description |
---|---|---|
nodeSelector | object | Define which Nodes the Pods are scheduled on. |
proxy | object | Specification of the Kibana Proxy component |
replicas | int | Number of instances to deploy for a Kibana deployment |
resources | object | (optional) The resource requirements for Kibana |
tolerations | array |
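A sketch of the Kibana visualization stanza; the replica count and resource values are assumptions:

```yaml
spec:
  visualization:
    type: kibana
    kibana:
      replicas: 1
      nodeSelector:                          # assumed label
        node-role.kubernetes.io/worker: ""
      resources:
        limits:
          memory: 736Mi                      # assumed sizing
        requests:
          cpu: 100m
          memory: 736Mi
```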
3.5.2.1.75. .spec.visualization.kibana.nodeSelector
3.5.2.1.75.1. Description
3.5.2.1.75.1.1. Type
- object
3.5.2.1.76. .spec.visualization.kibana.proxy
3.5.2.1.76.1. Description
3.5.2.1.76.1.1. Type
- object
Property | Type | Description |
---|---|---|
resources | object |
3.5.2.1.77. .spec.visualization.kibana.proxy.resources
3.5.2.1.77.1. Description
3.5.2.1.77.1.1. Type
- object
Property | Type | Description |
---|---|---|
limits | object | (optional) Limits describes the maximum amount of compute resources allowed. |
requests | object | (optional) Requests describes the minimum amount of compute resources required. |
3.5.2.1.78. .spec.visualization.kibana.proxy.resources.limits
3.5.2.1.78.1. Description
3.5.2.1.78.1.1. Type
- object
3.5.2.1.79. .spec.visualization.kibana.proxy.resources.requests
3.5.2.1.79.1. Description
3.5.2.1.79.1.1. Type
- object
3.5.2.1.80. .spec.visualization.kibana.replicas
3.5.2.1.80.1. Description
3.5.2.1.80.1.1. Type
- int
3.5.2.1.81. .spec.visualization.kibana.resources
3.5.2.1.81.1. Description
3.5.2.1.81.1.1. Type
- object
Property | Type | Description |
---|---|---|
limits | object | (optional) Limits describes the maximum amount of compute resources allowed. |
requests | object | (optional) Requests describes the minimum amount of compute resources required. |
3.5.2.1.82. .spec.visualization.kibana.resources.limits
3.5.2.1.82.1. Description
3.5.2.1.82.1.1. Type
- object
3.5.2.1.83. .spec.visualization.kibana.resources.requests
3.5.2.1.83.1. Description
3.5.2.1.83.1.1. Type
- object
3.5.2.1.84. .spec.visualization.kibana.tolerations[]
3.5.2.1.84.1. Description
3.5.2.1.84.1.1. Type
- array
Property | Type | Description |
---|---|---|
effect | string | (optional) Effect indicates the taint effect to match. Empty means match all taint effects. |
key | string | (optional) Key is the taint key that the toleration applies to. Empty means match all taint keys. |
operator | string | (optional) Operator represents a key's relationship to the value. |
tolerationSeconds | int | (optional) TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. |
value | string | (optional) Value is the taint value the toleration matches to. |
3.5.2.1.85. .spec.visualization.kibana.tolerations[].tolerationSeconds
3.5.2.1.85.1. Description
3.5.2.1.85.1.1. Type
- int
3.5.2.1.86. .status
3.5.2.1.86.1. Description
ClusterLoggingStatus defines the observed state of ClusterLogging
3.5.2.1.86.1.1. Type
- object
Property | Type | Description |
---|---|---|
collection | object | (optional) |
conditions | object | (optional) |
curation | object | (optional) |
logStore | object | (optional) |
visualization | object | (optional) |
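The status block is written by the operator, not by the user. For orientation only, an illustrative (not authoritative) excerpt might look like:

```yaml
status:
  collection:
    logs:
      fluentdStatus:
        daemonSet: fluentd        # illustrative value
  logStore:
    elasticsearchStatus:
    - clusterName: elasticsearch  # illustrative values
      clusterHealth: green
      nodeCount: 3
  visualization:
    kibanaStatus:
    - deployment: kibana          # illustrative value
      replicas: 1
```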
3.5.2.1.87. .status.collection
3.5.2.1.87.1. Description
3.5.2.1.87.1.1. Type
- object
Property | Type | Description |
---|---|---|
logs | object | (optional) |
3.5.2.1.88. .status.collection.logs
3.5.2.1.88.1. Description
3.5.2.1.88.1.1. Type
- object
Property | Type | Description |
---|---|---|
fluentdStatus | object | (optional) |
3.5.2.1.89. .status.collection.logs.fluentdStatus
3.5.2.1.89.1. Description
3.5.2.1.89.1.1. Type
- object
Property | Type | Description |
---|---|---|
clusterCondition | object | (optional) |
daemonSet | string | (optional) |
nodes | object | (optional) |
pods | string | (optional) |
3.5.2.1.90. .status.collection.logs.fluentdStatus.clusterCondition
3.5.2.1.90.1. Description
The operator-sdk generate crds command does not allow a map of slices, so a named type must be used.
3.5.2.1.90.1.1. Type
- object
3.5.2.1.91. .status.collection.logs.fluentdStatus.nodes
3.5.2.1.91.1. Description
3.5.2.1.91.1.1. Type
- object
3.5.2.1.92. .status.conditions
3.5.2.1.92.1. Description
3.5.2.1.92.1.1. Type
- object
3.5.2.1.93. .status.curation
3.5.2.1.93.1. Description
3.5.2.1.93.1.1. Type
- object
Property | Type | Description |
---|---|---|
curatorStatus | array | (optional) |
3.5.2.1.94. .status.curation.curatorStatus[]
3.5.2.1.94.1. Description
3.5.2.1.94.1.1. Type
- array
Property | Type | Description |
---|---|---|
clusterCondition | object | (optional) |
cronJobs | string | (optional) |
schedules | string | (optional) |
suspended | bool | (optional) |
3.5.2.1.95. .status.curation.curatorStatus[].clusterCondition
3.5.2.1.95.1. Description
The operator-sdk generate crds command does not allow a map of slices, so a named type must be used.
3.5.2.1.95.1.1. Type
- object
3.5.2.1.96. .status.logStore
3.5.2.1.96.1. Description
3.5.2.1.96.1.1. Type
- object
Property | Type | Description |
---|---|---|
elasticsearchStatus | array | (optional) |
3.5.2.1.97. .status.logStore.elasticsearchStatus[]
3.5.2.1.97.1. Description
3.5.2.1.97.1.1. Type
- array
Property | Type | Description |
---|---|---|
cluster | object | (optional) |
clusterConditions | object | (optional) |
clusterHealth | string | (optional) |
clusterName | string | (optional) |
deployments | array | (optional) |
nodeConditions | object | (optional) |
nodeCount | int | (optional) |
pods | object | (optional) |
replicaSets | array | (optional) |
shardAllocationEnabled | string | (optional) |
statefulSets | array | (optional) |
3.5.2.1.98. .status.logStore.elasticsearchStatus[].cluster
3.5.2.1.98.1. Description
3.5.2.1.98.1.1. Type
- object
Property | Type | Description |
---|---|---|
activePrimaryShards | int | The number of Active Primary Shards for the Elasticsearch Cluster |
activeShards | int | The number of Active Shards for the Elasticsearch Cluster |
initializingShards | int | The number of Initializing Shards for the Elasticsearch Cluster |
numDataNodes | int | The number of Data Nodes for the Elasticsearch Cluster |
numNodes | int | The number of Nodes for the Elasticsearch Cluster |
pendingTasks | int | |
relocatingShards | int | The number of Relocating Shards for the Elasticsearch Cluster |
status | string | The current Status of the Elasticsearch Cluster |
unassignedShards | int | The number of Unassigned Shards for the Elasticsearch Cluster |
3.5.2.1.99. .status.logStore.elasticsearchStatus[].clusterConditions
3.5.2.1.99.1. Description
3.5.2.1.99.1.1. Type
- object
3.5.2.1.100. .status.logStore.elasticsearchStatus[].deployments[]
3.5.2.1.100.1. Description
3.5.2.1.100.1.1. Type
- array
3.5.2.1.101. .status.logStore.elasticsearchStatus[].nodeConditions
3.5.2.1.101.1. Description
3.5.2.1.101.1.1. Type
- object
3.5.2.1.102. .status.logStore.elasticsearchStatus[].pods
3.5.2.1.102.1. Description
3.5.2.1.102.1.1. Type
- object
3.5.2.1.103. .status.logStore.elasticsearchStatus[].replicaSets[]
3.5.2.1.103.1. Description
3.5.2.1.103.1.1. Type
- array
3.5.2.1.104. .status.logStore.elasticsearchStatus[].statefulSets[]
3.5.2.1.104.1. Description
3.5.2.1.104.1.1. Type
- array
3.5.2.1.105. .status.visualization
3.5.2.1.105.1. Description
3.5.2.1.105.1.1. Type
- object
Property | Type | Description |
---|---|---|
kibanaStatus | array | (optional) |
3.5.2.1.106. .status.visualization.kibanaStatus[]
3.5.2.1.106.1. Description
3.5.2.1.106.1.1. Type
- array
Property | Type | Description |
---|---|---|
clusterCondition | object | (optional) |
deployment | string | (optional) |
pods | string | (optional) The status for each of the Kibana pods for the Visualization component |
replicaSets | array | (optional) |
replicas | int | (optional) |
3.5.2.1.107. .status.visualization.kibanaStatus[].clusterCondition
3.5.2.1.107.1. Description
3.5.2.1.107.1.1. Type
- object
3.5.2.1.108. .status.visualization.kibanaStatus[].replicaSets[]
3.5.2.1.108.1. Description
3.5.2.1.108.1.1. Type
- array