Chapter 8. Enabling JSON logging
You can configure the Log Forwarding API to parse JSON strings into a structured object.
8.1. Parsing JSON logs
Logs, including JSON logs, are usually represented as a string inside the message field. That makes it hard for users to query specific fields inside a JSON document.
To illustrate how this works, suppose that you have the following structured JSON log entry.
Example structured JSON log entry
{"level":"info","name":"fred","home":"bedrock"}
Normally, the ClusterLogForwarder custom resource (CR) forwards that log entry in the message field. The message field contains the JSON-quoted string equivalent of the JSON log entry, as shown in the following example.
Example message field
{"message":"{\"level\":\"info\",\"name\":\"fred\",\"home\":\"bedrock\"}",
 "more fields..."}
To enable parsing JSON logs, you add parse: json to a pipeline in the ClusterLogForwarder CR, as shown in the following example.
Example snippet showing parse: json
pipelines:
- inputRefs: [ application ]
  outputRefs: myFluentd
  parse: json
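For context, such a pipeline snippet lives inside a complete ClusterLogForwarder custom resource. The following is a minimal sketch, in which the output name myFluentd, the fluentdForward output type, and the example URL are illustrative assumptions rather than required values:

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: myFluentd            # assumed output name for illustration
    type: fluentdForward
    url: 'tcp://fluentd.example.com:24224'   # example endpoint
  pipelines:
  - inputRefs: [ application ]
    outputRefs: [ myFluentd ]
    parse: json                # enables JSON parsing for this pipeline
```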
When you enable parsing JSON logs by using parse: json, the CR copies the JSON-structured log entry into a structured field, as shown in the following example. This does not modify the original message field.
Example structured output containing the structured JSON log entry
{"structured": { "level": "info", "name": "fred", "home": "bedrock" },
"more fields..."}
If the log entry does not contain valid structured JSON, the structured field is absent.
To enable parsing JSON logs for specific logging platforms, see Forwarding logs to third-party systems.
8.2. Configuring JSON log data for Elasticsearch
If your JSON logs follow more than one schema, storing them in a single index might cause type conflicts and cardinality problems. To avoid that, you must configure the ClusterLogForwarder custom resource (CR) to group each schema to a single output definition. This way, each schema is forwarded to a separate index.
If you forward JSON logs to the default Elasticsearch instance managed by OpenShift Logging, it generates new indices based on your configuration. To avoid performance issues associated with having too many indices, consider keeping the number of possible schemas low by standardizing to common schemas.
Structure types

You can use the following structure types in the ClusterLogForwarder CR to construct index names for the Elasticsearch log store:
- structuredTypeKey (string, optional) is the name of a message field. The value of that field, if present, is used to construct the index name.
  - kubernetes.labels.<key> is the Kubernetes pod label whose value is used to construct the index name.
  - openshift.labels.<key> is the pipeline.label.<key> element in the ClusterLogForwarder CR whose value is used to construct the index name.
  - kubernetes.container_name uses the container name to construct the index name.
- structuredTypeName (string, optional): If structuredTypeKey is not set or its key is not present, OpenShift Logging uses the value of structuredTypeName as the structured type. When you use both structuredTypeKey and structuredTypeName together, structuredTypeName provides a fallback index name if the key in structuredTypeKey is missing from the JSON log data.
Although you can set the value of structuredTypeKey to any log record field, the most useful fields are shown in the preceding list of structure types.
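The examples that follow cover the label-based structure types; kubernetes.container_name works the same way. The following outputDefaults stanza is a sketch in which the fallback type name nocontainer is an assumption chosen for illustration:

```yaml
outputDefaults:
  elasticsearch:
    # Use each container's name as the structured type
    structuredTypeKey: kubernetes.container_name
    # Assumed fallback type if the container name field is missing
    structuredTypeName: nocontainer
```

With a configuration like this, JSON logs from a container named apache would be written to an index named app-apache-write.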
A structuredTypeKey: kubernetes.labels.<key> example
Suppose the following:
- Your cluster is running application pods that produce JSON logs in two different formats, "apache" and "google".
- The user labels these application pods with logFormat=apache and logFormat=google.
- You use the following snippet in your ClusterLogForwarder CR YAML file.
outputDefaults:
  elasticsearch:
    structuredTypeKey: kubernetes.labels.logFormat
    structuredTypeName: nologformat
pipelines:
- inputRefs: <application>
  outputRefs: default
  parse: json
In that case, the following structured log record goes to the app-apache-write index:
{
"structured":{"name":"fred","home":"bedrock"},
"kubernetes":{"labels":{"logFormat": "apache", ...}}
}
And the following structured log record goes to the app-google-write index:
{
"structured":{"name":"wilma","home":"bedrock"},
"kubernetes":{"labels":{"logFormat": "google", ...}}
}
A structuredTypeKey: openshift.labels.<key> example
Suppose that you use the following snippet in your ClusterLogForwarder CR YAML file.
outputDefaults:
  elasticsearch:
    structuredTypeKey: openshift.labels.myLabel
    structuredTypeName: nologformat
pipelines:
- name: application-logs
  inputRefs:
  - application
  - audit
  outputRefs:
  - elasticsearch-secure
  - default
  parse: json
  labels:
    myLabel: myValue
In that case, the following structured log record goes to the app-myValue-write index:
{
"structured":{"name":"fred","home":"bedrock"},
"openshift":{"labels":{"myLabel": "myValue", ...}}
}
Additional considerations
- The Elasticsearch index for structured records is formed by prepending "app-" to the structured type and appending "-write".
- Unstructured records are not sent to the structured index. They are indexed as usual in the application, infrastructure, or audit indices.
- If there is no non-empty structured type, the record is forwarded as unstructured, with no structured field.
It is important not to overload Elasticsearch with too many indices. Only use distinct structured types for distinct log formats, not for each application or namespace. For example, most Apache applications use the same JSON log format and structured type, such as LogApache.
8.3. Forwarding JSON logs to the Elasticsearch log store
For an Elasticsearch log store, if your JSON log entries follow different schemas, configure the ClusterLogForwarder custom resource (CR) to group each JSON schema to a single output definition. This way, Elasticsearch uses a separate index for each schema.
Because forwarding different schemas to the same index can cause type conflicts and cardinality problems, you must perform this configuration before you forward data to the Elasticsearch store.
To avoid performance issues associated with having too many indices, consider keeping the number of possible schemas low by standardizing to common schemas.
Procedure
- Add the following snippet to your ClusterLogForwarder CR YAML file.

outputDefaults:
  elasticsearch:
    structuredTypeKey: <log record field>
    structuredTypeName: <name>
pipelines:
- inputRefs:
  - application
  outputRefs: default
  parse: json
- Optional: Use structuredTypeKey to specify one of the log record fields, as described in the preceding topic, Configuring JSON log data for Elasticsearch. Otherwise, remove this line.
- Optional: Use structuredTypeName to specify a <name>, as described in the preceding topic, Configuring JSON log data for Elasticsearch. Otherwise, remove this line.
  Important: To parse JSON logs, you must set either structuredTypeKey or structuredTypeName, or both structuredTypeKey and structuredTypeName.
- For inputRefs, specify which log types to forward by using that pipeline, such as application, infrastructure, or audit.
- Add the parse: json element to pipelines.
- Create the CR object:
$ oc create -f <file-name>.yaml

The Red Hat OpenShift Logging Operator redeploys the Fluentd pods. However, if they do not redeploy, delete the Fluentd pods to force them to redeploy.
$ oc delete pod --selector logging-infra=collector