Chapter 1. Installing Logging


OpenShift Container Platform Operators use custom resources (CRs) to manage applications and their components. You provide high-level configuration and settings through the CR. The Operator translates high-level directives into low-level actions, based on best practices embedded within the logic of the Operator. A custom resource definition (CRD) defines a CR and lists all the configuration options available to users of the Operator. Installing an Operator creates the CRDs, which are then used to generate CRs.

To get started with logging, you must install the following Operators:

  • Loki Operator to manage your log store.
  • Red Hat OpenShift Logging Operator to manage log collection and forwarding.
  • Cluster Observability Operator (COO) to manage visualization.

You can use either the OpenShift Container Platform web console or the OpenShift Container Platform CLI to install or configure logging.

Important

You must configure the Red Hat OpenShift Logging Operator after the Loki Operator.

1.1. Prerequisites

  • If you are using OKD, you have downloaded the pull secret from Red Hat OpenShift Cluster Manager as shown in "Obtaining the installation program" in the installation documentation for your platform.

    If you have the pull secret, add the redhat-operators catalog to the OperatorHub custom resource (CR) as shown in "Configuring OpenShift Container Platform to use Red Hat Operators".
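
    For reference, one commonly used way to make the Red Hat Operators catalog available is to create a CatalogSource object in the openshift-marketplace namespace. The following is only a minimal sketch, not the linked procedure: the index image tag (v4.18 here) is an assumption and must match your cluster version, and the pull secret must already be configured.

    apiVersion: operators.coreos.com/v1alpha1
    kind: CatalogSource
    metadata:
      name: redhat-operators
      namespace: openshift-marketplace
    spec:
      sourceType: grpc
      displayName: Red Hat Operators
      image: registry.redhat.io/redhat/redhat-operator-index:v4.18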

1.2. Installation by using the CLI

The following sections describe installing the Loki Operator and the Red Hat OpenShift Logging Operator by using the CLI.

Install Loki Operator on your OpenShift Container Platform cluster to manage the Loki log store by using the OpenShift Container Platform command-line interface (CLI). You can deploy and configure the Loki log store by creating a LokiStack resource that the Loki Operator reconciles.

Prerequisites

  • You have administrator permissions.
  • You installed the OpenShift CLI (oc).
  • You have access to a supported object store. For example: AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation.

Procedure

  1. Create a Namespace object for Loki Operator:

    Example Namespace object

    apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-operators-redhat 1
      labels:
        openshift.io/cluster-monitoring: "true" 2

    1 You must specify openshift-operators-redhat as the namespace. To enable monitoring for the operator, configure Cluster Monitoring Operator to scrape metrics from the openshift-operators-redhat namespace and not the openshift-operators namespace. The openshift-operators namespace might contain community operators, which are untrusted and could publish a metric with the same name as an OpenShift Container Platform metric, causing conflicts.
    2 A string value that specifies the label as shown to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace.
  2. Apply the Namespace object by running the following command:

    $ oc apply -f <filename>.yaml
  3. Create an OperatorGroup object.

    Example OperatorGroup object

    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: loki-operator
      namespace: openshift-operators-redhat 1
    spec:
      upgradeStrategy: Default

    1 You must specify openshift-operators-redhat as the namespace.
  4. Apply the OperatorGroup object by running the following command:

    $ oc apply -f <filename>.yaml
  5. Create a Subscription object for Loki Operator:

    Example Subscription object

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: loki-operator
      namespace: openshift-operators-redhat 1
    spec:
      channel: stable-6.<y> 2
      installPlanApproval: Automatic 3
      name: loki-operator
      source: redhat-operators 4
      sourceNamespace: openshift-marketplace

    1 You must specify openshift-operators-redhat as the namespace.
    2 Specify stable-6.<y> as the channel.
    3 If the approval strategy in the subscription is set to Automatic, the update process initiates as soon as a new operator version is available in the selected channel. If the approval strategy is set to Manual, you must manually approve pending updates.
    4 Specify redhat-operators as the value. If your OpenShift Container Platform cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object that you created when you configured Operator Lifecycle Manager (OLM).
  6. Apply the Subscription object by running the following command:

    $ oc apply -f <filename>.yaml
  7. Create a Namespace object for deploying the LokiStack:

    Example namespace object

    apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-logging 1
      labels:
        openshift.io/cluster-monitoring: "true" 2

    1 The openshift-logging namespace is dedicated for all logging workloads.
    2 A string value that specifies the label, as shown, to ensure that cluster monitoring scrapes the openshift-logging namespace.
  8. Apply the Namespace object by running the following command:

    $ oc apply -f <filename>.yaml
  9. Create a secret with the credentials to access the object storage. For example, create a secret to access Amazon Web Services (AWS) S3.

    Example Secret object

    apiVersion: v1
    kind: Secret
    metadata:
      name: logging-loki-s3 1
      namespace: openshift-logging
    stringData: 2
      access_key_id: <access_key_id>
      access_key_secret: <access_secret>
      bucketnames: s3-bucket-name
      endpoint: https://s3.eu-central-1.amazonaws.com
      region: eu-central-1

    1 Use the name logging-loki-s3 to match the name used in LokiStack.
    2 For the contents of the secret, see the Loki object storage section.
    Important

    If there is no retention period defined on the S3 bucket or in the LokiStack custom resource (CR), the logs are not pruned and they stay in the S3 bucket forever, which might fill up the S3 storage. A sketch of adding a retention period to the LokiStack CR is shown at the end of this procedure.

  10. Apply the Secret object by running the following command:

    $ oc apply -f <filename>.yaml
  11. Create a LokiStack CR:

    Example LokiStack CR

    apiVersion: loki.grafana.com/v1
    kind: LokiStack
    metadata:
      name: logging-loki 1
      namespace: openshift-logging 2
    spec:
      size: 1x.small 3
      storage:
        schemas:
        - version: v13
          effectiveDate: "<yyyy>-<mm>-<dd>" 4
        secret:
          name: logging-loki-s3 5
          type: s3 6
      storageClassName: <storage_class_name> 7
      tenants:
        mode: openshift-logging 8

    1 Use the name logging-loki.
    2 You must specify openshift-logging as the namespace.
    3 Specify the deployment size. Supported size options for production instances of Loki are 1x.extra-small, 1x.small, or 1x.medium. Additionally, 1x.pico is supported starting with logging 6.1.
    4 For new installations, set this date to the equivalent of "yesterday", because this is the date from which the schema takes effect.
    5 Specify the name of your log store secret.
    6 Specify the corresponding storage type.
    7 Specify the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage. You can list the available storage classes for your cluster by using the oc get storageclasses command.
    8 The openshift-logging mode is the default tenancy mode where a tenant is created for log types, such as audit, infrastructure, and application. This enables access control for individual users and user groups to different log streams.
  12. Apply the LokiStack CR object by running the following command:

    $ oc apply -f <filename>.yaml
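
    Optionally, if you want Loki itself to prune old logs rather than relying only on bucket lifecycle rules, you can add a retention block to the LokiStack CR created in this procedure. The following is a minimal sketch of the relevant spec stanza; the 7-day value is an example, not a recommendation:

    spec:
      limits:
        global:
          retention:
            days: 7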

Verification

  • Verify the installation by running the following command:

    $ oc get pods -n openshift-logging

    Example output

    $ oc get pods -n openshift-logging
    NAME                                               READY   STATUS    RESTARTS   AGE
    logging-loki-compactor-0                           1/1     Running   0          42m
    logging-loki-distributor-7d7688bcb9-dvcj8          1/1     Running   0          42m
    logging-loki-gateway-5f6c75f879-bl7k9              2/2     Running   0          42m
    logging-loki-gateway-5f6c75f879-xhq98              2/2     Running   0          42m
    logging-loki-index-gateway-0                       1/1     Running   0          42m
    logging-loki-ingester-0                            1/1     Running   0          42m
    logging-loki-querier-6b7b56bccc-2v9q4              1/1     Running   0          42m
    logging-loki-query-frontend-84fb57c578-gq2f7       1/1     Running   0          42m
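
    Optionally, you can also confirm that the LokiStack resource itself reports a Ready condition. The following command is a minimal sketch, assuming the logging-loki name used in this procedure; the exact condition names can vary between Loki Operator versions:

    $ oc get lokistack logging-loki -n openshift-logging -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'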

Install Red Hat OpenShift Logging Operator on your OpenShift Container Platform cluster to collect and forward logs to a log store by using the OpenShift CLI (oc).

Prerequisites

  • You have administrator permissions.
  • You installed the OpenShift CLI (oc).
  • You installed and configured Loki Operator.
  • You have created the openshift-logging namespace.

Procedure

  1. Create an OperatorGroup object:

    Example OperatorGroup object

    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: cluster-logging
      namespace: openshift-logging 1
    spec:
      upgradeStrategy: Default

    1 You must specify openshift-logging as the namespace.
  2. Apply the OperatorGroup object by running the following command:

    $ oc apply -f <filename>.yaml
  3. Create a Subscription object for Red Hat OpenShift Logging Operator:

    Example Subscription object

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: cluster-logging
      namespace: openshift-logging 1
    spec:
      channel: stable-6.<y> 2
      installPlanApproval: Automatic 3
      name: cluster-logging
      source: redhat-operators 4
      sourceNamespace: openshift-marketplace

    1 You must specify openshift-logging as the namespace.
    2 Specify stable-6.<y> as the channel.
    3 If the approval strategy in the subscription is set to Automatic, the update process initiates as soon as a new operator version is available in the selected channel. If the approval strategy is set to Manual, you must manually approve pending updates. A sketch of approving a pending install plan from the CLI is shown at the end of this procedure.
    4 Specify redhat-operators as the value. If your OpenShift Container Platform cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object that you created when you configured Operator Lifecycle Manager (OLM).
  4. Apply the Subscription object by running the following command:

    $ oc apply -f <filename>.yaml
  5. Create a service account to be used by the log collector:

    $ oc create sa logging-collector -n openshift-logging
  6. Assign the necessary permissions to the service account so that the collector can collect and forward logs. In this example, the collector is granted permissions to collect both infrastructure and application logs.

    $ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z logging-collector -n openshift-logging
    $ oc adm policy add-cluster-role-to-user collect-application-logs -z logging-collector -n openshift-logging
    $ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z logging-collector -n openshift-logging
  7. Create a ClusterLogForwarder CR:

    Example ClusterLogForwarder CR

    apiVersion: observability.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: instance
      namespace: openshift-logging 1
    spec:
      serviceAccount:
        name: logging-collector 2
      outputs:
      - name: lokistack-out
        type: lokiStack 3
        lokiStack:
          target: 4
            name: logging-loki
            namespace: openshift-logging
          authentication:
            token:
              from: serviceAccount
        tls:
          ca:
            key: service-ca.crt
            configMapName: openshift-service-ca.crt
      pipelines:
      - name: infra-app-logs
        inputRefs: 5
        - application
        - infrastructure
        outputRefs:
        - lokistack-out

    1 You must specify the openshift-logging namespace.
    2 Specify the name of the service account created earlier.
    3 Select the lokiStack output type to send logs to the LokiStack instance.
    4 Point the ClusterLogForwarder to the LokiStack instance created earlier.
    5 Select the log types that you want to send to the LokiStack instance.
  8. Apply the ClusterLogForwarder CR object by running the following command:

    $ oc apply -f <filename>.yaml
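
    If you set installPlanApproval to Manual in either Subscription, pending updates are not installed until you approve the corresponding install plan. The following commands are a minimal sketch of approving an install plan from the CLI; <install_plan_name> is a placeholder for the name returned by the first command, and the namespace must match the Subscription (openshift-logging or openshift-operators-redhat):

    $ oc get installplans -n openshift-logging
    $ oc patch installplan <install_plan_name> -n openshift-logging --type merge --patch '{"spec":{"approved":true}}'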

Verification

  1. Verify the installation by running the following command:

    $ oc get pods -n openshift-logging

    Example output

    $ oc get pods -n openshift-logging
    NAME                                               READY   STATUS    RESTARTS   AGE
    cluster-logging-operator-fb7f7cf69-8jsbq           1/1     Running   0          98m
    instance-222js                                     2/2     Running   0          18m
    instance-g9ddv                                     2/2     Running   0          18m
    instance-hfqq8                                     2/2     Running   0          18m
    instance-sphwg                                     2/2     Running   0          18m
    instance-vv7zn                                     2/2     Running   0          18m
    instance-wk5zz                                     2/2     Running   0          18m
    logging-loki-compactor-0                           1/1     Running   0          42m
    logging-loki-distributor-7d7688bcb9-dvcj8          1/1     Running   0          42m
    logging-loki-gateway-5f6c75f879-bl7k9              2/2     Running   0          42m
    logging-loki-gateway-5f6c75f879-xhq98              2/2     Running   0          42m
    logging-loki-index-gateway-0                       1/1     Running   0          42m
    logging-loki-ingester-0                            1/1     Running   0          42m
    logging-loki-querier-6b7b56bccc-2v9q4              1/1     Running   0          42m
    logging-loki-query-frontend-84fb57c578-gq2f7       1/1     Running   0          42m
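
    You can also optionally review the status conditions of the ClusterLogForwarder resource, which are the same conditions referenced in the web console verification later in this chapter. The following command is a minimal sketch, assuming the instance name used in this procedure:

    $ oc get clusterlogforwarder instance -n openshift-logging -o yaml

    In the output, review the status.conditions stanza for conditions such as observability.openshift.io/Valid and Ready.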

Install the logging UI plugin by using the command-line interface (CLI) so that you can visualize logs.

Prerequisites

  • You have administrator permissions.
  • You installed the OpenShift CLI (oc).
  • You installed and configured Loki Operator.

Procedure

  1. Install the Cluster Observability Operator. For more information, see Installing the Cluster Observability Operator.
  2. Create a UIPlugin custom resource (CR):

    Example UIPlugin CR

    apiVersion: observability.openshift.io/v1alpha1
    kind: UIPlugin
    metadata:
      name: logging 1
    spec:
      type: Logging 2
      logging:
        lokiStack:
          name: logging-loki 3
        logsLimit: 50
        timeout: 30s
        schema: otel 4

    1 Set name to logging.
    2 Set type to Logging.
    3 The name value must match the name of your LokiStack instance. If you did not install LokiStack in the openshift-logging namespace, set the LokiStack namespace under the lokiStack configuration.
    4 schema is one of otel, viaq, or select. The default is viaq if no value is specified. When you choose select, you can select the mode in the UI when you run a query.
    Note

    The following are known issues for the logging UI plugin. For more information, see OU-587.

    • The schema feature is only supported in Red Hat OpenShift Logging 4.15 and later. In earlier versions of Red Hat OpenShift Logging, the logging UI plugin will only use the viaq attribute, ignoring any other values that might be set.
    • Non-administrator users cannot query logs using the otel attribute with logging for Red Hat OpenShift versions 5.8 to 6.2. This issue will be fixed in a future logging release. (LOG-6589)
    • In logging for Red Hat OpenShift version 5.9, the severity_text Otel attribute is not set.
  3. Apply the UIPlugin CR object by running the following command:

    $ oc apply -f <filename>.yaml

Verification

  1. Access the Red Hat OpenShift Logging web console, and refresh the page if a pop-up message instructs you to do so.
  2. Navigate to the Observe → Logs panel, where you can run LogQL queries; a sample query is sketched after this step. You can also query logs for individual pods from the Aggregated Logs tab of a specific pod.
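
    For example, with the default viaq schema you can start with a query similar to the following sketch, which returns application log entries that contain the word error; the available label names depend on the schema that you configured in the UIPlugin CR:

    { log_type="application" } |= "error"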

1.3. Installation by using the web console

The following sections describe installing the Loki Operator and the Red Hat OpenShift Logging Operator by using the web console.

Install Loki Operator on your OpenShift Container Platform cluster to manage the Loki log store from the OperatorHub by using the OpenShift Container Platform web console. You can deploy and configure the Loki log store by creating a LokiStack resource that the Loki Operator reconciles.

Prerequisites

  • You have administrator permissions.
  • You have access to the OpenShift Container Platform web console.
  • You have access to a supported object store (AWS S3, Google Cloud Storage, Azure, Swift, Minio, OpenShift Data Foundation).

Procedure

  1. In the OpenShift Container Platform web console Administrator perspective, go to Operators → OperatorHub.
  2. Type Loki Operator in the Filter by keyword field. Click Loki Operator in the list of available Operators, and then click Install.

    Important

    The Community Loki Operator is not supported by Red Hat.

  3. Select stable-x.y as the Update channel.

    The Loki Operator must be deployed to the global Operator group namespace openshift-operators-redhat, so the Installation mode and Installed Namespace are already selected. If this namespace does not already exist, it will be created for you.

  4. Select Enable Operator-recommended cluster monitoring on this namespace.

    This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace.

  5. For Update approval select Automatic, then click Install.

    If the approval strategy in the subscription is set to Automatic, the update process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual, you must manually approve pending updates.

    Note

    An Operator might display a Failed status before the installation completes. If the Operator install completes with an InstallSucceeded message, refresh the page.

  6. While the Operator installs, create the namespace to which the log store will be deployed.

    1. Click + in the top right of the screen to access the Import YAML page.
    2. Add the YAML definition for the openshift-logging namespace:

      Example namespace object

      apiVersion: v1
      kind: Namespace
      metadata:
        name: openshift-logging 1
        labels:
          openshift.io/cluster-monitoring: "true" 2

      1 The openshift-logging namespace is dedicated for all logging workloads.
      2 A string value that specifies the label, as shown, to ensure that cluster monitoring scrapes the openshift-logging namespace.
    3. Click Create.
  7. Create a secret with the credentials to access the object storage.

    1. Click + in the top right of the screen to access the Import YAML page.
    2. Add the YAML definition for the secret. For example, create a secret to access Amazon Web Services (AWS) S3:

      Example Secret object

      apiVersion: v1
      kind: Secret
      metadata:
        name: logging-loki-s3 1
        namespace: openshift-logging 2
      stringData: 3
        access_key_id: <access_key_id>
        access_key_secret: <access_secret>
        bucketnames: s3-bucket-name
        endpoint: https://s3.eu-central-1.amazonaws.com
        region: eu-central-1

      1 Note down the name used for the secret logging-loki-s3 to use it later when creating the LokiStack resource.
      2 Set the namespace to openshift-logging as that will be the namespace used to deploy LokiStack.
      3 For the contents of the secret, see the Loki object storage section.
      Important

      If there is no retention period defined on the S3 bucket or in the LokiStack custom resource (CR), then the logs are not pruned and they stay in the S3 bucket forever, which might fill up the S3 storage.

    3. Click Create.
  8. Navigate to the Installed Operators page. Select the Loki Operator. Under Provided APIs, find the LokiStack resource and click Create Instance.
  9. Select YAML view, and then use the following template to create a LokiStack CR:

    Example LokiStack CR

    apiVersion: loki.grafana.com/v1
    kind: LokiStack
    metadata:
      name: logging-loki 1
      namespace: openshift-logging 2
    spec:
      size: 1x.small 3
      storage:
        schemas:
        - version: v13
          effectiveDate: "<yyyy>-<mm>-<dd>"
        secret:
          name: logging-loki-s3 4
          type: s3 5
      storageClassName: <storage_class_name> 6
      tenants:
        mode: openshift-logging 7

    1 Use the name logging-loki.
    2 You must specify openshift-logging as the namespace.
    3 Specify the deployment size. Supported size options for production instances of Loki are 1x.extra-small, 1x.small, or 1x.medium. Additionally, 1x.pico is supported starting with logging 6.1.
    4 Specify the name of your log store secret.
    5 Specify the corresponding storage type.
    6 Specify the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage. You can list the available storage classes for your cluster by using the oc get storageclasses command.
    7 The openshift-logging mode is the default tenancy mode where a tenant is created for log types, such as audit, infrastructure, and application. This enables access control for individual users and user groups to different log streams.
  10. Click Create.

Verification

  1. In the LokiStack tab, verify that you see your LokiStack instance.
  2. In the Status column, verify that you see the message Condition: Ready with a green checkmark.

Install Red Hat OpenShift Logging Operator on your OpenShift Container Platform cluster to collect and forward logs to a log store from the OperatorHub by using the OpenShift Container Platform web console.

Prerequisites

  • You have administrator permissions.
  • You have access to the OpenShift Container Platform web console.
  • You installed and configured Loki Operator.

Procedure

  1. In the OpenShift Container Platform web console Administrator perspective, go to Operators → OperatorHub.
  2. Type Red Hat OpenShift Logging Operator in the Filter by keyword field. Click Red Hat OpenShift Logging Operator in the list of available Operators, and then click Install.
  3. Select stable-x.y as the Update channel. The latest version is already selected in the Version field.

    The Red Hat OpenShift Logging Operator must be deployed to the logging namespace openshift-logging, so the Installation mode and Installed Namespace are already selected. If this namespace does not already exist, it will be created for you.

  4. Select Enable Operator-recommended cluster monitoring on this namespace.

    This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-logging namespace.

  5. For Update approval select Automatic, then click Install.

    If the approval strategy in the subscription is set to Automatic, the update process initiates as soon as a new operator version is available in the selected channel. If the approval strategy is set to Manual, you must manually approve pending updates.

    Note

    An Operator might display a Failed status before the installation completes. If the operator installation completes with an InstallSucceeded message, refresh the page.

  6. While the operator installs, create the service account that will be used by the log collector to collect the logs.

    1. Click the + in the top right of the screen to access the Import YAML page.
    2. Enter the YAML definition for the service account.

      Example ServiceAccount object

      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: logging-collector 1
        namespace: openshift-logging 2

      1 Note down the name used for the service account logging-collector to use it later when creating the ClusterLogForwarder resource.
      2 Set the namespace to openshift-logging because that is the namespace for deploying the ClusterLogForwarder resource.
    3. Click the Create button.
  7. Create the ClusterRoleBinding objects to grant the log collector the necessary permissions to access the logs that you want to collect and to write to the log store, for example infrastructure and application logs.

    1. Click the + in the top right of the screen to access the Import YAML page.
    2. Enter the YAML definition for the ClusterRoleBinding resources.

      Example ClusterRoleBinding resources

      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: logging-collector:write-logs
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: logging-collector-logs-writer 1
      subjects:
      - kind: ServiceAccount
        name: logging-collector
        namespace: openshift-logging
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: logging-collector:collect-application
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: collect-application-logs 2
      subjects:
      - kind: ServiceAccount
        name: logging-collector
        namespace: openshift-logging
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: logging-collector:collect-infrastructure
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: collect-infrastructure-logs 3
      subjects:
      - kind: ServiceAccount
        name: logging-collector
        namespace: openshift-logging

      1 The cluster role to allow the log collector to write logs to LokiStack.
      2 The cluster role to allow the log collector to collect logs from applications.
      3 The cluster role to allow the log collector to collect logs from infrastructure.
    3. Click the Create button.
  8. Go to the Operators → Installed Operators page. Select the operator and click the All instances tab.
  9. After granting the necessary permissions to the service account, navigate to the Installed Operators page. Select the Red Hat OpenShift Logging Operator. Under Provided APIs, find the ClusterLogForwarder resource and click Create Instance.
  10. Select YAML view, and then use the following template to create a ClusterLogForwarder CR:

    Example ClusterLogForwarder CR

    apiVersion: observability.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: instance
      namespace: openshift-logging 1
    spec:
      serviceAccount:
        name: logging-collector 2
      outputs:
      - name: lokistack-out
        type: lokiStack 3
        lokiStack:
          target: 4
            name: logging-loki
            namespace: openshift-logging
          authentication:
            token:
              from: serviceAccount
        tls:
          ca:
            key: service-ca.crt
            configMapName: openshift-service-ca.crt
      pipelines:
      - name: infra-app-logs
        inputRefs: 5
        - application
        - infrastructure
        outputRefs:
        - lokistack-out

    1 You must specify openshift-logging as the namespace.
    2 Specify the name of the service account created earlier.
    3 Select the lokiStack output type to send logs to the LokiStack instance.
    4 Point the ClusterLogForwarder to the LokiStack instance created earlier.
    5 Select the log types that you want to send to the LokiStack instance.
  11. Click Create.

Verification

  1. In the ClusterLogForwarder tab verify that you see your ClusterLogForwarder instance.
  2. In the Status column, verify that you see the messages:

    • Condition: observability.openshift.io/Authorized
    • observability.openshift.io/Valid, Ready

Install the logging UI plugin by using the web console so that you can visualize logs.

Prerequisites

  • You have administrator permissions.
  • You have access to the Red Hat OpenShift Logging web console.
  • You installed and configured Loki Operator.

Procedure

  1. Install the Cluster Observability Operator. For more information, see Installing the Cluster Observability Operator.
  2. Navigate to the Installed Operators page. Under Provided APIs, select ClusterObservabilityOperator. Find the UIPlugin resource and click Create Instance.
  3. Select the YAML view, and then use the following template to create a UIPlugin custom resource (CR):

    Example UIPlugin CR

    apiVersion: observability.openshift.io/v1alpha1
    kind: UIPlugin
    metadata:
      name: logging 1
    spec:
      type: Logging 2
      logging:
        lokiStack:
          name: logging-loki 3
        logsLimit: 50
        timeout: 30s
        schema: otel 4

    1 Set name to logging.
    2 Set type to Logging.
    3 The name value must match the name of your LokiStack instance. If you did not install LokiStack in the openshift-logging namespace, set the LokiStack namespace under the lokiStack configuration.
    4 schema is one of otel, viaq, or select. The default is viaq if no value is specified. When you choose select, you can select the mode in the UI when you run a query.
    Note

    The following are known issues for the logging UI plugin. For more information, see OU-587.

    • The schema feature is only supported in Red Hat OpenShift Logging 4.15 and later. In earlier versions of Red Hat OpenShift Logging, the logging UI plugin will only use the viaq attribute, ignoring any other values that might be set.
    • Non-administrator users cannot query logs using the otel attribute with logging for Red Hat OpenShift versions 5.8 to 6.2. This issue will be fixed in a future logging release. (LOG-6589)
    • In logging for Red Hat OpenShift version 5.9, the severity_text Otel attribute is not set.
  4. Click Create.

Verification

  1. Refresh the page when a pop-up message instructs you to do so.
  2. Navigate to the Observe → Logs panel, where you can run LogQL queries. You can also query logs for individual pods from the Aggregated Logs tab of a specific pod.