Chapter 10. Viewing logs and audit records


As a cluster administrator, you can use the OpenShift AI Operator logger to monitor and troubleshoot issues. You can also use OpenShift audit records to review a history of changes made to the OpenShift AI Operator configuration.

10.1. Configuring the OpenShift AI Operator logger

You can change the log level for OpenShift AI Operator components by setting the .spec.devFlags.logmode flag in the DSCInitialization (DSCI) custom resource at runtime. If you do not set a logmode value, the logger uses the INFO log level by default.

The log level that you set with .spec.devFlags.logmode applies to all components, not just those in a Managed state.

The following table shows the available log levels:

| Log level                    | Stacktrace level | Verbosity | Output  | Timestamp type            |
| ---------------------------- | ---------------- | --------- | ------- | ------------------------- |
| devel or development         | WARN             | INFO      | Console | Epoch timestamps          |
| "" (or no logmode value set) | ERROR            | INFO      | JSON    | Human-readable timestamps |
| prod or production           | ERROR            | INFO      | JSON    | Human-readable timestamps |

Logs set to devel or development are generated in a plain-text console format. Logs set to prod or production, or with no level set, are generated in JSON format.

Prerequisites

  • You have administrator access to the DSCInitialization resources in the OpenShift cluster.
  • You installed the OpenShift command line interface (oc) as described in Installing the OpenShift CLI.

Procedure

  1. Log in to the OpenShift web console as a cluster administrator.
  2. Click Operators → Installed Operators, and then click the Red Hat OpenShift AI Operator.
  3. Click the DSC Initialization tab.
  4. Click the default-dsci object.
  5. Click the YAML tab.
  6. In the spec section, update the .spec.devFlags.logmode flag with the log level that you want to set.

    apiVersion: dscinitialization.opendatahub.io/v1
    kind: DSCInitialization
    metadata:
      name: default-dsci
    spec:
      devFlags:
        logmode: development
  7. Click Save.

You can also configure the log level from the OpenShift CLI by using the following command with the logmode value set to the log level that you want.

oc patch dsci default-dsci -p '{"spec":{"devFlags":{"logmode":"development"}}}' --type=merge

Verification

  • If you set the component log level to devel or development, logs generate more frequently and include logs at WARN level and above.
  • If you set the component log level to prod or production, or do not set a log level, logs generate less frequently and include logs at ERROR level or above.
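To confirm which level is currently configured, you can read the flag back from the CLI. This sketch assumes the default-dsci resource name used in the procedure above:

```shell
# Print the current logmode value from the DSCInitialization resource
# (an empty result means no logmode is set and the default applies)
oc get dsci default-dsci -o jsonpath='{.spec.devFlags.logmode}'
```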

10.1.1. Viewing the OpenShift AI Operator logs

  1. Log in to the OpenShift CLI.
  2. Run the following command to stream logs from all Operator pods:

    for pod in $(oc get pods -l name=rhods-operator -n redhat-ods-operator -o name); do
      oc logs -f "$pod" -n redhat-ods-operator &
    done

    The Operator pod logs open in your terminal.

    Tip

    Press Ctrl+C to stop viewing. To fully stop all log streams, run kill $(jobs -p).

You can also view each Operator pod log in the OpenShift console by navigating to Workloads → Pods, selecting the redhat-ods-operator project, clicking a pod name, and then clicking the Logs tab.
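As an alternative to the loop in the previous step, a single command can stream logs by label selector. This uses standard oc logs behavior, though it only follows pods that exist when the command starts:

```shell
# Stream logs from all pods that match the Operator label
oc logs -f -l name=rhods-operator -n redhat-ods-operator
```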

10.2. Viewing audit records

Cluster administrators can use OpenShift auditing to see changes made to the OpenShift AI Operator configuration by reviewing modifications to the DataScienceCluster (DSC) and DSCInitialization (DSCI) custom resources. Audit logging is enabled by default in standard OpenShift cluster configurations. For more information, see Viewing audit logs in the OpenShift documentation.

Note

In Red Hat OpenShift Service on Amazon Web Services with hosted control planes (ROSA HCP), audit logging is disabled by default because the Elasticsearch log store does not provide secure storage for audit logs. To configure log forwarding, see the Logging section in the OpenShift documentation.

The following example shows how to use the OpenShift audit logs to see the history of changes that users made to the DSC and DSCI custom resources.

Prerequisites

  • You have cluster administrator privileges for your OpenShift cluster.
  • You installed the OpenShift command line interface (oc) as described in Installing the OpenShift CLI.

Procedure

  1. In a terminal window, if you are not already logged in to your OpenShift cluster as a cluster administrator, log in to the OpenShift CLI as shown in the following example:

    $ oc login <openshift_cluster_url> -u <admin_username> -p <password>
  2. To access the full content of the changed custom resources, set the OpenShift audit log policy to WriteRequestBodies or a more comprehensive profile. For more information, see Configuring the audit log policy.
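For example, the profile can be set from the CLI. The following sketch assumes the cluster-scoped APIServer resource named cluster, which is the standard name on OpenShift clusters:

```shell
# Set the audit log policy profile so that request bodies are recorded
oc patch apiserver cluster --type=merge \
  -p '{"spec":{"audit":{"profile":"WriteRequestBodies"}}}'
```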
  3. Fetch the audit log files that are available for the relevant control plane nodes. For example:

    oc adm node-logs --role=master --path=kube-apiserver/ \
      | awk '{ print $1 }' | sort -u \
      | while read node ; do
          oc adm node-logs $node --path=kube-apiserver/audit.log < /dev/null
        done \
      | grep opendatahub > /tmp/kube-apiserver-audit-opendatahub.log
  4. Search the files for the DSC and DSCI custom resources. For example:

    jq 'select((.objectRef.apiGroup == "dscinitialization.opendatahub.io"
                    or .objectRef.apiGroup == "datasciencecluster.opendatahub.io")
                  and .user.username != "system:serviceaccount:redhat-ods-operator:redhat-ods-operator-controller-manager"
                  and .verb != "get" and .verb != "watch" and .verb != "list")' < /tmp/kube-apiserver-audit-opendatahub.log
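To reduce each matching event to a one-line summary, you can pipe the filter output through a second jq expression. The following self-contained sketch uses a hypothetical, hand-written audit entry only to illustrate the output shape; real kube-apiserver entries contain many more fields:

```shell
# Hypothetical audit event (illustrative only; real entries are much larger)
cat > /tmp/sample-audit.log <<'EOF'
{"requestReceivedTimestamp":"2024-01-01T12:00:00Z","user":{"username":"kube:admin"},"verb":"patch","objectRef":{"apiGroup":"dscinitialization.opendatahub.io","resource":"dscinitializations","name":"default-dsci"}}
EOF

# Summarize each event as: timestamp, user, verb, object name
jq -r '[.requestReceivedTimestamp, .user.username, .verb, .objectRef.name] | @tsv' \
  < /tmp/sample-audit.log
```

The same jq expression can be appended to the select filter from the previous step to summarize the real audit log.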

Verification

  • The commands return audit log entries that show which users changed the DSC and DSCI custom resources, when, and with which verbs.
Tip

To configure the log retention time, see the Logging section in the OpenShift documentation.
