
Chapter 9. Viewing logs and audit records


As a cluster administrator, you can use the OpenShift AI Operator logger to monitor and troubleshoot issues. You can also use OpenShift audit records to review a history of changes made to the OpenShift AI Operator configuration.

9.1. Configuring the OpenShift AI Operator logger

You can change the log level for OpenShift AI Operator components by setting the .spec.devFlags.logmode flag in the DSCInitialization (DSCI) custom resource at runtime. If you do not set a logmode value, the logger uses the INFO log level by default.

The log level that you set with .spec.devFlags.logmode applies to all components, not just those in a Managed state.

The following table shows the available log levels:

Log level                     Stacktrace level   Verbosity   Output    Timestamp type
devel or development          WARN               INFO        Console   Epoch timestamps
"" (no logmode value set)     ERROR              INFO        JSON      Human-readable timestamps
prod or production            ERROR              INFO        JSON      Human-readable timestamps

Logs that are set to devel or development are generated in a plain-text console format. Logs that are set to prod or production, or that do not have a level set, are generated in JSON format.

Prerequisites

  • You have administrator access to the DSCInitialization resources in the OpenShift cluster.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Log in to the OpenShift web console as a cluster administrator.
  2. Click Ecosystem → Installed Operators, and then click the Red Hat OpenShift AI Operator.
  3. Click the DSC Initialization tab.
  4. Click the default-dsci object.
  5. Click the YAML tab.
  6. In the spec section, update the .spec.devFlags.logmode flag with the log level that you want to set.

    apiVersion: dscinitialization.opendatahub.io/v2
    kind: DSCInitialization
    metadata:
      name: default-dsci
    spec:
      devFlags:
        logmode: development
  7. Click Save.

You can also configure the log level from the OpenShift CLI (oc) by using the following command with the logmode value set to the log level that you want.

oc patch dsci default-dsci -p '{"spec":{"devFlags":{"logmode":"development"}}}' --type=merge
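To confirm that the patch was applied, you can read the value back with a JSONPath query. This is a sketch that assumes the same default-dsci object name used above:

```shell
# Print the configured log level; prints an empty line if logmode is unset.
oc get dsci default-dsci -o jsonpath='{.spec.devFlags.logmode}{"\n"}'
```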

Verification

  • If you set the component log level to devel or development, logs generate more frequently and include logs at WARN level and above.
  • If you set the component log level to prod or production, or do not set a log level, logs generate less frequently and include logs at ERROR level or above.

9.1.1. Viewing the OpenShift AI Operator logs

  1. Log in to the OpenShift CLI (oc).
  2. Run the following command to stream logs from all Operator pods:

    for pod in $(oc get pods -l name=rhods-operator -n redhat-ods-operator -o name); do
      oc logs -f "$pod" -n redhat-ods-operator &
    done

    The Operator pod logs open in your terminal.

    Tip

    Press Ctrl+C to stop viewing. To fully stop all log streams, run kill $(jobs -p).

You can also view each Operator pod log in the OpenShift console by navigating to Workloads → Pods, selecting the redhat-ods-operator project, clicking a pod name, and then clicking the Logs tab.
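If you prefer a single command to the loop above, oc can follow logs from every pod that matches a label selector. This sketch assumes the same name=rhods-operator label and namespace used in the earlier procedure:

```shell
# Follow the 20 most recent log lines from each matching Operator pod.
oc logs -f -l name=rhods-operator -n redhat-ods-operator --tail=20
```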

9.2. Viewing audit records

Cluster administrators can use OpenShift auditing to see changes made to the OpenShift AI Operator configuration by reviewing modifications to the DataScienceCluster (DSC) and DSCInitialization (DSCI) custom resources. Audit logging is enabled by default in standard OpenShift cluster configurations. For more information, see Viewing audit logs in the OpenShift documentation.

Note

In Red Hat OpenShift Service on AWS, audit logging is disabled by default because the Elasticsearch log store does not provide secure storage for audit logs. To configure log forwarding, see Logging in the Red Hat OpenShift Service on AWS documentation.

The following example shows how to use the OpenShift audit logs to see the history of changes made (by users) to the DSC and DSCI custom resources.

Prerequisites

  • You have cluster administrator privileges for your OpenShift cluster.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. In a terminal window, if you are not already logged in to your OpenShift cluster as a cluster administrator, log in to the OpenShift CLI as shown in the following example:

    $ oc login <openshift_cluster_url> -u <admin_username> -p <password>
  2. To access the full content of the changed custom resources, set the OpenShift audit log policy to WriteRequestBodies or a more comprehensive profile. For more information, see Configuring the audit log policy.
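    For example, the audit profile can be set from the CLI by patching the cluster-scoped apiserver resource. This sketch assumes the standard apiserver object named cluster:

```shell
# Raise the audit log policy so that request bodies for write
# operations (create, update, patch, delete) are recorded.
oc patch apiserver cluster --type=merge \
  -p '{"spec":{"audit":{"profile":"WriteRequestBodies"}}}'
```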
  3. Fetch the audit log files that are available for the relevant control plane nodes. For example:

    oc adm node-logs --role=master --path=kube-apiserver/ \
      | awk '{ print $1 }' | sort -u \
      | while read node ; do
          oc adm node-logs $node --path=kube-apiserver/audit.log < /dev/null
        done \
      | grep opendatahub > /tmp/kube-apiserver-audit-opendatahub.log
  4. Search the files for the DSC and DSCI custom resources. For example:

    jq 'select((.objectRef.apiGroup == "dscinitialization.opendatahub.io"
                    or .objectRef.apiGroup == "datasciencecluster.opendatahub.io")
                  and .user.username != "system:serviceaccount:redhat-ods-operator:redhat-ods-operator-controller-manager"
                  and .verb != "get" and .verb != "watch" and .verb != "list")' < /tmp/kube-apiserver-audit-opendatahub.log
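To make the matched events easier to scan, you can reduce each one to a single tab-separated line. The sample event below is illustrative (not captured from a real cluster) and only shows the shape of the output:

```shell
# Write one illustrative audit event, then summarize each event as
# timestamp, user, verb, resource, and object name, separated by tabs.
cat > /tmp/audit-sample.log <<'EOF'
{"requestReceivedTimestamp":"2024-01-15T10:00:00Z","user":{"username":"admin"},"verb":"patch","objectRef":{"apiGroup":"dscinitialization.opendatahub.io","resource":"dscinitializations","name":"default-dsci"}}
EOF
jq -r '[.requestReceivedTimestamp, .user.username, .verb,
        .objectRef.resource, .objectRef.name] | @tsv' /tmp/audit-sample.log
```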

Verification

  • The commands return relevant log entries.
Tip

To configure the log retention time, see the Logging section in the OpenShift documentation.

