Logging


Red Hat OpenShift Service on AWS 4

Logging installation, usage, and release notes on Red Hat OpenShift Service on AWS

Red Hat OpenShift Documentation Team

Abstract

This document provides instructions for configuring OpenShift Logging in Red Hat OpenShift Service on AWS (ROSA).

Chapter 1. About Logging

As a cluster administrator, you can deploy logging on your Red Hat OpenShift Service on AWS cluster, and use it to collect and aggregate node system audit logs, application container logs, and infrastructure logs.

You can use logging to perform the following tasks:

  • Forward logs to your chosen log outputs, including on-cluster, Red Hat managed log storage.
  • Visualize your log data in the Red Hat OpenShift Service on AWS web console.
Note

Because logging releases on a different cadence from Red Hat OpenShift Service on AWS, the logging documentation is available as a separate documentation set at Red Hat OpenShift Logging.

Chapter 2. Forwarding control plane logs

Red Hat OpenShift Service on AWS provides a control plane log forwarder that is a separate system outside your cluster. You can use the control plane log forwarder to send your logs to either an Amazon CloudWatch group or Amazon S3 bucket.

The Red Hat OpenShift Service on AWS control plane log forwarder is a managed system and it does not use resources reserved for workloads on your worker nodes.

2.1. Determining what log groups to use

When you forward control plane logs to Amazon CloudWatch or S3, you must decide on what log groups you want to use. Because of the existing AWS pricing for the respective services, you can expect additional costs associated with forwarding and storing your logs in S3 and CloudWatch. When you determine what log group to use, consider these additional costs along with other factors, such as your log retention requirements.

Each log group provides access to a different set of applications, and this set can change depending on what you choose to enable or disable for your logs.

When you forward log groups, you must specify a group or application. When you specify a group, the log forwarder collects all the applications in that group. Instead of selecting a group, you can select individual applications. When you set up your log forwarder, you must specify at least one group or application, but you do not need to specify both.
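For example, a hypothetical selection fragment might forward everything in the api group plus the standalone etcd application, which belongs to no group. The destination fields (cloudwatch or s3), which are covered later in this chapter, are omitted here:

```yaml
# Hypothetical selection fragment; destination fields (cloudwatch: or s3:) are omitted.
groups:
  - "api"            # collects every application in the api group
applications:
  - "etcd"           # etcd has no group, so it must be listed individually
```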

The following table lists available log groups:

Table 2.1. Log groups
Each entry lists the log group name, the benefit of that log group, and example applications available for that log group.

api

Records every request made to the cluster. Supports security by detecting unauthorized access attempts.

  • audit-webhook
  • kube-apiserver
  • oauth-openshift
  • openshift-apiserver
  • openshift-oauth-apiserver
  • packageserver
  • validation-webhook

authentication

Tracks login attempts and requests for tokens. Supports security by recording authenticated user information.

  • ignition-server
  • konnectivity-agent

controller manager

Monitors the controllers that manage the state of your clusters. Clarifies differences among the cluster states, for example, the Current, Desired, Health, and Feature states.

  • aws-ebs-csi-driver-controller
  • capi-provider-controller-manager
  • catalog-operator
  • cloud-controller-manager
  • cloud-credential-operator
  • cloud-network-config-controller
  • cluster-network-operator
  • cluster-node-tuning-operator
  • cluster-policy-controller
  • cluster-version-operator
  • control-plane-operator
  • control-plane-pki-operator
  • csi-snapshot-controller-operator
  • csi-snapshot-controller
  • dns-operator
  • hosted-cluster-config-operator
  • ingress-operator
  • kube-controller-manager
  • machine-approver
  • multus-admission-controller
  • network-node-identity
  • olm-operator
  • openshift-controller-manager
  • openshift-route-controller-manager
  • ovnkube-control-plane

scheduler

Records the placement of each pod on every node. Shows why pods are in a Running or Pending state.

  • kube-scheduler

not applicable

These applications do not belong to a defined log group. To forward their logs, set these applications in the applications array.

  • certified-operators-catalog
  • cluster-api
  • community-operators-catalog
  • etcd
  • private-router
  • redhat-marketplace-catalog
  • redhat-operators-catalog

2.2. Creating an IAM role and policy

When you forward your logs to an Amazon CloudWatch group or S3 bucket, those locations exist outside your control plane. You must create an Identity and Access Management (IAM) role and policy so that your log forwarder has the permissions required to send these logs to your chosen destination, CloudWatch or S3.

Note
  • To use a CloudWatch group, you must create an IAM role and policy.
  • To use an S3 bucket, you do not need an IAM role and policy.
  • The only supported Amazon S3-managed encryption method is SSE-S3.

Prerequisites

  • You have ensured that the name of your IAM role matches the pattern arn:aws:iam::*:role/CustomerLogDistribution-*.
  • You have installed and configured the latest ROSA command-line interface (CLI) (rosa) on your installation host.
  • You have installed and configured the latest Amazon Web Services (AWS) command-line interface (CLI) on your installation host.

Procedure

  1. To enable the log forwarder delivery capability, create an assume-role-policy.json file that contains the following IAM policy sample:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "AWS": "arn:aws:iam::859037107838:role/ROSA-CentralLogDistributionRole-241c1a86"
                },
                "Action": "sts:AssumeRole"
            }
        ]
    }
  2. To enable the log forwarder distribution capability, create an IAM role whose name includes the CustomerLogDistribution prefix by running the following command:

    $ aws iam create-role \
        --role-name CustomerLogDistribution-RH \
        --assume-role-policy-document file://assume-role-policy.json
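Steps 1 and 2 can also be scripted together. The following sketch writes the trust policy from step 1 and validates the JSON before the role is created; the principal ARN and the role name are the examples shown in the steps above:

```shell
#!/bin/sh
# Write the trust policy from step 1 to assume-role-policy.json.
cat > assume-role-policy.json <<'EOF'
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::859037107838:role/ROSA-CentralLogDistributionRole-241c1a86"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
EOF

# Validate the JSON before using it; create-role rejects malformed documents.
python3 -m json.tool assume-role-policy.json > /dev/null && echo "policy OK"

# Then create the role exactly as in step 2:
# aws iam create-role \
#     --role-name CustomerLogDistribution-RH \
#     --assume-role-policy-document file://assume-role-policy.json
```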

Next steps

After you create an IAM role and policy, decide whether to send your control plane logs to a CloudWatch log group, an S3 bucket, or both. See the following summary of CloudWatch and S3 to help you decide:

  • Use CloudWatch for logs requiring immediate action or organization.
  • Use S3 for logs requiring long-term storage or large-scale data analysis.

2.3. Setting up the CloudWatch log group

If you have logs that require immediate action or organization, set up an Amazon CloudWatch log group.

Prerequisites

  • You have created an IAM role and policy.
  • You have ensured that the name of your IAM role has the prefix CustomerLogDistribution.

Procedure

  1. Create the CloudWatch log group by running the following command:

    $ aws logs create-log-group --log-group-name <your_log_group_name>
  2. Create a cloudwatch-policy.json file that grants the log forwarder write access to the CloudWatch log group by applying the following IAM policy sample:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "CreatePutLogs",
                "Effect": "Allow",
                "Action": [
                    "logs:CreateLogStream",
                    "logs:PutLogEvents"
                ],
                "Resource": "<your_log_group_arn>:*"
            },
            {
                "Sid": "DescribeLogs",
                "Effect": "Allow",
                "Action": [
                    "logs:DescribeLogGroups",
                    "logs:DescribeLogStreams"
                ],
                "Resource": "*"
            }
        ]
    }
  3. Attach the policy to the IAM role by running the following command:

    $ aws iam put-role-policy \
        --role-name CustomerLogDistribution-RH \
        --policy-name Allow-CloudWatch-Writes \
        --policy-document file://cloudwatch-policy.json
  4. Configure your Red Hat OpenShift Service on AWS cluster to forward logs to the CloudWatch log group by applying the following sample YAML. Specify an application, a group, or both:

    cloudwatch:
      cloudwatch_log_role_arn: "arn:aws:iam::123456789012:role/RosaCloudWatch"
      cloudwatch_log_group_name: "rosa-logs"
      applications:
        - "<example_app1>"
      groups:
        - "<example_group1>"

    where:

    <example_app1>
    Add one or more applications. For a list of applications, see the table in "Determining what log groups to use".
    <example_group1>
    Add one or more of the following groups: api, authentication, controller manager, scheduler.
  5. Enable the log forwarder for your Red Hat OpenShift Service on AWS cluster.

    1. To enable control plane log forwarding on a new cluster, include the log forwarding configuration by running the following command:

      $ rosa create cluster --log-fwd-config="<path_to_file>.yaml"
    2. To enable control plane log forwarding on an existing cluster, include the log forwarding configuration by running the following command:

      $ rosa create log-forwarder -c <cluster> --log-fwd-config="<path_to_file>.yaml" -o yaml
  6. Optional: For an example of forwarding logs to the CloudWatch log group, see the following sample YAML:

    cloudwatch:
      cloudwatch_log_role_arn: "cloudwatch-log-role-arn"
      cloudwatch_log_group_name: "cloudwatch-group-name"
      applications:
        - "<example_app1>"
      groups:
        - "<example_group1>"

2.4. Setting up the S3 bucket

If you have logs that need long-term storage or large-scale data analysis, set up an Amazon S3 bucket.

Prerequisites

  • If you want to avoid limitations with the managed keys for your S3 bucket, you have created an IAM role and policy.
  • You have ensured that the name of your IAM role has the prefix CustomerLogDistribution.

Procedure

  1. Create the S3 bucket by running the following command:

    $ aws s3api create-bucket \
        --bucket <your_s3_bucket_name> \
        --region <your_aws_region> \
        --create-bucket-configuration LocationConstraint=<cluster_aws_region>
  2. Configure the policy for the S3 bucket by applying the following S3 bucket policy sample:

     "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowCentralLogDistributionWrite",
                "Effect": "Allow",
                "Principal": {
                    "AWS": "arn:aws:iam::859037107838:role/ROSA-CentralLogDistributionRole-241c1a86"
                },
                "Action": "s3:PutObject",
                "Resource": "arn:aws:s3:::<your_s3_bucket_name>/*",
                "Condition": {
                    "StringEquals": {
                        "s3:x-amz-acl": "bucket-owner-full-control"
                    }
                }
            }
        ]
    }
  3. Attach the policy to the S3 bucket by running the following command:

    $ aws s3api put-bucket-policy \
        --bucket <your_s3_bucket_name> \
        --policy file://s3-bucket-policy.json
  4. Configure your Red Hat OpenShift Service on AWS cluster to forward logs to the S3 bucket by applying the following sample YAML. Specify an application, a group, or both:

    s3:
      s3_config_bucket_name: "my-log-bucket"
      s3_config_bucket_prefix: "my-bucket-prefix"
      applications:
        - "<example_app1>"
      groups:
        - "<example_group1>"

    where:

    <example_app1>
    Add one or more applications. For a list of applications, see the table in "Determining what log groups to use".
    <example_group1>
    Add one or more of the following groups: api, authentication, controller manager, scheduler.
  5. Enable the log forwarder for your Red Hat OpenShift Service on AWS cluster.

    1. To enable control plane log forwarding on a new cluster, include the log forwarding configuration by running the following command:

      $ rosa create cluster --log-fwd-config="<path_to_file>.yaml"
    2. To enable control plane log forwarding on an existing cluster, include the log forwarding configuration by running the following command:

      $ rosa create log-forwarder -c <cluster> --log-fwd-config="<path_to_file>.yaml" -o yaml
  6. Optional: For an example of forwarding logs to the S3 bucket, see the following sample YAML:

    s3:
      s3_config_bucket_name: "s3-bucket-name"
      s3_config_bucket_prefix: "s3-bucket-prefix"
      groups:
        - "<example_group1>"

2.5. Managing control plane log forwarding

After you configure your Red Hat OpenShift Service on AWS cluster to use your selected log forwarder for control plane logs, use the following commands based on your specific needs. For all of these commands, you must provide the cluster ID or cluster name in the --cluster flag:

rosa create log-forwarder -c <cluster_name|cluster_id>
Configures your Red Hat OpenShift Service on AWS cluster to use the log forwarder.
rosa list log-forwarder -c <cluster_name|cluster_id>
Displays all of the log forwarder configurations for a Red Hat OpenShift Service on AWS cluster.
rosa describe log-forwarder -c <cluster_name|cluster_id> <log_fwd_id>
Provides additional details for a specific log forwarder.
rosa edit log-forwarder -c <cluster_name|cluster_id> <log_fwd_id>
Changes the following log forwarder fields: groups, applications, and S3 and CloudWatch configurations.
rosa delete log-forwarder -c <cluster_name|cluster_id> <log_fwd_id>

Deletes the log forwarder configuration. Logs are no longer forwarded to your chosen destinations but are not automatically deleted. If you no longer want to store your logs in the S3 bucket or CloudWatch group, delete those logs.

To change the following log forwarder fields, you must delete the log forwarder and re-create it with the updated values: the ID, the cluster ID, and the type for S3 and CloudWatch.
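Because those fields cannot be edited in place, rotating a log forwarder amounts to a delete followed by a create. The following sketch is a dry run with hypothetical cluster and forwarder identifiers: it only prints each rosa command instead of executing it.

```shell
#!/bin/sh
# Hypothetical identifiers; substitute your own cluster name and forwarder ID.
CLUSTER="my-cluster"
LOG_FWD_ID="abc123"
CONFIG="log-fwd-config.yaml"

# Dry run: print each rosa command instead of executing it.
run() { echo "+ $*"; }

run rosa delete log-forwarder -c "$CLUSTER" "$LOG_FWD_ID"
run rosa create log-forwarder -c "$CLUSTER" --log-fwd-config="$CONFIG" -o yaml
run rosa list log-forwarder -c "$CLUSTER"
```

To execute the sequence for real, remove the run wrapper and invoke the rosa commands directly.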

Legal Notice

Copyright © Red Hat

OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).

Modified versions must remove all Red Hat trademarks.

Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.

Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.

Linux® is the registered trademark of Linus Torvalds in the United States and other countries.

Java® is a registered trademark of Oracle and/or its affiliates.

XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.

MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.

Node.js® is an official trademark of the OpenJS Foundation.

The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.

All other trademarks are the property of their respective owners.
