Security and compliance


Red Hat OpenShift Service on AWS 4

Configuring security context constraints on AWS clusters

Red Hat OpenShift Documentation Team

Abstract

This document provides instructions for configuring security context constraints.

Chapter 1. Preventing role assumption from non-allowlisted IP addresses

You can implement an additional layer of security in your AWS account to prevent role assumption from non-allowlisted IP addresses.

1.1. Creating an identity-based IAM policy

You can create an identity-based Identity and Access Management (IAM) policy that denies access to all AWS actions when the request originates from an IP address other than the Red Hat provided IP addresses.

Prerequisites

  • You have access to the AWS Management Console with the permissions required to create and modify IAM policies.

Procedure

  1. Sign in to the AWS Management Console using your AWS account credentials.
  2. Navigate to the IAM service.
  3. In the IAM console, select Policies from the left navigation menu.
  4. Click Create policy.
  5. Select the JSON tab to define the policy using JSON format.
  6. To get the IP addresses that you need to enter into the JSON policy document, run the following command:

    $ ocm get /api/clusters_mgmt/v1/trusted_ip_addresses
    Note

    These IP addresses are not permanent and are subject to change. You must continuously review the API output and make the necessary updates in the JSON policy document. A scripted sketch for refreshing the policy document appears at the end of this section.

  7. Copy and paste the following policy_document.json file into the editor:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Deny",
                "Action": "*",
                "Resource": "*",
                "Condition": {
                    "NotIpAddress": {
                        "aws:SourceIp": []
                    },
                    "Bool": {
                        "aws:ViaAWSService": "false"
                    }
                }
            }
        ]
    }
  8. Copy and paste all of the IP addresses that you retrieved in Step 6 into the "aws:SourceIp": [] array in your policy_document.json file.
  9. Click Review and create.
  10. Provide a name and description for the policy, and review the details for accuracy.
  11. Click Create policy to save the policy.
Note

The condition key aws:ViaAWSService must be set to false so that calls made on your behalf by AWS services can succeed. For example, if you make an initial call to aws ec2 describe-instances, the subsequent calls that the AWS API server makes internally to retrieve information about the EBS volumes attached to the EC2 instances fail unless aws:ViaAWSService is set to false. These subsequent calls would fail because they originate from AWS IP addresses, which are not included in the allowlist.
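
Because the trusted IP addresses change over time, you might want to script the policy refresh. The following is a minimal sketch, not an official tool: it assumes that the ocm and jq CLIs are installed and logged in, and that the trusted IP endpoint returns the addresses in an items array with the address in each item's id field. The output file name matches the policy_document.json file used in the procedure above.

    #!/usr/bin/env bash
    # Sketch: regenerate policy_document.json from the current trusted IP list.
    # Assumes the endpoint returns {"items": [{"id": "<ip_address>", ...}, ...]}.
    set -euo pipefail

    # Collect the trusted IP addresses as a JSON array.
    ips=$(ocm get /api/clusters_mgmt/v1/trusted_ip_addresses | jq '[.items[].id]')

    # Write the deny policy with the refreshed allowlist.
    jq -n --argjson ips "$ips" '{
      Version: "2012-10-17",
      Statement: [{
        Effect: "Deny",
        Action: "*",
        Resource: "*",
        Condition: {
          NotIpAddress: {"aws:SourceIp": $ips},
          Bool: {"aws:ViaAWSService": "false"}
        }
      }]
    }' > policy_document.json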

1.2. Attaching the identity-based IAM policy

After you create an identity-based IAM policy, attach it to the relevant IAM users, groups, or roles in your AWS account to prevent IP-based role assumption for those entities.

Procedure

  1. Navigate to the IAM console in the AWS Management Console.
  2. Select the default IAM ManagedOpenShift-Support-Role role to which you want to attach the policy.

    Note

    You can change the default IAM ManagedOpenShift-Support-Role role. For more information about roles, see Red Hat support access.

  3. In the Permissions tab, select Add Permissions or Create inline policy from the Add Permissions drop-down list.
  4. Search for the policy you created earlier by:

    1. Entering the policy name.
    2. Filtering by the appropriate category.
  5. Select the policy and click Attach policy.
Important

To ensure effective IP-based role assumption prevention, you must keep the allowlisted IPs up to date. Failure to do so may result in Red Hat site reliability engineering (SRE) being unable to access your account, which can affect your SLA. If you have further questions or require assistance, contact Red Hat support.
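
To confirm that the policy is attached, you can list the policies on the role with the AWS CLI. A minimal sketch; the first command applies if you attached the policy as a managed policy, and the second applies if you created it as an inline policy:

    # List managed policies attached to the support role.
    $ aws iam list-attached-role-policies \
        --role-name ManagedOpenShift-Support-Role

    # List inline policies embedded in the support role.
    $ aws iam list-role-policies \
        --role-name ManagedOpenShift-Support-Role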

Chapter 2. Forwarding control plane logs

Red Hat OpenShift Service on AWS provides a control plane log forwarder, a separate system that runs outside your cluster. You can use the control plane log forwarder to send your logs to an Amazon CloudWatch log group or an Amazon S3 bucket, depending on your preference.

Because the Red Hat OpenShift Service on AWS control plane log forwarder is a managed system, it does not contend with your workloads for resources on your worker nodes.

2.1. Prerequisites

  • You have installed and configured the latest ROSA CLI on your installation host.
  • You have installed and configured the latest Amazon Web Services (AWS) command-line interface (CLI) on your installation host.

2.2. Determining what log groups to use

When you forward control plane logs to Amazon CloudWatch or S3, you must decide which log groups to use. Because of the AWS pricing for the respective services, you can expect additional costs for forwarding and storing your logs in S3 and CloudWatch. When you determine which log groups to use, consider these additional costs along with other factors, such as your log retention requirements.

Each log group gives you access to a different set of applications, and the available applications can change depending on what you choose to enable and disable for your logs.

See the following table to help you decide which log groups you need before you begin to forward your control plane logs. Each entry lists the log group name, the benefit of that log group, and example applications available for that log group.

API

Records every request made to the cluster. Helps security by detecting unauthorized access attempts.

  • audit-webhook
  • kube-apiserver
  • oauth-openshift
  • openshift-apiserver
  • openshift-oauth-apiserver
  • packageserver
  • validation-webhook

Authentication

Tracks login attempts and requests for tokens. Helps security by recording authenticated user information.

  • ignition-server
  • konnectivity-agent

Controller manager

Monitors the controllers that manage the state of your clusters. Helps explain the differences among cluster states, for example, the Current, Desired, Health, and Feature states.

  • aws-ebs-csi-driver-controller
  • capi-provider-controller-manager
  • catalog-operator
  • cloud-controller-manager
  • cloud-credential-operator
  • cloud-network-config-controller
  • cluster-network-operator
  • cluster-node-tuning-operator
  • cluster-policy-controller
  • cluster-version-operator
  • control-plane-operator
  • control-plane-pki-operator
  • csi-snapshot-controller-operator
  • csi-snapshot-controller
  • dns-operator
  • hosted-cluster-config-operator
  • ingress-operator
  • kube-controller-manager
  • machine-approver
  • multus-admission-controller
  • network-node-identity
  • olm-operator
  • openshift-controller-manager
  • openshift-route-controller-manager
  • ovnkube-control-plane

Scheduler

Records the placement of each pod on every node. Helps you understand why pods are in a Running or Pending state.

  • kube-scheduler

Other

Any log group other than API, Authentication, Controller manager, or Scheduler. Other log groups include Application, Infrastructure, Audit, Kubernetes API server, OpenShift API server, OAuth API server, and Node.

  • certified-operators-catalog
  • cluster-api
  • community-operators-catalog
  • etcd
  • private-router
  • redhat-marketplace-catalog
  • redhat-operators-catalog

2.3. Creating an IAM role and policy

When you forward your logs to an Amazon CloudWatch log group or an S3 bucket, those destinations exist outside your control plane. You must create an IAM role and policy so that your log forwarder has the permissions required to send these logs to your chosen destination, CloudWatch or S3.

Note

To use a CloudWatch log group, you must create an IAM role and policy. To use an S3 bucket, an IAM role and policy are not required. However, if you do not create an IAM role and policy for the S3 bucket, encryption for the S3 bucket is limited to Amazon S3 managed keys (SSE-S3).
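
If you are not sure which server-side encryption currently applies to an existing bucket, you can query it with the AWS CLI. A minimal sketch; <your_s3_bucket_name> is the bucket that you plan to use for log forwarding:

    # Show the default encryption configuration for the bucket.
    # SSE-S3 is reported as "SSEAlgorithm": "AES256" in the output.
    $ aws s3api get-bucket-encryption --bucket <your_s3_bucket_name>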

Procedure

  1. To enable the log forwarder delivery capability, create an assume-role-policy.json file that contains the following trust policy sample:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "AWS": "arn:aws:iam::859037107838:role/ROSA-CentralLogDistributionRole-241c1a86"
                },
                "Action": "sts:AssumeRole"
            }
        ]
    }
  2. To enable the log forwarder distribution capability, create an IAM role whose name includes CustomerLogDistribution by running the following command:

    $ aws iam create-role \
        --role-name CustomerLogDistribution-RH \
        --assume-role-policy-document file://assume-role-policy.json
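
You can verify that the role was created with the expected trust policy before you continue. A minimal sketch, assuming the AWS CLI is configured for the same AWS account:

    # Display the trust (assume role) policy document attached to the role.
    $ aws iam get-role \
        --role-name CustomerLogDistribution-RH \
        --query 'Role.AssumeRolePolicyDocument'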

Next steps

After you create an IAM role and policy, decide whether to send your control plane logs to a CloudWatch log group, an S3 bucket, or both. See the following summary of CloudWatch and S3 to help you decide:

  • CloudWatch can help you when you have logs requiring immediate action or organization.
  • S3 can help you when you have logs needing long-term storage or large-scale data analysis.

2.4. Setting up the CloudWatch log group

If you have logs requiring immediate action or organization, set up an Amazon CloudWatch log group.

Prerequisites

  • You have created an IAM role and policy.

Procedure

  1. Create the CloudWatch log group by running the following command:

    $ aws logs create-log-group --log-group-name <your_log_group_name>
  2. Create a cloudwatch-policy.json file that grants the log forwarder permission to write to the CloudWatch log group by applying the following IAM policy sample:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "CreatePutLogs",
                "Effect": "Allow",
                "Action": [
                    "logs:CreateLogStream",
                    "logs:PutLogEvents"
                ],
                "Resource": "<your_log_group_arn>:*"
            },
            {
                "Sid": "DescribeLogs",
                "Effect": "Allow",
                "Action": [
                    "logs:DescribeLogGroups",
                    "logs:DescribeLogStreams"
                ],
                "Resource": "*"
            }
        ]
    }
  3. Attach the policy to the CustomerLogDistribution-RH role by running the following command:

    $ aws iam put-role-policy \
        --role-name CustomerLogDistribution-RH \
        --policy-name Allow-CloudWatch-Writes \
        --policy-document file://cloudwatch-policy.json
  4. Configure your Red Hat OpenShift Service on AWS cluster to forward logs to the CloudWatch log group by saving the following sample YAML to a configuration file:

    cloudwatch:
      cloudwatch_log_role_arn: "arn:aws:iam::123456789012:role/CustomerLogDistribution-RH"
      cloudwatch_log_group_name: "rosa-logs"
      applications:
        - "<example_app1>"
      groups:
        - "<example_group1>"
    <example_app1>
    Add one or more applications. For a list of applications, see the table in "Determining what log groups to use".
    <example_group1>
    Add one or more of the following groups: API, Authentication, Controller Manager, Scheduler, and Other.
  5. Enable the log forwarder to send logs from your Red Hat OpenShift Service on AWS cluster to the CloudWatch log group.

    1. To enable control plane log forwarding on a new cluster, include the log forwarding configuration by running the following command:

      $ rosa create cluster --log-fwd-config="<path_to_file>.yaml"
    2. To enable control plane log forwarding on an existing cluster, include the log forwarding configuration by running the following command:

      $ rosa create log-forwarder -c <cluster> --log-fwd-config="<path_to_file>.yaml"
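
After log forwarding is enabled, you can confirm that log streams are arriving in the log group. A minimal sketch, assuming the rosa-logs log group name from the sample configuration and an AWS CLI configured for the same region:

    # Confirm that the log group exists.
    $ aws logs describe-log-groups --log-group-name-prefix rosa-logs

    # List the most recently active log streams; forwarded logs appear here.
    $ aws logs describe-log-streams \
        --log-group-name rosa-logs \
        --order-by LastEventTime \
        --descending \
        --max-items 5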

2.5. Setting up the S3 bucket

If you have logs that need long-term storage or large-scale data analysis, set up an Amazon S3 bucket.

Prerequisites

  • To avoid limiting your S3 bucket encryption to Amazon S3 managed keys (SSE-S3), you must have created an IAM role and policy.

Procedure

  1. Create the S3 bucket by running the following command:

    $ aws s3api create-bucket \
        --bucket <your_s3_bucket_name> \
        --region <your_aws_region> \
        --create-bucket-configuration LocationConstraint=<your_aws_region>
  2. Configure the policy for the S3 bucket by applying the following S3 bucket policy sample:

     "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowCentralLogDistributionWrite",
                "Effect": "Allow",
                "Principal": {
                    "AWS": "arn:aws:iam::859037107838:role/ROSA-CentralLogDistributionRole-241c1a86"
                },
                "Action": "s3:PutObject",
                "Resource": "arn:aws:s3:::<your_s3_bucket_name>/*",
                "Condition": {
                    "StringEquals": {
                        "s3:x-amz-acl": "bucket-owner-full-control"
                    }
                }
            }
        ]
    }
  3. Attach the policy to the S3 bucket by running the following command:

    $ aws s3api put-bucket-policy \
        --bucket <your_s3_bucket_name> \
        --policy file://s3-bucket-policy.json
  4. Configure your Red Hat OpenShift Service on AWS cluster to forward logs to the S3 bucket by saving the following sample YAML to a configuration file:

    s3:
      s3_config_bucket_name: "my-log-bucket"
      s3_config_bucket_prefix: "my-bucket-prefix"
      applications:
        - "<example_app1>"
      groups:
        - "<example_group1>"
    <example_app1>
    Add one or more applications. For a list of applications, see the table in "Determining what log groups to use".
    <example_group1>
    Add one or more of the following groups: API, Authentication, Controller Manager, Scheduler, and Other.
  5. Enable the log forwarder to send logs from your Red Hat OpenShift Service on AWS cluster to the S3 bucket.

    1. To enable control plane log forwarding on a new cluster, include the log forwarding configuration by running the following command:

      $ rosa create cluster --log-fwd-config="<path_to_file>.yaml"
    2. To enable control plane log forwarding on an existing cluster, include the log forwarding configuration by running the following command:

      $ rosa create log-forwarder -c <cluster> --log-fwd-config="<path_to_file>.yaml"
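
You can confirm that the bucket policy is in place and that log objects are being delivered. A minimal sketch, using the bucket name placeholder and the my-bucket-prefix value from the sample configuration above:

    # Confirm that the bucket policy allows the central log distribution role.
    $ aws s3api get-bucket-policy --bucket <your_s3_bucket_name>

    # List delivered log objects under the configured prefix.
    $ aws s3 ls s3://<your_s3_bucket_name>/my-bucket-prefix/ --recursive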

2.6. Managing control plane log forwarding

After you configure your Red Hat OpenShift Service on AWS clusters to use your selected log forwarder for control plane logs, see the following commands to run based on your specific needs; an example session follows the list. For all of these commands, you must provide the cluster ID or cluster name with the -c (--cluster) flag:

rosa create log-forwarder -c <cluster_name|cluster_id>
Configures your Red Hat OpenShift Service on AWS cluster to use the log forwarder.
rosa list log-forwarder -c <cluster_name|cluster_id>
Displays all of the log forwarder configurations for a Red Hat OpenShift Service on AWS cluster.
rosa describe log-forwarder -c <cluster_name|cluster_id> <log-fwd-id>
Displays detailed information about that specific log forwarder.
rosa edit log-forwarder -c <cluster_name|cluster_id> <log-fwd-id>
Enables you to make changes to the log forwarder. With the edit functionality, you can change the following log forwarder fields: groups, applications, and the S3 or CloudWatch configuration, depending on the type of configuration.
rosa delete log-forwarder -c <cluster_name|cluster_id> <log-fwd-id>
Deletes the log forwarder configuration, which stops your logs from being forwarded to your chosen destinations. Your logs are not automatically deleted. If you no longer want to store your logs in the S3 bucket or CloudWatch log group, you can delete those specific logs yourself. The following fields cannot be edited; to change them, delete the log forwarder and then recreate it with your changes: ID, cluster ID, and the S3 or CloudWatch type.
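
For example, a typical inspection and cleanup session might look like the following sketch; my-cluster is a hypothetical cluster name, and <log-fwd-id> is the ID reported by the list command:

    # Show all log forwarder configurations on the cluster.
    $ rosa list log-forwarder -c my-cluster

    # Inspect one configuration in detail.
    $ rosa describe log-forwarder -c my-cluster <log-fwd-id>

    # Remove the configuration. Forwarding stops, but logs already delivered
    # to CloudWatch or S3 remain until you delete them yourself.
    $ rosa delete log-forwarder -c my-cluster <log-fwd-id>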

Legal Notice

Copyright © 2025 Red Hat

OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).

Modified versions must remove all Red Hat trademarks.

Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.

Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.

Linux® is the registered trademark of Linus Torvalds in the United States and other countries.

Java® is a registered trademark of Oracle and/or its affiliates.

XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.

MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.

Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.

The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.

All other trademarks are the property of their respective owners.
