Security and compliance
Configuring security context constraints on AWS clusters
Chapter 1. Adding additional constraints for IP-based AWS role assumption
You can implement an additional layer of security in your AWS account to prevent role assumption from non-allowlisted IP addresses.
1.1. Creating an identity-based IAM policy
You can create an identity-based Identity and Access Management (IAM) policy that denies access to all AWS actions when the request originates from an IP address other than the Red Hat-provided IP addresses.
Prerequisites
- You have access to the AWS Management Console with the permissions required to create and modify IAM policies.
Procedure
- Sign in to the AWS Management Console using your AWS account credentials.
- Navigate to the IAM service.
- In the IAM console, select Policies from the left navigation menu.
- Click Create policy.
- Select the JSON tab to define the policy using JSON format.
To get the IP addresses that you need to enter into the JSON policy document, run the following command:

$ ocm get /api/clusters_mgmt/v1/trusted_ip_addresses

Note: These IP addresses are not permanent and are subject to change. You must continuously review the API output and make the necessary updates in the JSON policy document.
- Copy and paste the following policy_document.json file into the editor.
- Copy and paste all of the IP addresses that you retrieved with the ocm get command into the "aws:SourceIp": [] array in your policy_document.json file.
- Click Review and create.
- Provide a name and description for the policy, and review the details for accuracy.
- Click Create policy to save the policy.
The condition key aws:ViaAWSService must be set to false to enable subsequent calls to succeed based on the initial call. For example, if you make an initial call to aws ec2 describe-instances, all subsequent calls made within the AWS API server to retrieve information about the EBS volumes attached to the EC2 instance will fail if the condition key aws:ViaAWSService is not set to false. The subsequent calls would fail because they originate from AWS IP addresses, which are not included in the allowlist.
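Taken together, a policy_document.json consistent with the conditions described above might look like the following sketch. The Sid value is illustrative, and the empty "aws:SourceIp" array is where you paste the Red Hat trusted IP addresses:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyRequestsFromNonAllowlistedIPs",
      "Effect": "Deny",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": []
        },
        "Bool": {
          "aws:ViaAWSService": "false"
        }
      }
    }
  ]
}
```

With this shape, the deny applies only to requests that arrive directly from an IP address outside the allowlist; calls that AWS services make on your behalf (where aws:ViaAWSService is true) are not denied.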
1.2. Attaching the identity-based IAM policy
Once you have created an identity-based IAM policy, attach it to the relevant IAM users, groups, or roles in your AWS account to prevent IP-based role assumption for those entities.
Procedure
- Navigate to the IAM console in the AWS Management Console.
- Select the default IAM ManagedOpenShift-Support-Role role to which you want to attach the policy.

  Note: You can change the default IAM ManagedOpenShift-Support-Role role. For more information about roles, see Red Hat support access.
- In the Permissions tab, select Add permissions or Create inline policy from the Add permissions drop-down list.
Search for the policy you created earlier by:
- Entering the policy name.
- Filtering by the appropriate category.
- Select the policy and click Attach policy.
To ensure effective IP-based role assumption prevention, you must keep the allowlisted IP addresses up to date. Failure to do so may result in Red Hat site reliability engineering (SRE) being unable to access your account, which can affect your SLA. If you have further questions or require assistance, contact Red Hat support.
Chapter 2. Forwarding control plane logs
Red Hat OpenShift Service on AWS provides a control plane log forwarder that is a separate system outside your cluster. You can use the control plane log forwarder to send your logs to an Amazon CloudWatch group, an Amazon S3 bucket, or both, depending on your preference.
Since the Red Hat OpenShift Service on AWS control plane log forwarder is a managed system, it does not contend for resources against your workloads on your worker nodes.
2.1. Prerequisites
- You have installed and configured the latest ROSA CLI on your installation host.
- You have installed and configured the latest Amazon Web Services (AWS) command-line interface (CLI) on your installation host.
2.2. Determining what log groups to use
When you forward control plane logs to Amazon CloudWatch or S3, you must decide which log groups you want to use. Because of the existing AWS pricing for the respective services, you can expect additional costs associated with forwarding and storing your logs in S3 and CloudWatch. When you determine which log groups to use, consider these additional costs along with other factors, such as your log retention requirements.
For each log group, you have access to different applications, and these applications can change depending on what you choose to enable and disable with your logs.
See the following table to help you decide what log groups you need before you begin to forward your control plane logs:
| Log group name | Benefit of that log group | Example applications available for that log group |
|---|---|---|
| API | Records every request made to the cluster. Helps security by detecting unauthorized access attempts. | |
| Authentication | Tracks login attempts and requests for tokens. Helps security by recording authenticated user information. | |
| Controller manager | Monitors the controllers that manage the state of your clusters. Helps explain the differences among cluster states. | |
| Scheduler | Records the placement of each pod on every node. Helps you understand why pods are in a given state. | |
| Other | Any log group different from API, Authentication, Controller manager, and Scheduler. | |
2.3. Creating an IAM role and policy
When you forward your logs to an Amazon CloudWatch group or S3 bucket, those locations exist outside your control plane. You must create an IAM role and policy so that your log forwarder has the right permissions and capabilities to send these logs to your chosen destination, CloudWatch or S3.
To use a CloudWatch group, you must create an IAM role and policy. To use an S3 bucket, you do not need an IAM role and policy. However, if you do not create an IAM role and policy for the S3 bucket, the encryption for the S3 bucket is limited to Amazon S3-managed keys (SSE-S3).
Procedure
- To enable the log forwarder delivery capability, prepare the IAM trust policy by creating an assume-role-policy.json file.
- To enable the log forwarder distribution capability, create an IAM role whose name includes CustomerLogDistribution by running the following command:

  $ aws iam create-role \
      --role-name CustomerLogDistribution-RH \
      --assume-role-policy-document file://assume-role-policy.json
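The assume-role-policy.json file is a trust policy that allows the Red Hat log forwarder to assume the role. As a sketch, it might look like the following; the principal shown is a placeholder, and the actual principal value is supplied by Red Hat:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "<red_hat_provided_principal_arn>"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```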
Next steps
After you create an IAM role and policy, you must decide to send your control plane logs to either a CloudWatch log group, an S3 bucket, or both. See the following summary about CloudWatch and S3 to help you decide what you want to do:
- CloudWatch can help you when you have logs requiring immediate action or organization.
- S3 can help you when you have logs needing long-term storage or large-scale data analysis.
2.4. Setting up the CloudWatch log group
If you have logs requiring immediate action or organization, set up an Amazon CloudWatch log group.
Prerequisites
- You have created an IAM role and policy.
Procedure
Create the CloudWatch log group by running the following command:

$ aws logs create-log-group --log-group-name <your_log_group_name>

In your Red Hat OpenShift Service on AWS cluster, configure the log forwarder to use the CloudWatch log group by applying the following JSON sample:
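This JSON sample is the cloudwatch-policy.json file that the next step attaches to the role. As a sketch, a policy granting the role write access to the log group might look like the following; the action list and resource ARN are assumptions based on the standard CloudWatch Logs permission model:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:<your_aws_region>:<your_account_id>:log-group:<your_log_group_name>:*"
    }
  ]
}
```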
Attach the policy to the CloudWatch role by running the following command:

$ aws iam put-role-policy \
    --role-name CustomerLogDistribution-RH \
    --policy-name Allow-CloudWatch-Writes \
    --policy-document file://cloudwatch-policy.json

Configure your Red Hat OpenShift Service on AWS cluster to forward logs to the CloudWatch log group by applying the following sample YAML list:
- <example_app1>: Add one or more applications. For a list of applications, see the table in "Determining what log groups to use".
- <example_group1>: Add one or more of the following groups: API, Authentication, Controller Manager, Scheduler, and Other.
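As a hypothetical sketch of such a YAML list for a CloudWatch destination (the field names below are assumptions, not the documented schema; the <example_app1> and <example_group1> placeholders correspond to the callouts above):

```yaml
# Hypothetical sketch; field names are assumptions, not the documented schema.
- cloudwatch:
    log_group_name: <your_log_group_name>
    role_arn: arn:aws:iam::<your_account_id>:role/CustomerLogDistribution-RH
  applications:
    - <example_app1>
  groups:
    - <example_group1>
```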
Enable the log forwarder to send logs from your Red Hat OpenShift Service on AWS cluster.
To enable control plane log forwarding on a new cluster, include the log forwarding configuration by running the following command:

$ rosa create cluster --log-fwd-config="<path_to_file>.yaml"

To enable control plane log forwarding on an existing cluster, include the log forwarding configuration by running the following command:

$ rosa create log-forwarder -c <cluster> --log-fwd-config="<path_to_file>.yaml"
2.5. Setting up the S3 bucket
If you have logs that need long-term storage or large-scale data analysis, set up an Amazon S3 bucket.
Prerequisites
- If you want to use encryption other than Amazon S3-managed keys (SSE-S3) for your S3 bucket, you must have created an IAM role and policy.
Procedure
Create the S3 bucket by running the following command:

$ aws s3api create-bucket \
    --bucket <your_s3_bucket_name> \
    --region <your_aws_region> \
    --create-bucket-configuration LocationConstraint=<cluster_aws_region>

Configure the policy for the S3 bucket by applying the following S3 bucket policy sample:
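As a sketch, an s3-bucket-policy.json file that allows the log distribution role to write objects into the bucket might look like the following; the principal and action shown are assumptions based on the standard S3 bucket policy model:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<your_account_id>:role/CustomerLogDistribution-RH"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::<your_s3_bucket_name>/*"
    }
  ]
}
```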
Attach the policy to the S3 bucket by running the following command:

$ aws s3api put-bucket-policy \
    --bucket <your_s3_bucket_name> \
    --policy file://s3-bucket-policy.json

Configure your Red Hat OpenShift Service on AWS cluster to forward logs to the S3 bucket by applying the following sample YAML list:
- <example_app1>: Add one or more applications. For a list of applications, see the table in "Determining what log groups to use".
- <example_group1>: Add one or more of the following groups: API, Authentication, Controller Manager, Scheduler, and Other.
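As a hypothetical sketch of such a YAML list for an S3 destination (the field names below are assumptions, not the documented schema; only the destination block differs from the CloudWatch case):

```yaml
# Hypothetical sketch; field names are assumptions, not the documented schema.
- s3:
    bucket_name: <your_s3_bucket_name>
    role_arn: arn:aws:iam::<your_account_id>:role/CustomerLogDistribution-RH
  applications:
    - <example_app1>
  groups:
    - <example_group1>
```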
Enable the log forwarder to send logs from your Red Hat OpenShift Service on AWS cluster.
To enable control plane log forwarding on a new cluster, include the log forwarding configuration by running the following command:

$ rosa create cluster --log-fwd-config="<path_to_file>.yaml"

To enable control plane log forwarding on an existing cluster, include the log forwarding configuration by running the following command:

$ rosa create log-forwarder -c <cluster> --log-fwd-config="<path_to_file>.yaml"
2.6. Managing control plane log forwarding
After you configure the Red Hat OpenShift Service on AWS cluster to use your selected log forwarder for control plane logs, see the following commands to run based on your specific needs. For all of these commands, you must provide the cluster ID or cluster name in the --cluster (-c) flag:
rosa create log-forwarder -c <cluster_name|cluster_id> - Configures your Red Hat OpenShift Service on AWS cluster to use the log forwarder.
rosa list log-forwarder -c <cluster_name|cluster_id> - Displays all of the log forwarder configurations for a Red Hat OpenShift Service on AWS cluster.
rosa describe log-forwarder -c <cluster_name|cluster_id> <log-fwd-id> - Provides detailed information about that specific log forwarder.
rosa edit log-forwarder -c <cluster_name|cluster_id> <log-fwd-id> - Enables you to make changes to the log forwarder. With the edit functionality, you can change the following log forwarder fields, depending on the type of configuration: groups, applications, and the S3 and CloudWatch configurations.
rosa delete log-forwarder -c <cluster_name|cluster_id> <log-fwd-id> - Deletes the log forwarder configuration, which stops your logs from being forwarded to your chosen destinations. Your logs are not automatically deleted. If you no longer want to store your logs in the S3 bucket or CloudWatch group, you can delete those specific logs. To change the following log forwarder fields, delete the log forwarder configuration and then re-create it with your changes: ID, cluster ID, and the type for S3 and CloudWatch.
Legal Notice
Copyright © 2025 Red Hat
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.