Security and compliance
Configuring security context constraints on AWS clusters
Abstract
Chapter 1. Adding additional constraints for IP-based AWS role assumption
Create an identity-based policy that denies requests from non-allowlisted IP addresses. Restricting role access can improve your AWS account security.
1.1. Creating an identity-based IAM policy
Create an Identity and Access Management (IAM) policy that denies access to all AWS actions if the request is made from an IP address not provided by Red Hat.
Prerequisites
- You have access to the AWS Management Console with the permissions required to create and modify IAM policies.
Procedure
- Sign in to the AWS Management Console using your AWS account credentials.
- Navigate to the IAM service.
- In the IAM console, select Policies from the left navigation menu.
- Click Create policy.
- Select the JSON tab to define the policy using JSON format.
- To get the IP addresses required for the JSON policy document, run the following command:

```shell
$ ocm get /api/clusters_mgmt/v1/trusted_ip_addresses
```

Note: These IP addresses are not permanent and can change. Regularly review the API output and update the JSON policy document.
- Copy and paste the following `policy_document.json` file into the editor:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": []
        },
        "Bool": {
          "aws:ViaAWSService": "false"
        }
      }
    }
  ]
}
```
- Copy and paste all of the IP addresses that you got in Step 6 into the `"aws:SourceIp": []` array in your `policy_document.json` file.
- Click Review and create.
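The copy-and-paste step above can also be scripted. The following Python sketch merges trusted IP addresses into the policy document; the helper name and the placeholder addresses are illustrative, so substitute the real output of the `ocm` command:

```python
import json

# Deny-all policy template from policy_document.json, with an empty
# allowlist that merge_trusted_ips() fills in.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "NotIpAddress": {"aws:SourceIp": []},
                "Bool": {"aws:ViaAWSService": "false"},
            },
        }
    ],
}

def merge_trusted_ips(policy: dict, trusted_ips: list) -> dict:
    """Insert the Red Hat trusted IP addresses into the allowlist."""
    condition = policy["Statement"][0]["Condition"]
    condition["NotIpAddress"]["aws:SourceIp"] = sorted(set(trusted_ips))
    return policy

# Placeholder addresses; use the real output of
# `ocm get /api/clusters_mgmt/v1/trusted_ip_addresses` instead.
merged = merge_trusted_ips(policy, ["192.0.2.10/32", "198.51.100.0/24"])
print(json.dumps(merged, indent=2))
```

Write the printed document to `policy_document.json` and continue with the console steps.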
- Provide a name and description for the policy, and review the details for accuracy.
- Click Create policy to save the policy.

Note: Set the `aws:ViaAWSService` condition key to `false` to ensure that subsequent calls succeed after your initial call. For example, if you do not set `aws:ViaAWSService` to `false` and run `aws ec2 describe-instances`, the follow-up calls that the AWS API server makes on your behalf, such as the calls that retrieve information about the Elastic Block Store (EBS) volumes attached to the EC2 instances, can fail. These calls fail because they originate from AWS IP addresses that are not included in the allowlist.
1.2. Attaching the identity-based IAM policy
After you create an Identity and Access Management (IAM) policy, attach it to the relevant IAM users, groups, or roles in your AWS account. The policy prevents IP-based role assumption for these entities.
Procedure
- Navigate to the IAM console in the AWS Management Console.
- Select the default IAM `ManagedOpenShift-Support-Role` role to attach the policy.

Note: You can change the default IAM `ManagedOpenShift-Support-Role` role. For more information about roles, see Red Hat support access.

- In the Permissions tab, select Add Permissions or Create inline policy from the Add Permissions drop-down list.
- Search for the policy you created earlier by:
  - Entering the policy name.
  - Filtering by the appropriate category.
- Select the policy and click Attach policy.
Important: Keep the allowlisted IPs up to date. Outdated IPs can block Red Hat site reliability engineering (SRE) from accessing your account and affect your Service Level Agreement (SLA).
Chapter 2. Forwarding control plane logs
Red Hat OpenShift Service on AWS provides a control plane log forwarder, a separate system that runs outside your cluster. You can use the control plane log forwarder to send your logs to either an Amazon CloudWatch log group or an Amazon S3 bucket.
The Red Hat OpenShift Service on AWS control plane log forwarder is a managed system and it does not use resources reserved for workloads on your worker nodes.
2.1. Determining what log groups to use
When you forward control plane logs to Amazon CloudWatch or S3, you must decide what log groups you want to use. Because of the existing AWS pricing for the respective services, you can expect additional costs for forwarding and storing your logs in S3 and CloudWatch. When you determine what log groups to use, consider these costs along with other factors, such as your log retention requirements.
Each log group gives you access to a different set of applications, and these applications can change depending on what you enable and disable for your logs.
When you forward log groups, you must specify a group or application. When you specify a group, the log forwarder collects all the applications in that group. Instead of selecting a group, you can select individual applications. When you set up your log forwarder, you must specify at least one group or application, but you do not need to specify both.
The following table lists available log groups:
| Log group name | Benefit of that log group | Example applications available for that log group |
|---|---|---|
| api | Records every request made to the cluster. Supports security by detecting unauthorized access attempts. | |
| authentication | Tracks login attempts and requests for tokens. Supports security by recording authenticated user information. | |
| controller manager | Monitors the controllers that manage the state of your clusters. Clarifies differences among the different cluster states. | |
| scheduler | Records the placement of each pod on every node. Shows why pods are in a given state. | |
| not applicable | These applications do not belong to a defined log group. To forward their logs, set these applications in the `applications` field of your log forwarder configuration. | |
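As a sketch of the selection rule described above (specify at least one group or application), the following Python helper validates a selection against the group names from the table. The function name and the sample application name are hypothetical:

```python
# Group names taken from the log group table above.
ALLOWED_GROUPS = {"api", "authentication", "controller manager", "scheduler"}

def validate_selection(groups=(), applications=()):
    """Validate a log forwarder selection: at least one group or
    application is required, and group names must come from the table."""
    if not groups and not applications:
        raise ValueError("specify at least one group or application")
    unknown = set(groups) - ALLOWED_GROUPS
    if unknown:
        raise ValueError(f"unknown log groups: {sorted(unknown)}")
    return True

validate_selection(groups=["api"])                 # valid: one group only
validate_selection(applications=["example-app"])   # valid: hypothetical app name
```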
2.2. Creating an IAM role and policy
When you forward your logs to an Amazon CloudWatch group or S3 bucket, those locations exist outside your control plane. You must create an Identity and Access Management (IAM) role and policy so that your log forwarder has the right permissions and capabilities to send these logs to your chosen destination, CloudWatch or S3.
To use a CloudWatch group, you must create an IAM role and policy. To use an S3 bucket, an IAM role and policy are optional. However, if you do not create an IAM role and policy for the S3 bucket, encryption for the bucket is limited to Amazon S3 managed keys (SSE-S3).
Prerequisites
- You have installed and configured the latest ROSA command-line interface (CLI) (`rosa`) on your installation host.
- You have installed and configured the latest Amazon Web Services (AWS) command-line interface (CLI) on your installation host.
Procedure
- To enable the log forwarder delivery capability, prepare the IAM policy by creating an `assume-role-policy.json` file. Apply the following IAM policy sample:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::859037107838:role/ROSA-CentralLogDistributionRole-241c1a86"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

- To enable the log forwarder distribution capability, create an IAM role whose name must include `CustomerLogDistribution` by running the following command:

```shell
$ aws iam create-role \
  --role-name CustomerLogDistribution-RH \
  --assume-role-policy-document file://assume-role-policy.json
```
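Before running `aws iam create-role`, it can be worth sanity-checking `assume-role-policy.json`, because a malformed trust policy makes the command fail. A minimal Python sketch; the structural checks are illustrative, not exhaustive:

```python
import json

# Trust policy from the assume-role-policy.json sample above.
trust_policy = """
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::859037107838:role/ROSA-CentralLogDistributionRole-241c1a86"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
"""

doc = json.loads(trust_policy)  # fails loudly on malformed JSON
statement = doc["Statement"][0]
assert statement["Action"] == "sts:AssumeRole"
assert statement["Principal"]["AWS"].startswith("arn:aws:iam::")
print("trust policy looks structurally valid")
```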
Next steps
After you create an IAM role and policy, decide whether to send your control plane logs to a CloudWatch log group, an S3 bucket, or both. The following summary of CloudWatch and S3 can help you decide:
- Use CloudWatch for logs requiring immediate action or organization.
- Use S3 for logs requiring long-term storage or large-scale data analysis.
2.3. Setting up the CloudWatch log group
If you have logs that require immediate action or organization, set up an Amazon CloudWatch log group.
Prerequisites
- You have created an IAM role and policy.
Procedure
- Create the CloudWatch log group by running the following command:

```shell
$ aws logs create-log-group --log-group-name <your_log_group_name>
```

- Create a `cloudwatch-policy.json` file that allows the log forwarder to write to the CloudWatch log group by applying the following JSON sample:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CreatePutLogs",
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "<your_log_group_arn>:*"
    },
    {
      "Sid": "DescribeLogs",
      "Effect": "Allow",
      "Action": [
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams"
      ],
      "Resource": "*"
    }
  ]
}
```

- Attach the policy to the CloudWatch role by running the following command:

```shell
$ aws iam put-role-policy \
  --role-name CustomerLogDistribution-RH \
  --policy-name Allow-CloudWatch-Writes \
  --policy-document file://cloudwatch-policy.json
```

- Configure your Red Hat OpenShift Service on AWS cluster to forward logs to the CloudWatch log group by applying the following sample YAML list. Specify an application, a group, or both:
```yaml
cloudwatch:
  cloudwatch_log_role_arn: "arn:aws:iam::123456789012:role/RosaCloudWatch"
  cloudwatch_log_group_name: "rosa-logs"
  applications:
    - "<example_app1>"
  groups:
    - "<example_group1>"
```

where:

- `<example_app1>`: Add one or more applications. For a list of applications, see the table in "Determining what log groups to use".
- `<example_group1>`: Add one or more of the following groups: `api`, `authentication`, `controller manager`, `scheduler`.
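The `cloudwatch-policy.json` document from the earlier step can be templated per log group ARN. A Python sketch, assuming a hypothetical `build_cloudwatch_policy` helper; the actions and the `:*` resource suffix mirror the sample policy above:

```python
import json

def build_cloudwatch_policy(log_group_arn: str) -> str:
    """Render the cloudwatch-policy.json document for one log group."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "CreatePutLogs",
                "Effect": "Allow",
                "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
                # Log streams live under the group, hence the ":*" suffix.
                "Resource": f"{log_group_arn}:*",
            },
            {
                "Sid": "DescribeLogs",
                "Effect": "Allow",
                "Action": ["logs:DescribeLogGroups", "logs:DescribeLogStreams"],
                "Resource": "*",
            },
        ],
    }
    return json.dumps(policy, indent=2)

# Placeholder ARN; substitute your log group's real ARN.
rendered = build_cloudwatch_policy(
    "arn:aws:logs:us-east-1:123456789012:log-group:rosa-logs"
)
print(rendered)
```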
- Enable the log forwarder on your Red Hat OpenShift Service on AWS cluster.
- To enable control plane log forwarding on a new cluster, include the log forwarding configuration by running the following command:

```shell
$ rosa create cluster --log-fwd-config="<path_to_file>.yaml"
```

- To enable control plane log forwarding on an existing cluster, include the log forwarding configuration by running the following command:

```shell
$ rosa create log-forwarder -c <cluster> --log-fwd-config="<path_to_file>.yaml" -o yaml
```
- Optional: To forward logs to the CloudWatch log group, apply the following example YAML:

```yaml
cloudwatch:
  cloudwatch_log_role_arn: "cloudwatch-log-role-arn"
  cloudwatch_log_group_name: "cloudwatch-group-name"
  applications:
    - "<example_app1>"
  groups:
    - "<example_group1>"
```
2.4. Setting up the S3 bucket
If you have logs that need long-term storage or large-scale data analysis, set up an Amazon S3 bucket.
Prerequisites
- To avoid limiting your S3 bucket encryption to Amazon S3 managed keys (SSE-S3), you must have created an IAM role and policy.
Procedure
- Create the S3 bucket by running the following command:

```shell
$ aws s3api create-bucket \
  --bucket <your_s3_bucket_name> \
  --region <your_aws_region> \
  --create-bucket-configuration LocationConstraint=<cluster_aws_region>
```

- Configure the policy for the S3 bucket by creating an `s3-bucket-policy.json` file from the following S3 bucket policy sample:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCentralLogDistributionWrite",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::859037107838:role/ROSA-CentralLogDistributionRole-241c1a86"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::<your_s3_bucket_name>/*",
      "Condition": {
        "StringEquals": {
          "s3:x-amz-acl": "bucket-owner-full-control"
        }
      }
    }
  ]
}
```

- Attach the policy to the S3 bucket by running the following command:

```shell
$ aws s3api put-bucket-policy \
  --bucket <your_s3_bucket_name> \
  --policy file://s3-bucket-policy.json
```

- Configure your Red Hat OpenShift Service on AWS cluster to forward logs to the S3 bucket by applying the following sample YAML list. Specify an application, a group, or both:
```yaml
s3:
  s3_config_bucket_name: "my-log-bucket"
  s3_config_bucket_prefix: "my-bucket-prefix"
  applications:
    - "<example_app1>"
  groups:
    - "<example_group1>"
```

where:

- `<example_app1>`: Add one or more applications. For a list of applications, see the table in "Determining what log groups to use".
- `<example_group1>`: Add one or more of the following groups: `api`, `authentication`, `controller manager`, `scheduler`.
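The S3 bucket policy from the earlier step can likewise be rendered for a given bucket name. A Python sketch with a hypothetical `build_s3_bucket_policy` helper; the principal ARN, action, and condition are taken from the sample policy:

```python
import json

# Red Hat central log distribution role from the sample bucket policy.
RED_HAT_DISTRIBUTION_ROLE = (
    "arn:aws:iam::859037107838:role/ROSA-CentralLogDistributionRole-241c1a86"
)

def build_s3_bucket_policy(bucket_name: str) -> dict:
    """Render the s3-bucket-policy.json document for one bucket."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowCentralLogDistributionWrite",
                "Effect": "Allow",
                "Principal": {"AWS": RED_HAT_DISTRIBUTION_ROLE},
                "Action": "s3:PutObject",
                "Resource": f"arn:aws:s3:::{bucket_name}/*",
                "Condition": {
                    "StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
                },
            }
        ],
    }

# Placeholder bucket name; substitute your bucket's real name.
policy = build_s3_bucket_policy("my-log-bucket")
print(json.dumps(policy, indent=2))
```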
- Enable the log forwarder on your Red Hat OpenShift Service on AWS cluster.
- To enable control plane log forwarding on a new cluster, include the log forwarding configuration by running the following command:

```shell
$ rosa create cluster --log-fwd-config="<path_to_file>.yaml"
```

- To enable control plane log forwarding on an existing cluster, include the log forwarding configuration by running the following command:

```shell
$ rosa create log-forwarder -c <cluster> --log-fwd-config="<path_to_file>.yaml" -o yaml
```
- Optional: To forward logs to the S3 bucket, apply the following example YAML:

```yaml
s3:
  s3_config_bucket_name: "s3-bucket-name"
  s3_config_bucket_prefix: "s3-bucket-prefix"
  groups:
    - "<example_group1>"
```
2.5. Managing control plane log forwarding
After you configure your Red Hat OpenShift Service on AWS clusters to use your selected log forwarder for control plane logs, use the following commands based on your specific needs. For all of these commands, you must provide the cluster ID or cluster name in the `--cluster` flag:

- `rosa create log-forwarder -c <cluster_name|cluster_id>`: Configures your Red Hat OpenShift Service on AWS cluster to use the log forwarder.
- `rosa list log-forwarder -c <cluster_name|cluster_id>`: Displays all of the log forwarder configurations for a Red Hat OpenShift Service on AWS cluster.
- `rosa describe log-forwarder -c <cluster_name|cluster_id> <log_fwd_id>`: Provides additional details for a specific log forwarder.
- `rosa edit log-forwarder -c <cluster_name|cluster_id> <log_fwd_id>`: Changes the following log forwarder fields: groups, applications, and the S3 and CloudWatch configurations.
- `rosa delete log-forwarder -c <cluster_name|cluster_id> <log_fwd_id>`: Deletes the log forwarder configuration. Logs are no longer forwarded to your chosen destinations, but existing logs are not automatically deleted. If you no longer want to store your logs in the S3 bucket or CloudWatch group, delete those logs yourself.

You cannot edit the following log forwarder fields: the ID, the cluster ID, and the S3 or CloudWatch destination type. To change these fields, delete the log forwarder and re-create it with the updated values.
Legal Notice
Copyright © Red Hat
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of the OpenJS Foundation.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.