Deploying RHEL 9 on Amazon Web Services
Obtaining RHEL system images and creating RHEL instances on AWS
Providing feedback on Red Hat documentation
We appreciate your feedback on our documentation. Let us know how we can improve it.
Submitting feedback through Jira (account required)
- Log in to the Jira website.
- Click Create in the top navigation bar.
- Enter a descriptive title in the Summary field.
- Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation.
- Click Create at the bottom of the dialogue.
Chapter 1. Introducing RHEL on public cloud platforms
Public cloud platforms offer computing resources as a service. Instead of using on-premise hardware, you can run your IT workloads, including Red Hat Enterprise Linux (RHEL) systems, as public cloud instances.
1.1. Benefits of using RHEL in a public cloud
RHEL as a cloud instance located on a public cloud platform has the following benefits over RHEL on-premises physical systems or virtual machines (VMs):
Flexible and fine-grained allocation of resources
A cloud instance of RHEL runs as a VM on a cloud platform, which typically means a cluster of remote servers maintained by the provider of the cloud service. Therefore, allocating hardware resources to the instance, such as a specific type of CPU or storage, happens on the software level and is easily customizable.
In comparison to a local RHEL system, you are also not limited by the capabilities of your physical host. Instead, you can choose from a variety of features, based on the selection offered by the cloud provider.
Space and cost efficiency
You do not need to own any on-premises servers to host your cloud workloads. This avoids the space, power, and maintenance requirements associated with physical hardware.
Instead, on public cloud platforms, you pay the cloud provider directly for using a cloud instance. The cost is typically based on the hardware allocated to the instance and the time you spend using it. Therefore, you can optimize your costs based on your requirements.
Software-controlled configurations
The entire configuration of a cloud instance is saved as data on the cloud platform, and is controlled by software. Therefore, you can easily create, remove, clone, or migrate the instance. A cloud instance is also operated remotely in a cloud provider console and is connected to remote storage by default.
In addition, you can back up the current state of a cloud instance as a snapshot at any time. Afterwards, you can load the snapshot to restore the instance to the saved state.
Separation from the host and software compatibility
Similarly to a local VM, the RHEL guest operating system on a cloud instance runs on a virtualized kernel. This kernel is separate from the host operating system and from the client system that you use to connect to the instance.
Therefore, any operating system can be installed on the cloud instance. This means that on a RHEL public cloud instance, you can run RHEL-specific applications that cannot be used on your local operating system.
In addition, even if the operating system of the instance becomes unstable or is compromised, your client system is not affected in any way.
1.2. Public cloud use cases for RHEL
Deploying on a public cloud provides many benefits, but might not be the most efficient solution in every scenario. If you are evaluating whether to migrate your RHEL deployments to the public cloud, consider whether your use case will benefit from the advantages of the public cloud.
Beneficial use cases
Deploying public cloud instances is very effective for flexibly increasing and decreasing the active computing power of your deployments, also known as scaling up and scaling down. Therefore, using RHEL in a public cloud is recommended in the following scenarios:
- Clusters with high peak workloads and low general performance requirements. Scaling up and down based on your demands can be highly efficient in terms of resource costs.
- Quickly setting up or expanding your clusters. This avoids high upfront costs of setting up local servers.
- Cloud instances are not affected by what happens in your local environment. Therefore, you can use them for backup and disaster recovery.
Potentially problematic use cases
- You are running an existing environment that cannot be adjusted. Customizing a cloud instance to fit the specific needs of an existing deployment may not be cost-effective in comparison with your current host platform.
- You are operating with a hard limit on your budget. Maintaining your deployment in a local data center typically provides less flexibility but more control over the maximum resource costs than the public cloud does.
1.3. Frequent concerns when migrating to a public cloud
Moving your RHEL workloads from a local environment to a public cloud platform might raise concerns about the changes involved. The following are the most commonly asked questions.
Will my RHEL work differently as a cloud instance than as a local virtual machine?
In most respects, RHEL instances on a public cloud platform work the same as RHEL virtual machines on a local host, such as an on-premises server. Notable exceptions include:
- Instead of private orchestration interfaces, public cloud instances use provider-specific console interfaces for managing your cloud resources.
- Certain features, such as nested virtualization, may not work correctly. If a specific feature is critical for your deployment, check the feature’s compatibility in advance with your chosen public cloud provider.
Will my data stay safe in a public cloud as opposed to a local server?
The data in your RHEL cloud instances is in your ownership, and your public cloud provider does not have any access to it. In addition, major cloud providers support data encryption in transit, which improves the security of data when migrating your virtual machines to the public cloud.
The general security of your RHEL public cloud instances is managed as follows:
- Your public cloud provider is responsible for the security of the cloud hypervisor.
- Red Hat provides the security features of the RHEL guest operating systems in your instances.
- You manage the specific security settings and practices in your cloud infrastructure.
What effect does my geographic region have on the functionality of RHEL public cloud instances?
You can use RHEL instances on a public cloud platform regardless of your geographical location. Therefore, you can run your instances in the same region as your on-premises server.
However, hosting your instances in a physically distant region might cause high latency when operating them. In addition, depending on the public cloud provider, certain regions may provide additional features or be more cost-efficient. Before creating your RHEL instances, review the properties of the hosting regions available for your chosen cloud provider.
1.4. Obtaining RHEL for public cloud deployments
To deploy a RHEL system in a public cloud environment, you need to:

- Select the optimal cloud provider for your use case, based on your requirements and the current offerings on the market. The cloud providers currently certified for running RHEL instances are:
  - Amazon Web Services (AWS)
  - Google Cloud Platform (GCP)

  Note: This document specifically describes deploying RHEL on AWS.

- Create a RHEL cloud instance on your chosen cloud platform. For more information, see Methods for creating RHEL cloud instances.
- To keep your RHEL deployment up-to-date, use Red Hat Update Infrastructure (RHUI).
1.5. Methods for creating RHEL cloud instances
To deploy a RHEL instance on a public cloud platform, you can use one of the following methods:
- Create a system image of RHEL and import it to the cloud platform.
- Purchase a RHEL instance directly from the cloud provider marketplace.
For detailed instructions on using various methods to deploy RHEL instances on Amazon Web Services, see the following chapters in this document.
Chapter 2. Creating and uploading AWS AMI images
To use your customized RHEL system image in the Amazon Web Services (AWS) cloud, create the system image with Image Builder by using the respective output type, configure your system for uploading the image, and upload the image to your AWS account.
2.1. Preparing to manually upload AWS AMI images
Before uploading an AWS AMI image, you must configure a system for uploading the images.
Prerequisites
- You must have an Access Key ID configured in the AWS IAM account manager.
- You must have a writable S3 bucket prepared. See Creating an S3 bucket.
Procedure
Install Python 3 and the pip tool:

    # dnf install python3 python3-pip

Install the AWS command-line tools with pip:

    # pip3 install awscli

Set your profile. The terminal prompts you to provide your credentials, region, and output format:

    $ aws configure
    AWS Access Key ID [None]:
    AWS Secret Access Key [None]:
    Default region name [None]:
    Default output format [None]:

Define a name for your bucket and create a bucket:

    $ BUCKET=bucketname
    $ aws s3 mb s3://$BUCKET

Replace bucketname with the actual bucket name. It must be a globally unique name. As a result, your bucket is created.

To grant permission to access the S3 bucket, create a vmimport S3 role in AWS Identity and Access Management (IAM), if you have not already done so in the past:

Create a trust-policy.json file with the trust policy configuration, in the JSON format. For example:
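A representative trust policy, matching the standard vmimport example in the AWS VM Import/Export documentation:

    {
       "Version": "2012-10-17",
       "Statement": [
          {
             "Effect": "Allow",
             "Principal": { "Service": "vmie.amazonaws.com" },
             "Action": "sts:AssumeRole",
             "Condition": {
                "StringEquals": {
                   "sts:Externalid": "vmimport"
                }
             }
          }
       ]
    }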
Create a role-policy.json file with the role policy configuration, in the JSON format. For example:
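A representative role policy, also based on the AWS VM Import/Export documentation; replace bucketname with the name of your bucket:

    {
       "Version": "2012-10-17",
       "Statement": [
          {
             "Effect": "Allow",
             "Action": [
                "s3:GetBucketLocation",
                "s3:GetObject",
                "s3:ListBucket"
             ],
             "Resource": [
                "arn:aws:s3:::bucketname",
                "arn:aws:s3:::bucketname/*"
             ]
          },
          {
             "Effect": "Allow",
             "Action": [
                "ec2:ModifySnapshotAttribute",
                "ec2:CopySnapshot",
                "ec2:RegisterImage",
                "ec2:Describe*"
             ],
             "Resource": "*"
          }
       ]
    }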
Create a role for your Amazon Web Services account, by using the trust-policy.json file:

    $ aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json

Embed an inline policy document, by using the role-policy.json file:

    $ aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file://role-policy.json
2.2. Manually uploading an AMI image to AWS by using the CLI
You can use RHEL image builder to build AMI images and manually upload them directly to the Amazon Web Services (AWS) cloud by using the CLI.
Prerequisites
- You have an Access Key ID configured in the AWS IAM account manager.
- You must have a writable S3 bucket prepared. See Creating an S3 bucket.
- You have a defined blueprint.
Procedure
Using a text editor, create a configuration file with the following content:
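A sketch of the expected file, assuming the standard AWS upload profile keys that the next step references:

    provider = "aws"

    [settings]
    accessKeyID = "AWS_ACCESS_KEY_ID"
    secretAccessKey = "AWS_SECRET_ACCESS_KEY"
    bucket = "AWS_BUCKET"
    region = "AWS_REGION"
    key = "IMAGE_KEY"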
Replace the values in the accessKeyID, secretAccessKey, bucket, and region fields with your credentials. The IMAGE_KEY value is the name of your VM image to be uploaded to EC2.
- Save the file as CONFIGURATION-FILE.toml and close the text editor.
Start the compose to upload it to AWS:
    # composer-cli compose start blueprint-name image-type image-key configuration-file.toml

Replace:
- blueprint-name with the name of the blueprint you created.
- image-type with the ami image type.
- image-key with the name of your VM image to be uploaded to EC2.
- configuration-file.toml with the name of the configuration file of the cloud provider.
Note: You must have the correct AWS Identity and Access Management (IAM) settings for the bucket to which you are going to send your customized image. You must set up a policy for your bucket before you can upload images to it.
Check the status of the image build:
    # composer-cli compose status

After the image upload process is complete, you can see the "FINISHED" status.
Verification
To confirm that the image upload was successful:
- Access EC2 in the menu and select the correct region in the AWS console. The image must have the available status, to indicate that it was successfully uploaded.
- On the dashboard, select your image and click Launch.
2.3. Creating and automatically uploading images to the AWS Cloud AMI
You can create a .raw image by using RHEL image builder and check the Upload to AWS checkbox to automatically push the output image directly to the Amazon AWS Cloud AMI service provider.
Prerequisites
- You must have root or wheel group user access to the system.
- You have opened the RHEL image builder interface of the RHEL web console in a browser.
- You have created a blueprint. See Creating a blueprint in the web console interface.
- You must have an Access Key ID configured in the AWS IAM account manager.
- You must have a writable S3 bucket prepared.
Procedure
- In the RHEL image builder dashboard, click the blueprint name that you previously created.
- Select the Images tab.
Click Create image to create your customized image.
The Create Image window opens.
- From the Type drop-down menu list, select Amazon Machine Image Disk (.raw).
- Check the Upload to AWS checkbox to upload your image to the AWS Cloud and click Next.
To authenticate your access to AWS, type your AWS access key ID and AWS secret access key in the corresponding fields. Click Next.

Note: You can view your AWS secret access key only when you create a new Access Key ID. If you do not know your Secret Key, generate a new Access Key ID.
- Type the name of the image in the Image name field, type the Amazon bucket name in the Amazon S3 bucket name field, and fill in the AWS region field for the bucket to which you are going to add your customized image. Click Next.
- Review the information and click Finish.
Optionally, click Back to modify any incorrect detail.
Note: You must have the correct IAM settings for the bucket to which you are going to send your customized image. This procedure uses IAM Import and Export, so you must set up a policy for your bucket before you can upload images to it. For more information, see Required Permissions for IAM Users.
A pop-up on the upper right informs you of the saving progress. It also informs you that the image creation has been initiated, along with the progress of the creation and the subsequent upload to the AWS Cloud.
After the process is complete, you can see the Image build complete status.
In a browser, access Services→EC2.
- On the AWS console dashboard menu, choose the correct region. The image must have the Available status, to indicate that it is uploaded.
- On the AWS dashboard, select your image and click Launch.
- A new window opens. Choose an instance type according to the resources you need to start your image. Click Review and Launch.
- Review your instance start details. You can edit each section if you need to make any changes. Click Launch.
Before you start the instance, select a public key to access it.
You can either use the key pair you already have or you can create a new key pair.
Follow the next steps to create a new key pair in EC2 and attach it to the new instance.
- From the drop-down menu list, select Create a new key pair.
- Enter a name for the new key pair. It generates a new key pair.
- Click Download Key Pair to save the new key pair on your local system.
Then, you can click Launch Instances to start your instance.
You can check the status of the instance, which displays as Initializing.
- After the instance status is running, the Connect button becomes available.
Click Connect. A window appears with instructions on how to connect by using SSH.
- Select A standalone SSH client as the preferred connection method and open a terminal.
In the location where you store your private key, ensure that your key is not publicly viewable for SSH to work. To do so, run the command:
    $ chmod 400 <your-instance-name.pem>

Connect to your instance by using its Public DNS:
    $ ssh -i <your-instance-name.pem> ec2-user@<your-instance-IP-address>

Type yes to confirm that you want to continue connecting. As a result, you are connected to your instance over SSH.
Verification
- Check if you are able to perform any action while connected to your instance by using SSH.
Chapter 3. Deploying a Red Hat Enterprise Linux image as an EC2 instance on Amazon Web Services
To set up a High Availability (HA) deployment of RHEL on Amazon Web Services (AWS), you can deploy EC2 instances of RHEL to a cluster on AWS.
While you can create a custom VM from an ISO image, Red Hat recommends that you use the Red Hat Image Builder product to create customized images for use on specific cloud providers. With Image Builder, you can create and upload an Amazon Machine Image (AMI) in the ami format. See Composing a Customized RHEL System Image for more information.
For a list of Red Hat products that you can use securely on AWS, see Red Hat on Amazon Web Services.
Prerequisites
- Sign up for a Red Hat Customer Portal account.
- Sign up for AWS and set up your AWS resources. See Setting Up with Amazon EC2 for more information.
3.1. Red Hat Enterprise Linux image options on AWS
The following table lists image choices and notes the differences in the image options.
| Image option | Subscriptions | Sample scenario | Considerations |
|---|---|---|---|
| Deploy a Red Hat Gold Image. | Use your existing Red Hat subscriptions. | Select a Red Hat Gold Image on AWS. For details on Gold Images and how to access them on AWS, see the Red Hat Cloud Access Reference Guide. | The subscription includes the Red Hat product cost; you pay Amazon for all other instance costs. Red Hat provides support directly for Cloud Access images. |
| Deploy a custom image that you move to AWS. | Use your existing Red Hat subscriptions. | Upload your custom image, and attach your subscriptions. | The subscription includes the Red Hat product cost; you pay Amazon for all other instance costs. Red Hat provides support directly for custom RHEL images. |
| Deploy an existing Amazon image that includes RHEL. | The AWS EC2 images include a Red Hat product. | Select a RHEL image when you launch an instance on the AWS Management Console, or choose an image from the AWS Marketplace. | You pay Amazon hourly on a pay-as-you-go model. Such images are called "on-demand" images. Amazon provides support for on-demand images. Red Hat provides updates to the images. AWS makes the updates available through the Red Hat Update Infrastructure (RHUI). |
To convert an on-demand, license-included EC2 instance to a bring-your-own-license (BYOL) EC2 instance of RHEL, see Convert a license type for Linux in License Manager.
You can create a custom image for AWS by using Red Hat Image Builder. See Composing a Customized RHEL System Image for more information.
3.2. Understanding base images
To create a base VM from an ISO image, you can use preconfigured base images and their configuration settings.
3.2.1. Using a custom base image
To manually configure a virtual machine (VM), first create a base (starter) VM image. Then, you can modify configuration settings and add the packages the VM requires to operate on the cloud. You can make additional configuration changes for your specific application after you upload the image.
3.2.2. Virtual machine configuration settings
Cloud VMs must have the following configuration settings.
| Setting | Recommendation |
|---|---|
| ssh | ssh must be enabled to provide remote access to your VMs. |
| dhcp | The primary virtual adapter should be configured for dhcp. |
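For illustration, a minimal sketch of applying these two settings on a RHEL guest; the connection profile name eth0 is a placeholder that varies by image:

    # systemctl enable --now sshd                     # enable remote access over SSH
    # nmcli connection modify eth0 ipv4.method auto   # configure the primary adapter for DHCP
    # nmcli connection up eth0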
3.3. Creating a base VM from an ISO image
To create a RHEL 9 base image from an ISO image, enable your host machine for virtualization and create a RHEL virtual machine (VM).
Prerequisites
- Virtualization is enabled on your host machine.
- You have downloaded the latest Red Hat Enterprise Linux ISO image from the Red Hat Customer Portal and moved the image to /var/lib/libvirt/images.
3.3.1. Creating a VM from the RHEL ISO image
Procedure
- Ensure that you have enabled your host machine for virtualization. See Enabling virtualization in RHEL 9 for information and procedures.
Create and start a basic Red Hat Enterprise Linux VM. For instructions, see Creating virtual machines.
If you use the command line to create your VM, ensure that you set the default memory and CPUs to the capacity you want for the VM. Set your virtual network interface to virtio.
For example, the following command creates a kvmtest VM by using the /home/username/Downloads/rhel9.iso image:

    # virt-install \
        --name kvmtest --memory 2048 --vcpus 2 \
        --cdrom /home/username/Downloads/rhel9.iso,bus=virtio \
        --os-variant=rhel9.0

If you use the web console to create your VM, follow the procedure in Creating virtual machines by using the web console, with these caveats:
- Do not check Immediately Start VM.
- Change your Memory size to your preferred settings.
- Before you start the installation, ensure that you have changed Model under Virtual Network Interface Settings to virtio and changed your vCPUs to the capacity settings you want for the VM.
3.3.2. Completing the RHEL installation
To finish the installation of a RHEL system that you want to deploy on Amazon Web Services (AWS), customize the Installation Summary view, begin the installation, and enable root access once the VM launches.
Procedure
- Choose the language you want to use during the installation process.
On the Installation Summary view:
- Click Software Selection and check Minimal Install.
- Click Done.
Click Installation Destination and check Custom under Storage Configuration.
- Verify at least 500 MB for /boot. You can use the remaining space for root /.
- Standard partitions are recommended, but you can use Logical Volume Manager (LVM).
- You can use xfs, ext4, or ext3 for the file system.
- Click Done when you are finished with changes.
- Click Begin Installation.
- Set a Root Password. Create other users as applicable.
- Reboot the VM and log in as root once the installation completes. Configure the image.
Register the VM and enable the Red Hat Enterprise Linux 9 repository.
    # subscription-manager register

Ensure that the cloud-init package is installed and enabled:

    # dnf install cloud-init
    # systemctl enable --now cloud-init.service
Important: This step is only for VMs you intend to upload to AWS.
For AMD64 or Intel 64 (x86_64) VMs, install the nvme, xen-netfront, and xen-blkfront drivers:

    # dracut -f --add-drivers "nvme xen-netfront xen-blkfront"

For ARM 64 (aarch64) VMs, install the nvme driver:

    # dracut -f --add-drivers "nvme"

Including these drivers removes the possibility of a dracut time-out.
Alternatively, you can add the drivers to /etc/dracut.conf.d/ and then enter dracut -f to overwrite the existing initramfs file.
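For example, a sketch of such a drop-in file; the file name is hypothetical, and the leading and trailing spaces inside the quotes are required by the add_drivers syntax:

    # /etc/dracut.conf.d/aws-drivers.conf (hypothetical file name)
    # Force-include the AWS storage and network drivers in the initramfs.
    add_drivers+=" nvme xen-netfront xen-blkfront "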
- Power off the VM.
3.4. Uploading the Red Hat Enterprise Linux image to AWS
To be able to run a RHEL instance on Amazon Web Services (AWS), you must first upload your RHEL image to AWS.
3.4.1. Installing the AWS CLI
Many of the procedures required to manage HA clusters in AWS include using the AWS CLI.
Prerequisites
- You have created an AWS Access Key ID and an AWS Secret Access Key, and have access to them. For instructions and details, see Quickly Configuring the AWS CLI.
Procedure
Install the AWS command line tools by using the dnf command:

    # dnf install awscli

Use the aws --version command to verify that you installed the AWS CLI:

    $ aws --version
    aws-cli/1.19.77 Python/3.6.15 Linux/5.14.16-201.fc34.x86_64 botocore/1.20.77

Configure the AWS command line client according to your AWS access details:

    $ aws configure
    AWS Access Key ID [None]:
    AWS Secret Access Key [None]:
    Default region name [None]:
    Default output format [None]:
3.4.2. Creating an S3 bucket
Importing to AWS requires an Amazon S3 bucket. An Amazon S3 bucket is an Amazon resource where you store objects. As part of the process for uploading your image, you need to create an S3 bucket and then move your image to the bucket.
Procedure
- Launch the Amazon S3 Console.
- Click Create Bucket. The Create Bucket dialog appears.
In the Name and region view:
- Enter a Bucket name.
- Enter a Region.
- Click Next.
- In the Configure options view, select the desired options and click Next.
- In the Set permissions view, change or accept the default options and click Next.
- Review your bucket configuration.
Click Create bucket.
Note: Alternatively, you can use the AWS CLI to create a bucket. For example, the aws s3 mb s3://my-new-bucket command creates an S3 bucket named my-new-bucket. See the AWS CLI Command Reference for more information about the mb command.
3.4.3. Creating the vmimport role
To be able to import a RHEL virtual machine (VM) to Amazon Web Services (AWS) by using the VM Import service, you need to create the vmimport role.
For more information, see Importing a VM as an image using VM Import/Export in the Amazon documentation.
Procedure
Create a file named trust-policy.json and include the following policy. Save the file on your system and note its location. For example:
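A representative trust policy, matching the standard vmimport example in the AWS VM Import/Export documentation:

    {
       "Version": "2012-10-17",
       "Statement": [
          {
             "Effect": "Allow",
             "Principal": { "Service": "vmie.amazonaws.com" },
             "Action": "sts:AssumeRole",
             "Condition": {
                "StringEquals": {
                   "sts:Externalid": "vmimport"
                }
             }
          }
       ]
    }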
Use the create-role command to create the vmimport role. Specify the full path to the location of the trust-policy.json file. Prefix file:// to the path. For example:

    $ aws iam create-role --role-name vmimport --assume-role-policy-document file:///home/sample/ImportService/trust-policy.json

Create a file named role-policy.json and include the following policy. Replace s3-bucket-name with the name of your S3 bucket. For example:
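A representative role policy, based on the same AWS documentation, with s3-bucket-name as the placeholder:

    {
       "Version": "2012-10-17",
       "Statement": [
          {
             "Effect": "Allow",
             "Action": [
                "s3:GetBucketLocation",
                "s3:GetObject",
                "s3:ListBucket"
             ],
             "Resource": [
                "arn:aws:s3:::s3-bucket-name",
                "arn:aws:s3:::s3-bucket-name/*"
             ]
          },
          {
             "Effect": "Allow",
             "Action": [
                "ec2:ModifySnapshotAttribute",
                "ec2:CopySnapshot",
                "ec2:RegisterImage",
                "ec2:Describe*"
             ],
             "Resource": "*"
          }
       ]
    }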
Use the put-role-policy command to attach the policy to the role you created. Specify the full path of the role-policy.json file. For example:

    $ aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file:///home/sample/ImportService/role-policy.json
3.4.4. Converting and pushing your image to S3
By using the qemu-img command, you can convert your image, so that you can push it to S3. The samples are representative; they convert an image formatted in the qcow2 file format to raw format. Amazon accepts images in OVA, VHD, VHDX, VMDK, and raw formats. See How VM Import/Export Works for more information about image formats that Amazon accepts.
Procedure
Run the qemu-img command to convert your image. For example:

    # qemu-img convert -f qcow2 -O raw rhel-9.0-sample.qcow2 rhel-9.0-sample.raw

Push the image to S3:

    $ aws s3 cp rhel-9.0-sample.raw s3://s3-bucket-name

Note: This procedure could take a few minutes. After completion, you can check that your image uploaded successfully to your S3 bucket by using the AWS S3 Console.
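As an alternative to the console check, which is not part of the original procedure, you can list the bucket contents from the CLI:

    $ aws s3 ls s3://s3-bucket-name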
3.4.5. Importing your image as a snapshot
To launch a RHEL instance in the Amazon Elastic Compute Cloud (EC2) service, you require an Amazon Machine Image (AMI). To create an AMI of your system, you must first upload a snapshot of your RHEL system image to EC2.
Procedure
Create a file to specify a bucket and path for your image. Name the file containers.json. In the sample that follows, replace s3-bucket-name with your bucket name and s3-key with your key. You can get the key for the image by using the Amazon S3 Console. For example:
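A sketch of the expected file, following the disk-container format documented for the aws ec2 import-snapshot command:

    {
        "Description": "rhel-9.0-sample.raw",
        "Format": "raw",
        "UserBucket": {
            "S3Bucket": "s3-bucket-name",
            "S3Key": "s3-key"
        }
    }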
Import the image as a snapshot. This example uses a public Amazon S3 file; you can use the Amazon S3 Console to change permissions settings on your bucket.

    $ aws ec2 import-snapshot --disk-container file://containers.json

The terminal displays a message such as the following. Note the ImportTaskID within the message.
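A representative response, shaped after the documented import-snapshot output; the exact fields and values vary by task:

    {
        "ImportTaskId": "import-snap-06cea01fa0f1166a8",
        "SnapshotTaskDetail": {
            "Status": "active",
            "Format": "RAW",
            "UserBucket": {
                "S3Bucket": "s3-bucket-name",
                "S3Key": "rhel-9.0-sample.raw"
            },
            "Progress": "3",
            "StatusMessage": "pending"
        }
    }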
Track the progress of the import by using the describe-import-snapshot-tasks command. Include the ImportTaskID.

    $ aws ec2 describe-import-snapshot-tasks --import-task-ids import-snap-06cea01fa0f1166a8

The returned message shows the current status of the task. When complete, Status shows completed. Within the status, note the snapshot ID.
3.4.6. Creating an AMI from the uploaded snapshot
To launch a RHEL instance in the Amazon Elastic Compute Cloud (EC2) service, you require an Amazon Machine Image (AMI). To create an AMI of your system, you can use a RHEL system snapshot that you previously uploaded.
Procedure
- Go to the AWS EC2 Dashboard.
- Under Elastic Block Store, select Snapshots.
- Search for your snapshot ID (for example, snap-0e718930bd72bcda0).
- Name your image.
- Under Virtualization type, choose Hardware-assisted virtualization.
- Click Create. In the note regarding image creation, there is a link to your image.
Click on the image link. Your image shows up under Images>AMIs.
Note: Alternatively, you can use the AWS CLI register-image command to create an AMI from a snapshot. See register-image for more information. An example follows.

    $ aws ec2 register-image \
        --name "myimagename" --description "myimagedescription" --architecture x86_64 \
        --virtualization-type hvm --root-device-name "/dev/sda1" --ena-support \
        --block-device-mappings "{\"DeviceName\": \"/dev/sda1\",\"Ebs\": {\"SnapshotId\": \"snap-0ce7f009b69ab274d\"}}"

You must specify the root device volume /dev/sda1 as your root-device-name. For conceptual information about device mapping for AWS, see Example block device mapping.
3.4.7. Launching an instance from the AMI
To launch and configure an Amazon Elastic Compute Cloud (EC2) instance, use an Amazon Machine Image (AMI).
Procedure
- From the AWS EC2 Dashboard, select Images and then AMIs.
- Right-click on your image and select Launch.
Choose an Instance Type that meets or exceeds the requirements of your workload.
See Amazon EC2 Instance Types for information about instance types.
Click Next: Configure Instance Details.
- Enter the Number of instances you want to create.
- For Network, select the VPC you created when setting up your AWS environment. Select a subnet for the instance or create a new subnet.
Select Enable for Auto-assign Public IP.
Note: These are the minimum configuration options necessary to create a basic instance. Review additional options based on your application requirements.
- Click Next: Add Storage. Verify that the default storage is sufficient.
Click Next: Add Tags.
Note: Tags can help you manage your AWS resources. See Tagging Your Amazon EC2 Resources for information about tagging.
- Click Next: Configure Security Group. Select the security group you created when setting up your AWS environment.
- Click Review and Launch. Verify your selections.
Click Launch. You are prompted to select an existing key pair or create a new key pair. Select the key pair you created when setting up your AWS environment.
Note: Verify that the permissions for your private key are correct. Use the command chmod 400 <keyname>.pem to change the permissions, if necessary.
- Click Launch Instances.
Click View Instances. You can name the instance(s).
You can now launch an SSH session to your instance(s) by selecting an instance and clicking Connect. Use the example provided for A standalone SSH client.
Note: Alternatively, you can launch an instance by using the AWS CLI. See Launching, Listing, and Terminating Amazon EC2 Instances in the Amazon documentation for more information.
3.4.8. Attaching Red Hat subscriptions
Using the subscription-manager command, you can register and attach your Red Hat subscription to a RHEL instance.
Prerequisites
- You must have enabled your subscriptions.
Procedure
Register your system.
    # subscription-manager register

Attach your subscriptions.
- You can use an activation key to attach subscriptions. See Creating Red Hat Customer Portal Activation Keys for more information.
- Alternatively, you can manually attach a subscription by using the subscription pool ID (Pool ID), as shown in the example after this list. See Attaching a host-based subscription to hypervisors.
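A minimal sketch of the pool-based route; the pool ID is a placeholder that you first look up on the instance:

    # subscription-manager list --available    # note the Pool ID of the wanted subscription
    # subscription-manager attach --pool=<pool_id>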
Optional: To collect various system metrics about the instance in the Red Hat Hybrid Cloud Console, you can register the instance with Red Hat Insights.
    # insights-client --register --display-name <display_name_value>

For information about further configuration of Red Hat Insights, see Client Configuration Guide for Red Hat Insights.
3.4.9. Setting up automatic registration on AWS Gold Images
To make deploying RHEL 9 virtual machines on Amazon Web Services (AWS) faster and more convenient, you can set up Gold Images of RHEL 9 to be automatically registered to the Red Hat Subscription Manager (RHSM).
Prerequisites
- You have downloaded the latest RHEL 9 Gold Image for AWS. For instructions, see Using Gold Images on AWS.

Note: An AWS account can only be attached to a single Red Hat account at a time. Therefore, ensure no other users require access to the AWS account before attaching it to your Red Hat one.
Procedure
- Upload the Gold Image to AWS. For instructions, see Uploading the Red Hat Enterprise Linux image to AWS.
- Create VMs by using the uploaded image. They will be automatically subscribed to RHSM.
Verification
In a RHEL 9 VM created by using the above instructions, verify that the system is registered to RHSM by executing the subscription-manager identity command. On a successfully registered system, this displays the UUID of the system. For example:

    # subscription-manager identity
    system identity: fdc46662-c536-43fb-a18a-bbcb283102b7
    name: 192.168.122.222
    org name: 6340056
    org ID: 6340056
Chapter 4. Configuring a Red Hat High Availability cluster on AWS
To create a cluster where RHEL nodes automatically redistribute their workloads if a node failure occurs, use the Red Hat High Availability Add-On. Such high availability (HA) clusters can also be hosted on public cloud platforms, including AWS. Creating RHEL HA clusters on AWS is similar to creating HA clusters in non-cloud environments.
To configure a Red Hat HA cluster on Amazon Web Services (AWS) using EC2 instances as cluster nodes, see the following sections. Note that you have several options for obtaining the Red Hat Enterprise Linux (RHEL) images you use for your cluster. For information on image options for AWS, see Red Hat Enterprise Linux Image Options on AWS. Before you begin, ensure that you have completed the following prerequisites:
- Sign up for a Red Hat Customer Portal account.
- Sign up for AWS and set up your AWS resources. See Setting Up with Amazon EC2 for more information.
4.1. The benefits of using high-availability clusters on public cloud platforms
A high-availability (HA) cluster is a set of computers (called nodes) that are linked together to run a specific workload. The purpose of HA clusters is to provide redundancy in case of a hardware or software failure. If a node in the HA cluster fails, the Pacemaker cluster resource manager distributes the workload to other nodes and no noticeable downtime occurs in the services that are running on the cluster.
You can also run HA clusters on public cloud platforms. In this case, you would use virtual machine (VM) instances in the cloud as the individual cluster nodes. Using HA clusters on a public cloud platform has the following benefits:
- Improved availability: In case of a VM failure, the workload is quickly redistributed to other nodes, so running services are not disrupted.
- Scalability: Additional nodes can be started when demand is high and stopped when demand is low.
- Cost-effectiveness: With the pay-as-you-go pricing, you pay only for nodes that are running.
- Simplified management: Some public cloud platforms offer management interfaces to make configuring HA clusters easier.
To enable HA on your Red Hat Enterprise Linux (RHEL) systems, Red Hat offers a High Availability Add-On. The High Availability Add-On provides all necessary components for creating HA clusters on RHEL systems. The components include high availability service management and cluster administration tools.
4.2. Creating the AWS Access Key and AWS Secret Access Key
You need to create an AWS Access Key and AWS Secret Access Key before you install the AWS CLI. The fencing and resource agent APIs use the AWS Access Key and Secret Access Key to connect to each node in the cluster.
Prerequisites
- Your IAM user account must have Programmatic access. See Setting up the AWS Environment for more information.
Procedure
- Launch the AWS Console.
- Click on your AWS Account ID to display the drop-down menu and select My Security Credentials.
- Click Users.
- Select the user and open the Summary screen.
- Click the Security credentials tab.
- Click Create access key.
- Download the .csv file (or save both keys). You need to enter these keys when creating the fencing device.
4.3. Installing the AWS CLI
Many of the procedures required to manage HA clusters in AWS include using the AWS CLI.
Prerequisites
- You have created an AWS Access Key ID and an AWS Secret Access Key, and have access to them. For instructions and details, see Quickly Configuring the AWS CLI.
Procedure
Install the AWS command line tools by using the dnf command:

    # dnf install awscli

Use the aws --version command to verify that you installed the AWS CLI:

    $ aws --version
    aws-cli/1.19.77 Python/3.6.15 Linux/5.14.16-201.fc34.x86_64 botocore/1.20.77

Configure the AWS command line client according to your AWS access details:

    $ aws configure
    AWS Access Key ID [None]:
    AWS Secret Access Key [None]:
    Default region name [None]:
    Default output format [None]:
4.4. Creating an HA EC2 instance
Complete the following steps to create the instances that you use as your HA cluster nodes. Note that you have a number of options for obtaining the RHEL images you use for your cluster. See Red Hat Enterprise Linux Image options on AWS for information about image options for AWS.
You can create and upload a custom image that you use for your cluster nodes, or you can use a Gold Image or an on-demand image.
Prerequisites
- You have set up an AWS environment. For more information, see Setting Up with Amazon EC2.
Procedure
- From the AWS EC2 Dashboard, select Images and then AMIs.
- Right-click on your image and select Launch.
Choose an Instance Type that meets or exceeds the requirements of your workload. Depending on your HA application, each instance may need to have higher capacity.
See Amazon EC2 Instance Types for information about instance types.
Click Next: Configure Instance Details.
Enter the Number of instances you want to create for the cluster. This example procedure uses three cluster nodes.
Note: Do not launch into an Auto Scaling Group.
- For Network, select the VPC you created in Setting up the AWS environment. Select a subnet for the instance or create a new subnet.
Select Enable for Auto-assign Public IP.

Note: These are the minimum configuration options necessary to create a basic instance. Review additional options based on your specific HA application requirements.
- Click Next: Add Storage and verify that the default storage is sufficient. You do not need to modify these settings unless your HA application requires other storage options.
Click Next: Add Tags.
Note: Tags can help you manage your AWS resources. See Tagging Your Amazon EC2 Resources for information about tagging.
- Click Next: Configure Security Group. Select the existing security group you created in Setting up the AWS environment.
- Click Review and Launch and verify your selections.
- Click Launch. You are prompted to select an existing key pair or create a new key pair. Select the key pair you created when Setting up the AWS environment.
- Click Launch Instances.
Click View Instances. You can name the instance(s).
Note: Alternatively, you can launch instances by using the AWS CLI. See Launching, Listing, and Terminating Amazon EC2 Instances in the Amazon documentation for more information.
4.5. Configuring the private key
You must complete the following configuration tasks on the private SSH key file (.pem) before you can use it in an SSH session.
Procedure
- Move the key file from the Downloads directory to your Home directory or to your ~/.ssh directory.
- Change the permissions of the key file so that only the root user can read it:

    # chmod 400 KeyName.pem
4.6. Connecting to an EC2 instance
You can connect to an EC2 instance by using the AWS Console. Complete the following steps on each node.
Procedure
- Launch the AWS Console and select the EC2 instance.
- Click Connect and select A standalone SSH client.
- From your SSH terminal session, connect to the instance by using the AWS example provided in the pop-up window. Add the correct path to your KeyName.pem file if the path is not shown in the example.
4.7. Installing the High Availability packages and agents
On each of the nodes, you need to install the High Availability packages and agents to be able to configure a Red Hat High Availability cluster on AWS.
Procedure
Remove the AWS Red Hat Update Infrastructure (RHUI) client:

    $ sudo -i
    # dnf -y remove rh-amazon-rhui-client*

Register the VM with Red Hat:

    # subscription-manager register

Disable all repositories:

    # subscription-manager repos --disable=*

Enable the RHEL 9 Server HA repositories:

    # subscription-manager repos --enable=rhel-9-for-x86_64-highavailability-rpms

Update the RHEL AWS instance:

    # dnf update -y

Install the Red Hat High Availability Add-On software packages, along with the AWS fencing agent, from the High Availability channel:

    # dnf install pcs pacemaker fence-agents-aws

The user hacluster was created during the pcs and pacemaker installation in the previous step. Create a password for hacluster on all cluster nodes. Use the same password for all nodes:

    # passwd hacluster

Add the high-availability service to the RHEL Firewall if firewalld.service is installed:

    # firewall-cmd --permanent --add-service=high-availability
    # firewall-cmd --reload

Start the pcsd service and enable it to start on boot:

    # systemctl start pcsd.service
    # systemctl enable pcsd.service
Edit
/etc/hostsand add RHEL host names and internal IP addresses. For more information, see the Red Hat Knowledgebase solution How should the /etc/hosts file be set up on RHEL cluster nodes?.
Verification
Ensure the
pcsservice is running.Copy to Clipboard Copied! Toggle word wrap Toggle overflow
4.8. Creating a cluster
Complete the following steps to create the cluster of nodes.
Procedure
On one of the nodes, enter the following command to authenticate the pcs user hacluster. In the command, specify the name of each node in the cluster:

    # pcs host auth <hostname1> <hostname2> <hostname3>

Example:
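A representative run, assuming three nodes named node01 to node03 and the hacluster password that you created earlier:

    [root@node01 clouduser]# pcs host auth node01 node02 node03
    Username: hacluster
    Password:
    node01: Authorized
    node02: Authorized
    node03: Authorized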
Create the cluster:
    # pcs cluster setup <cluster_name> <hostname1> <hostname2> <hostname3>

Example:
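A representative invocation for the same hypothetical nodes (the command prints extensive progress output):

    [root@node01 clouduser]# pcs cluster setup new_cluster node01 node02 node03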
Verification
Enable the cluster:

    [root@node01 clouduser]# pcs cluster enable --all
    node02: Cluster Enabled
    node03: Cluster Enabled
    node01: Cluster Enabled

Start the cluster:

    [root@node01 clouduser]# pcs cluster start --all
    node02: Starting Cluster...
    node03: Starting Cluster...
    node01: Starting Cluster...
4.9. Configuring fencing
Fencing configuration ensures that a malfunctioning node on your AWS cluster is automatically isolated, which prevents the node from consuming the cluster’s resources or compromising the cluster’s functionality.
To configure fencing on an AWS cluster, you can use multiple methods:
- A standard procedure for default configuration.
- An alternate configuration procedure for more advanced configuration, focused on automation.
Prerequisites
- You must be using the fence_aws fencing agent. To obtain fence_aws, install the fence-agents-aws package on your cluster.
Standard procedure
Enter the following AWS metadata query to get the Instance ID for each node. You need these IDs to configure the fence device. See Instance Metadata and User Data for additional information.
    # echo $(curl -s http://169.254.169.254/latest/meta-data/instance-id)

Example:

    [root@ip-10-0-0-48 ~]# echo $(curl -s http://169.254.169.254/latest/meta-data/instance-id)
    i-07f1ac63af0ec0ac6

Enter the following command to configure the fence device. Use the pcmk_host_map parameter to map the RHEL host name to the Instance ID. Use the AWS Access Key and AWS Secret Access Key that you previously set up.

    # pcs stonith \
        create <name> fence_aws access_key=<access-key> secret_key=<secret-access-key> \
        region=<region> pcmk_host_map="rhel-hostname-1:Instance-ID-1;rhel-hostname-2:Instance-ID-2;rhel-hostname-3:Instance-ID-3" \
        power_timeout=240 pcmk_reboot_timeout=480 pcmk_reboot_retries=4

Example:

    [root@ip-10-0-0-48 ~]# pcs stonith \
        create clusterfence fence_aws access_key=AKIAI123456MRMJA secret_key=a75EYIG4RVL3hdsdAslK7koQ8dzaDyn5yoIZ/ \
        region=us-east-1 pcmk_host_map="ip-10-0-0-48:i-07f1ac63af0ec0ac6;ip-10-0-0-46:i-063fc5fe93b4167b2;ip-10-0-0-58:i-08bd39eb03a6fd2c7" \
        power_timeout=240 pcmk_reboot_timeout=480 pcmk_reboot_retries=4

- To ensure immediate and complete fencing, disable ACPI Soft-Off on all cluster nodes. For information about disabling ACPI Soft-Off, see Disabling ACPI for use with integrated fence device.
Alternate procedure
Obtain the VPC ID of the cluster.
    # aws ec2 describe-vpcs --output text --filters "Name=tag:Name,Values=<clustername>-vpc" --query 'Vpcs[*].VpcId'
    vpc-06bc10ac8f6006664

By using the VPC ID of the cluster, obtain the VPC instances:

    $ aws ec2 describe-instances --output text --filters "Name=vpc-id,Values=vpc-06bc10ac8f6006664" --query 'Reservations[*].Instances[*].{Name:Tags[?Key==Name]|[0].Value,Instance:InstanceId}' | grep "\-node[a-c]"
    i-0b02af8927a895137 <clustername>-nodea-vm
    i-0cceb4ba8ab743b69 <clustername>-nodeb-vm
    i-0502291ab38c762a5 <clustername>-nodec-vm

Use the obtained instance IDs to configure fencing on each node in the cluster. For example, to configure a fencing device on all nodes in a cluster:

    [root@nodea ~]# CLUSTER=<clustername> && pcs stonith create fence${CLUSTER} fence_aws access_key=XXXXXXXXXXXXXXXXXXXX pcmk_host_map=$(for NODE \
        in node{a..c}; do ssh ${NODE} "echo -n \${HOSTNAME}:\$(curl -s http://169.254.169.254/latest/meta-data/instance-id)\;"; done) \
        pcmk_reboot_retries=4 pcmk_reboot_timeout=480 power_timeout=240 region=xx-xxxx-x secret_key=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

For information about specific parameters for creating fencing devices, see the fence_aws man page or the Configuring and managing high availability clusters guide.

- To ensure immediate and complete fencing, disable ACPI Soft-Off on all cluster nodes. For information about disabling ACPI Soft-Off, see Disabling ACPI for use with integrated fence device.
Verification
Display the configured fencing devices and their parameters on your nodes:
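One way to display the configuration, as a sketch; the pcs stonith config subcommand prints each configured fence device with its parameters:

    # pcs stonith config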
    # pcs stonith fence <awsnodename>

Note: The command response may take several minutes to display. If you watch the active terminal session for the node being fenced, you see that the terminal connection is immediately terminated after you enter the fence command.
Example:
    [root@ip-10-0-0-48 ~]# pcs stonith fence ip-10-0-0-58
    Node: ip-10-0-0-58 fenced

Check the status to verify that the node is fenced:
    # pcs status

Start the node that was fenced in the previous step:
    # pcs cluster start <awshostname>

Check the status to verify the node started:

    # pcs status
4.10. Installing the AWS CLI on cluster nodes
Previously, you installed the AWS CLI on your host system. You need to install the AWS CLI on cluster nodes before you configure the network resource agents.
Complete the following procedure on each cluster node.
Prerequisites
- You must have created an AWS Access Key and AWS Secret Access Key. See Creating the AWS Access Key and AWS Secret Access Key for more information.
Procedure
- Install the AWS CLI. For instructions, see Installing the AWS CLI.
Verify that the AWS CLI is configured properly. The instance IDs and instance names should display.
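One minimal check is to list the instance IDs, which confirms that the CLI can reach EC2 with your credentials; this sketch omits the instance names, which you can add by querying the Name tag:

    $ aws ec2 describe-instances --output text --query 'Reservations[*].Instances[*].[InstanceId]'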
4.11. Setting up IP address resources on AWS
To ensure that clients can still reach resources that the cluster manages through IP addresses after a failover, the cluster must include IP address resources, which use specific network resource agents.
The RHEL HA Add-On provides a set of resource agents, which create IP address resources to manage various types of IP addresses on AWS. To decide which resource agent to configure, consider the type of AWS IP addresses that you want the HA cluster to manage:
- If you need to manage an IP address exposed to the internet, use the awseip network resource.
- If you need to manage a private IP address limited to a single AWS Availability Zone (AZ), use the awsvip and IPaddr2 network resources.
- If you need to manage an IP address that can move across multiple AWS AZs within the same AWS region, use the aws-vpc-move-ip network resource.
If the HA cluster does not manage any IP addresses, the resource agents for managing virtual IP addresses on AWS are not required. If you need further guidance for your specific deployment, consult with your AWS provider.
4.11.1. Creating an IP address resource to manage an IP address exposed to the internet
To ensure that high-availability (HA) clients can access a RHEL 9 node that uses public-facing internet connections, configure an AWS Secondary Elastic IP Address (awseip) resource to use an elastic IP address.
Prerequisites
- You have a previously configured cluster.
- Your cluster nodes must have access to the RHEL HA repositories. For more information, see Installing the High Availability packages and agents.
- You have set up the AWS CLI. For instructions, see Installing the AWS CLI.
Procedure
Install the resource-agents-cloud package:
# dnf install resource-agents-cloud
Using the AWS command-line interface (CLI), create an elastic IP address.
[root@ip-10-0-0-48 ~]# aws ec2 allocate-address --domain vpc --output text
eipalloc-4c4a2c45 vpc 35.169.153.122
Optional: Display the description of awseip. This shows the options and default operations for this agent.
# pcs resource describe awseip
Create the Secondary Elastic IP address resource that uses the IP address that you previously allocated with the AWS CLI. In addition, create a resource group that the Secondary Elastic IP address will belong to.
# pcs resource create <resource-id> awseip elastic_ip=<Elastic-IP-Address> allocation_id=<Elastic-IP-Allocation-ID> --group networking-group
Example:
# pcs resource create elastic awseip elastic_ip=35.169.153.122 allocation_id=eipalloc-4c4a2c45 --group networking-group
Verification
Display the status of the cluster to verify that the required resources are running.
# pcs status
The following output shows an example running cluster where the vip and elastic resources have been started as a part of the networking-group resource group.
Launch an SSH session from your local workstation to the elastic IP address that you previously created.
$ ssh -l <user-name> -i ~/.ssh/<KeyName>.pem <elastic-IP>
Example:
$ ssh -l ec2-user -i ~/.ssh/cluster-admin.pem 35.169.153.122
- Verify that the host to which you connected via SSH is the host associated with the elastic resource created.
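For orientation, the pcs status output referenced in the first verification step might include a resource section along the following lines; the node name, fence device, and companion resources shown here are illustrative, not verbatim output:
Full List of Resources:
  * clusterfence  (stonith:fence_aws):     Started ip-10-0-0-46
  * Resource Group: networking-group:
    * vip         (ocf:heartbeat:IPaddr2): Started ip-10-0-0-48
    * elastic     (ocf:heartbeat:awseip):  Started ip-10-0-0-48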
4.11.2. Creating an IP address resource to manage a private IP address limited to a single AWS Availability Zone
To ensure that high-availability (HA) clients on AWS can access a RHEL 9 node that uses a private IP address that can only move within a single AWS Availability Zone (AZ), configure an AWS Secondary Private IP Address (awsvip) resource to use a virtual IP address.
You can complete the following procedure on any node in the cluster.
Prerequisites
- You have a previously configured cluster.
- Your cluster nodes have access to the RHEL HA repositories. For more information, see Installing the High Availability packages and agents.
- You have set up the AWS CLI. For instructions, see Installing the AWS CLI.
Procedure
Install the resource-agents-cloud package:
# dnf install resource-agents-cloud
Optional: View the awsvip description. This shows the options and default operations for this agent.
# pcs resource describe awsvip
Create a Secondary Private IP address with an unused private IP address in the VPC CIDR block. In addition, create a resource group that the Secondary Private IP address will belong to.
# pcs resource create <resource-id> awsvip secondary_private_ip=<Unused-IP-Address> --group <group-name>
Example:
[root@ip-10-0-0-48 ~]# pcs resource create privip awsvip secondary_private_ip=10.0.0.68 --group networking-group
Create a virtual IP resource. This is a VPC IP address that can be rapidly remapped from the fenced node to the failover node, masking the failure of the fenced node within the subnet. Ensure that the virtual IP belongs to the same resource group as the Secondary Private IP address that you created in the previous step.
# pcs resource create <resource-id> IPaddr2 ip=<secondary-private-IP> --group <group-name>
Example:
[root@ip-10-0-0-48 ~]# pcs resource create vip IPaddr2 ip=10.0.0.68 --group networking-group
Verification
Display the status of the cluster to verify that the required resources are running.
# pcs status
The following output shows an example running cluster where the vip and privip resources have been started as a part of the networking-group resource group.
4.11.3. Creating an IP address resource to manage an IP address that can move across multiple AWS Availability Zones
To ensure that high-availability (HA) clients on AWS can access a RHEL 9 node that can be moved across multiple AWS Availability Zones within the same AWS region, configure an aws-vpc-move-ip resource to use an overlay IP address that is routed through the VPC route table.
Prerequisites
- You have a previously configured cluster.
- Your cluster nodes have access to the RHEL HA repositories. For more information, see Installing the High Availability packages and agents.
- You have set up the AWS CLI. For instructions, see Installing the AWS CLI.
An Identity and Access Management (IAM) user is configured on your cluster and has the following permissions:
- Modify routing tables
- Create security groups
- Create IAM policies and roles
Procedure
Install the resource-agents-cloud package:
# dnf install resource-agents-cloud
Optional: View the aws-vpc-move-ip description. This shows the options and default operations for this agent.
# pcs resource describe aws-vpc-move-ip
Set up an OverlayIPAgent IAM policy for the IAM user.
- In the AWS console, navigate to Services → IAM → Policies → Create, and create an OverlayIPAgent policy.
- Input the following configuration, and change the <region>, <account-id>, and <ClusterRouteTableID> values to correspond with your cluster.
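As a minimal sketch of such a policy, the following allows the resource agent to read route tables and to create or replace routes in the cluster route table; the Sid values are arbitrary labels, and the placeholders must be replaced as described above:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DescribeRouteTables",
            "Effect": "Allow",
            "Action": "ec2:DescribeRouteTables",
            "Resource": "*"
        },
        {
            "Sid": "UpdateClusterRoutes",
            "Effect": "Allow",
            "Action": ["ec2:CreateRoute", "ec2:ReplaceRoute"],
            "Resource": "arn:aws:ec2:<region>:<account-id>:route-table/<ClusterRouteTableID>"
        }
    ]
}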
In the AWS console, disable the Source/Destination Check function on all nodes in the cluster. To do this, right-click each node → Networking → Change Source/Destination Checks. In the pop-up message that appears, click Yes, Disable.
Create a route for the cluster in the existing VPC route table. To do so, use the following command on one node in the cluster:
# aws ec2 create-route --route-table-id <ClusterRouteTableID> --destination-cidr-block <NewCIDRblockIP/NetMask> --instance-id <ClusterNodeID>
In the command, replace values as follows:
- ClusterRouteTableID: The route table ID for the existing cluster VPC route table.
- NewCIDRblockIP/NetMask: A new IP address and netmask outside of the VPC classless inter-domain routing (CIDR) block. For example, if the VPC CIDR block is 172.31.0.0/16, the new IP address/netmask can be 192.168.0.15/32.
- ClusterNodeID: The instance ID for another node in the cluster.
On one of the nodes in the cluster, create an aws-vpc-move-ip resource that uses a free IP address that is accessible to the client. The following example creates a resource named vpcip that uses IP 192.168.0.15.
# pcs resource create vpcip aws-vpc-move-ip ip=192.168.0.15 interface=eth0 routing_table=<ClusterRouteTableID>
On all nodes in the cluster, edit the /etc/hosts file, and add a line with the IP address of the newly created resource. For example:
192.168.0.15 vpcip
Verification
Test the failover ability of the new aws-vpc-move-ip resource:
# pcs resource move vpcip
If the failover succeeded, remove the automatically created constraint after the move of the vpcip resource:
# pcs resource clear vpcip
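Optionally, you can also confirm on the AWS side that the overlay route now points at the node that runs the resource. The following sketch assumes the example IP address used in this procedure:
$ aws ec2 describe-route-tables --route-table-ids <ClusterRouteTableID> \
    --query "RouteTables[].Routes[?DestinationCidrBlock=='192.168.0.15/32']"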
Chapter 5. Configuring the OpenTelemetry Collector for RHEL on public cloud platforms
When running RHEL on Amazon Web Services (AWS), you can use the OpenTelemetry (OTel) framework to maintain and debug your RHEL instances.
RHEL includes the OTel Collector service, which you can use to manage logs. The OTel Collector gathers, processes, transforms, and exports logs to and from various formats and external back ends. You can also use the OTel Collector to aggregate the collected data and generate metrics useful for analytics services.
5.1. How the OpenTelemetry Collector works
For RHEL on AWS, you can configure the OTel Collector service to receive, process, and export logs between the RHEL instance and the AWS telemetry analytics service to automatically manage telemetry data on your RHEL instance. The OTel Collector is a component of the OTel ecosystem, and has three stages in its workflow: a receiver, a processor, and an exporter.
You can configure the workflow for any of these components in a YAML file based on your specific use case. Typically, the OTel Collector works as follows:
- A receiver collects telemetry data from data sources, such as applications and services.
- After the receiver ingests data, the data passes to a processing phase, in which a chain of processors may be defined to transform it.
- The exporter sends the telemetry data to the required destination.
5.2. Integration of OpenTelemetry with AWS CloudWatch Logs
Integrating OTel with Amazon Web Services (AWS) for log management involves configuring the OTel Collector on your RHEL instance to export logs to AWS. The integration works as follows:
- Configuring the exporter for the OTel Collector
- Enabling log connections
- Exporting data from the RHEL instance to AWS CloudWatch logs.
As a result, you can gather log data from various sources at a single location to effectively manage log analysis.
Of the available AWS CloudWatch features, RHEL instances currently support only logging. For details, see AWS CloudWatch Logs exporter.
5.3. Configuring the OpenTelemetry Collector for journald logging
To configure the OpenTelemetry (OTel) Collector, you need to modify the default receiver configuration to capture the journald service logs. This configuration involves defining the file path, log format, and parsing rules. With this setup, the Collector processes and exports logs to services, such as AWS CloudWatch Logs, to improve observability and metrics analysis of system components.
Procedure
Install the opentelemetry-collector package on a RHEL instance:
# dnf install -y opentelemetry-collector
Enable and start the service to transfer the logs from the RHEL instance to AWS CloudWatch Logs:
# systemctl enable --now opentelemetry-collector.service
To configure the OTel Collector to forward journald logs from the RHEL instance, create and edit the /etc/opentelemetry-collector/configs/10-cloudwatch-exporter.yaml file (a configuration sketch follows this procedure).
Restart the OTel Collector service:
# systemctl restart opentelemetry-collector.service
- Create an IAM role for the AWS CloudWatch agent from the AWS console. For instructions, see Create IAM roles and users for use with the CloudWatch agent.
- Attach the role to the RHEL instance through the AWS Console. For instructions, see Attach an IAM role to an instance.
- Restart the RHEL instance from the AWS console to enable log export automatically.
Optional: If you no longer want to export logs, stop log transfer from the RHEL instance:
# systemctl stop opentelemetry-collector.service
Optional: If you no longer need this service, permanently disable log transfer:
# systemctl disable opentelemetry-collector.service
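The 10-cloudwatch-exporter.yaml configuration referenced in this procedure might look like the following sketch, which wires the journald receiver to the AWS CloudWatch Logs exporter. The log group name, log stream name, region, and journald directory are assumptions to adapt to your environment:
receivers:
  journald:
    directory: /run/log/journal      # where journald stores its journal files
processors:
  batch: {}                          # batch log records before export
exporters:
  awscloudwatchlogs:
    log_group_name: "rhel-instance-logs"
    log_stream_name: "journald"
    region: "us-east-1"
service:
  pipelines:
    logs:
      receivers: [journald]
      processors: [batch]
      exporters: [awscloudwatchlogs]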
5.4. Receivers for the OTel Collector
Depending on the configuration, receivers gather telemetry data, such as logs and patterns of software use, from various devices and services at a single location for improved observability.
Journald receiver
The journald receiver in the OTel Collector captures logs from the journald service. This receiver accepts logs from system and application services, such as logs from the kernel, users, and applications, to provide improved observability. You can use journald logging for features like binary storage for faster indexing, user-based permissions, and log size management.
For details, see the config options in Journald Receiver.
5.5. Processors for the OTel Collector
Processors act as an intermediary between the receiver and the exporter and manipulate the data by, for example, adding, filtering, deleting, or transforming fields. The selection and order of processors depend on the signal type.
Resource detection for AWS environment
The resource detection processor detects information about the AWS environment that hosts the instance, and adds these details to the telemetry data before exportation.
For the snippet, see AWS EC2 configuration.
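As a minimal sketch, enabling the EC2 detector in the resource detection processor can look like this; the processor then stamps attributes such as the region and instance ID onto outgoing telemetry:
processors:
  resourcedetection:
    detectors: [ec2]    # read EC2 instance metadata and add it as resource attributes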
5.6. Exporters for the OTel Collector
Exporters transmit processed data to specified devices or services, such as AWS CloudWatch Logs and the Debug exporter, based on the configuration and signal type. Exporters ensure compatibility with target services and facilitate integration with various systems.
AWS Cloudwatch Logs exporter
Note that this configuration currently supports only log-type signals. Typically, it works as follows:
- The receiver sends logs to the OTel Collector.
- The processor modifies or enhances the logs for export.
- The awscloudwatchlogs configuration sends the processed telemetry to AWS CloudWatch Logs.
For details, see AWS CloudWatch Logs exporter.
In addition, the Collector provides extensions and processors to filter sensitive data, limit memory usage, and keep telemetry data on the disk for a certain period of time in case of a connection loss.
Debug exporter
The Debug exporter prints traces and metrics to the standard output. This exporter supports all signal types. You can modify the OTel Collector YAML configuration to include the Debug exporter, which prints the telemetry data to the console. To make sure that journald captures this output, you can also configure the Collector service if required.
For details, see Debug exporter.
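A minimal sketch of a Debug exporter pipeline, reusing the journald receiver from the earlier procedure, might look like this:
exporters:
  debug:
    verbosity: detailed    # print full telemetry records to standard output
service:
  pipelines:
    logs:
      receivers: [journald]
      exporters: [debug]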
Chapter 6. Configuring RHEL on AWS with Secure Boot
Secure Boot is a mechanism in the Unified Extensible Firmware Interface (UEFI) specification that controls the execution of programs at boot time. Secure Boot verifies the digital signatures of the boot loader and its components at boot time to ensure that only trusted and authorized programs are executed, and to prevent unauthorized programs from loading. You can use this feature both for AWS Marketplace Red Hat Enterprise Linux Amazon Machine Images (AMIs) and for custom RHEL AMIs.
6.1. Types of RHEL AMI on AWS
AWS Marketplace RHEL AMI
The AWS Marketplace provides pre-configured Red Hat Enterprise Linux (RHEL) Amazon Machine Images (AMIs) that are tailored for specific use cases such as data processing, system management, and web development. This type of ready-to-use image reduces setup effort by minimizing the manual installation and configuration time required for the operating system and software packages.
Custom RHEL AMI
A custom RHEL AMI offers customers and organizations the flexibility to build and deploy tailored environments that meet specific application and workflow requirements. By creating a custom RHEL AMI, you can use RHEL instances that are pre-installed with the necessary tools, configurations, and security policies. This customization provides greater control over the infrastructure.
6.2. Understanding Secure Boot for RHEL on cloud
Secure Boot is a feature of Unified Extensible Firmware Interface (UEFI) that ensures only trusted and digitally signed programs and components, such as the boot loader and kernel, are executed during boot time. Secure Boot verifies digital signatures against trusted keys stored in hardware, and aborts the boot process if it detects any components that have been tampered with or that are signed by untrusted entities. This prevents malicious software from compromising the operating system.
Secure Boot is an essential component for configuring a Confidential Virtual Machine (CVM), as it guarantees that only trusted entities are present in the boot chain. It provides authenticated access to specific device paths through defined interfaces, which ensures that only the latest configuration is used, and which also permanently overwrites earlier configurations. Additionally, when the Red Hat Enterprise Linux kernel boots with Secure Boot enabled, it enters the lockdown mode, which ensures that only kernel modules signed by a trusted vendor are loaded. Therefore, Secure Boot improves security of operating system boot sequence.
6.2.1. Components of Secure Boot
The Secure Boot mechanism consists of firmware, signature databases, cryptographic keys, the boot loader, hardware modules, and the operating system. The following are the components of the UEFI trusted variables:
- Key Exchange Key database (KEK): An exchange of public keys to establish trust between the RHEL operating system and the VM firmware. You can also update the Allowed Signature database (db) and the Forbidden Signature database (dbx) by using these keys.
- Platform Key database (PK): A self-signed single-key database to establish trust between the VM firmware and the cloud platform. The PK also updates the KEK database.
- Allowed Signature database (db): A database that maintains a list of certificates or binary hashes to check whether a binary file is allowed to boot on the system. Additionally, all certificates from db are imported to the .platform keyring of the RHEL kernel. This feature allows you to add and load signed third-party kernel modules in the lockdown mode.
- Forbidden Signature database (dbx): A database that maintains a list of certificates or binary hashes that are forbidden to boot on the system.
Binary files are checked against the dbx database and the Secure Boot Advanced Targeting (SBAT) mechanism. SBAT allows you to revoke older versions of specific binaries while keeping the certificate that signed those binaries valid.
6.2.2. Stages of Secure Boot for RHEL on Cloud
When a RHEL instance boots in the Unified Kernel Image (UKI) mode and with Secure Boot enabled, the RHEL instance interacts with the cloud service infrastructure in the following sequence:
- Initialization: When a RHEL instance boots, the cloud-hosted firmware initially boots and implements the Secure Boot mechanism.
- Variable store initialization: The firmware initializes UEFI variables from a variable store, a dedicated storage area for information that firmware needs to manage for the boot process and runtime operations. When the RHEL instance boots for the first time, the store is initialized from default values associated with the VM image.
- Boot loader: When booted, the firmware loads the first-stage boot loader. For the RHEL instance in an x86 UEFI environment, the first-stage boot loader is shim. The shim boot loader authenticates and loads the next stage of the boot process and acts as a bridge between UEFI and GRUB.
  - The shim x86 binary in RHEL is currently signed by the Microsoft Corporation UEFI CA 2011 Microsoft certificate so that the RHEL instance can boot in the Secure Boot enabled mode on various hardware and virtualized platforms where the Allowed Signature database (db) contains the default Microsoft certificates.
  - The shim binary extends the list of trusted certificates with Red Hat Secure Boot CA and, optionally, with the Machine Owner Key (MOK).
- UKI: The shim binary loads the RHEL UKI (the kernel-uki-virt package). The UKI is signed by the corresponding certificate, Red Hat Secure Boot Signing 504 on the x86_64 architecture, which can be found in the redhat-sb-certs package. This certificate is signed by Red Hat Secure Boot CA, and thus passes the check.
- UKI add-ons: To use the UKI cmdline extensions, the RHEL kernel checks their signatures against db, MOK, and certificates shipped with shim to ensure that the extensions are signed by either the operating system vendor (Red Hat) or a user.
When the RHEL kernel boots in the Secure Boot mode, it enters lockdown mode. After entering lockdown, the RHEL kernel adds the db keys to the .platform keyring and the MOK keys to the .machine keyring. During the kernel build process, standard RHEL kernel modules, such as kernel-modules-core, kernel-modules, and kernel-modules-extra, are signed with an ephemeral key pair, which consists of a private and a public key. After each kernel build completes, the private key is discarded, so it cannot be used to sign third-party modules. Instead, certificates from db and MOK can be used for this purpose.
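For example, signing a third-party module with your own key and enrolling that key as a MOK might look like the following sketch; the key file names and module path are assumptions, and the sign-file script is provided by the kernel-devel package:
# /usr/src/kernels/$(uname -r)/scripts/sign-file sha256 my_signing_key.priv my_signing_key.der my_module.ko
# mokutil --import my_signing_key.der
The mokutil --import step queues the key for enrollment, which you confirm in the MOK manager on the next reboot.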
6.3. Configuring a RHEL instance with Secure Boot on the AWS Marketplace
To ensure that your RHEL instance on AWS has a secure boot sequence, use Secure Boot. To configure a Red Hat Enterprise Linux instance with Secure Boot support on AWS, launch a RHEL Amazon Machine Image (AMI) from the AWS Marketplace that is pre-configured with the uefi-preferred boot mode. The uefi-preferred option enables support for the Unified Extensible Firmware Interface (UEFI) boot loader required for Secure Boot. Without UEFI, the Secure Boot feature does not work.
To avoid security issues, generate and keep private keys apart from the current RHEL instance. If Secure Boot secrets are stored on the same instance on which they are used, intruders can gain access to the secrets and escalate their privileges. For more information about launching an AWS EC2 instance, see Get started with Amazon EC2.
Prerequisites
The RHEL AMI has the uefi-preferred option enabled in boot settings:
$ aws ec2 describe-images --image-id <ami-099f85fc24d27c2a7> --region <us-east-2> | grep -E '"ImageId"|"Name"|"BootMode"'
"ImageId": "ami-099f85fc24d27c2a7",
"Name": "RHEL-9.6.0_HVM_GA-20250423-x86_64-0-Hourly2-GP3",
"BootMode": "uefi-preferred"
You have installed the following packages on the RHEL instance:
- awscli2
- python3
- openssl
- efivar
- keyutils
- edk2-ovmf
- python3-virt-firmware
Procedure
Check the platform status of the RHEL Marketplace AMI instance:
$ mokutil --sb-state
SecureBoot disabled
Platform is in Setup Mode
The setup mode allows updating the Secure Boot UEFI variables within the instance.
Create a new random universally unique identifier (UUID) and store it in a text file:
$ uuidgen --random > GUID.txt
Generate a new PK.key RSA private key and a self-signed PK.cer X.509 certificate for the Platform Key database:
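The generating command can take roughly the following shape; the 4096-bit key size and 3650-day validity are assumptions, while the Platform key common name and the DER output format follow the note below:
$ openssl req -newkey rsa:4096 -nodes -keyout PK.key -new -x509 -sha256 -days 3650 -subj "/CN=Platform key/" -outform DER -out PK.cer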
The openssl utility generates a common name Platform key for the certificate by setting the output format to Distinguished Encoding Rules (DER).
Generate a new KEK.key RSA private key and a self-signed KEK.cer X.509 certificate for the Key Exchange Key database:
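A corresponding sketch, with an assumed common name for the KEK certificate:
$ openssl req -newkey rsa:4096 -nodes -keyout KEK.key -new -x509 -sha256 -days 3650 -subj "/CN=Key Exchange Key/" -outform DER -out KEK.cer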
Generate a custom_db.cer custom certificate:
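A sketch along the same lines; the Signature Database key common name matches the key that the verification step at the end of this procedure displays:
$ openssl req -newkey rsa:4096 -nodes -keyout custom_db.key -new -x509 -sha256 -days 3650 -subj "/CN=Signature Database key/" -outform DER -out custom_db.cer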
Download the Microsoft Corporation UEFI CA 2011 certificate:
$ wget https://go.microsoft.com/fwlink/p/?linkid=321194 --user-agent="Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36" -O MicCorUEFCA2011_2011-06-27.crt
Download the updated UEFI Revocation List File of forbidden signatures (dbx) for x64 systems:
$ wget https://uefi.org/sites/default/files/resources/x64_DBXUpdate.bin
Generate the UEFI variables file by using the virt-fw-vars utility:
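A sketch of the invocation, assuming the edk2-ovmf variable template as the input store (the template path can differ between builds); the owner GUID comes from GUID.txt, and 77fa9abd-0359-4d32-bd60-28f4e78f784b is the Microsoft owner GUID:
$ virt-fw-vars --input /usr/share/edk2/ovmf/OVMF_VARS.secboot.fd \
    --output VARS \
    --set-pk $(cat GUID.txt) PK.cer \
    --add-kek $(cat GUID.txt) KEK.cer \
    --add-db $(cat GUID.txt) custom_db.cer \
    --add-db 77fa9abd-0359-4d32-bd60-28f4e78f784b MicCorUEFCA2011_2011-06-27.crt \
    --set-dbx x64_DBXUpdate.bin \
    --secure-boot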
For details on the virt-fw-vars utility, see the virt-fw-vars(1) man page on the system.
Convert the UEFI variables to the Extensible Firmware Interface (EFI) Signature List (ESL) format:
$ python3 /usr/share/doc/python3-virt-firmware/experimental/authfiles.py --input VARS --outdir .
$ for f in PK KEK db dbx; do tail -c +41 $f.auth > $f.esl; done
Note: Each GUID is an assigned value and represents an EFI parameter:
- 8be4df61-93ca-11d2-aa0d-00e098032b8c: EFI_GLOBAL_VARIABLE_GUID
- d719b2cb-3d3a-4596-a3bc-dad00e67656f: EFI_IMAGE_SECURITY_DATABASE_GUID
The EFI_GLOBAL_VARIABLE_GUID parameter maintains settings of the bootable devices and boot managers, while the EFI_IMAGE_SECURITY_DATABASE_GUID parameter represents the image security database for the Secure Boot variables db and dbx, and the storage of required keys and certificates.
- To transfer the database certificates to the target instance, use the efivar utility to manage UEFI environment variables.
To transfer PK.esl, enter:
# efivar -w -n 8be4df61-93ca-11d2-aa0d-00e098032b8c-PK -f PK.esl
To transfer KEK.esl, enter:
# efivar -w -n 8be4df61-93ca-11d2-aa0d-00e098032b8c-KEK -f KEK.esl
To transfer db.esl, enter:
# efivar -w -n d719b2cb-3d3a-4596-a3bc-dad00e67656f-db -f db.esl
To transfer the dbx.esl UEFI revocation list file for the x64 architecture, enter:
# efivar -w -n d719b2cb-3d3a-4596-a3bc-dad00e67656f-dbx -f dbx.esl
- Reboot the instance from the AWS console.
Verification
Verify that Secure Boot is enabled:
$ mokutil --sb-state
SecureBoot enabled
Use the keyctl utility to verify the kernel keyring for the custom certificate:
$ sudo keyctl list %:.platform
4 keys in keyring:
907254483: ---lswrv     0     0 asymmetric: Signature Database key: f064979641c24e1b935e402bdbc3d5c4672a1acc
...
6.4. Configuring a RHEL instance with Secure Boot by using a custom RHEL image
To ensure that your RHEL instance on AWS has a secure boot sequence, use Secure Boot. When a custom RHEL Amazon Machine Image (AMI) is registered, the image includes pre-stored Unified Extensible Firmware Interface (UEFI) variables for Secure Boot. This enables all instances launched from the RHEL AMI to use the Secure Boot mechanism with the required variables on the first boot.
Prerequisites
- You have created and uploaded an AWS AMI image. For details, see Creating and uploading AWS AMI.
You have installed the following packages:
- awscli2
- python3
- openssl
- efivar
- keyutils
- python3-virt-firmware
Procedure
Create a new random universally unique identifier (UUID) and store it in a text file:
$ uuidgen --random > GUID.txt
Generate a new RSA private key PK.key and a self-signed X.509 certificate PK.cer for the Platform Key database. The openssl utility generates a common name platform key for the certificate by setting the output format to Distinguished Encoding Rules (DER).
Generate a new RSA private key KEK.key and a self-signed X.509 certificate KEK.cer for the Key Exchange Key database.
Generate a custom certificate custom_db.cer. The openssl commands are analogous to the ones shown in the previous section.
Download the updated UEFI Revocation List File of forbidden signatures (dbx) for 64-bit systems:
$ wget https://uefi.org/sites/default/files/resources/x64_DBXUpdate.bin
Use the virt-fw-vars utility to generate the aws_blob.bin binary file from the keys, database certificates, and the UEFI variable store (a command sketch follows the list below). The customized blob consists of:
- PK.cer with a self-signed X.509 certificate
- KEK.cer and custom_db.cer with the owner group GUID and Privacy Enhanced Mail (PEM) format
- The x64_DBXUpdate.bin list downloaded from the database of excluded signatures (dbx)
- The 77fa9abd-0359-4d32-bd60-28f4e78f784b UUID, which is for the MicCorUEFCA2011_2011-06-27.crt Microsoft Corporation UEFI Certification Authority 2011 certificate
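A sketch of a virt-fw-vars invocation that produces such a blob, assuming your virt-firmware build provides the --output-aws option and an edk2 variable template is available as the input store:
$ virt-fw-vars --input /usr/share/edk2/ovmf/OVMF_VARS.secboot.fd \
    --output-aws aws_blob.bin \
    --set-pk $(cat GUID.txt) PK.cer \
    --add-kek $(cat GUID.txt) KEK.cer \
    --add-db $(cat GUID.txt) custom_db.cer \
    --add-db 77fa9abd-0359-4d32-bd60-28f4e78f784b MicCorUEFCA2011_2011-06-27.crt \
    --set-dbx x64_DBXUpdate.bin \
    --secure-boot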
Use the awscli2 utility to create and register the AMI from a disk snapshot with the required Secure Boot variables (a command sketch follows below):
- Reboot the instance from the AWS Console.
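A possible register-image invocation, sketched with placeholder values; the --uefi-data parameter takes the blob generated in the previous step:
$ aws ec2 register-image \
    --name <image-name> \
    --architecture x86_64 \
    --virtualization-type hvm \
    --ena-support \
    --root-device-name /dev/sda1 \
    --block-device-mappings "DeviceName=/dev/sda1,Ebs={SnapshotId=<snapshot-id>,DeleteOnTermination=true}" \
    --boot-mode uefi \
    --uefi-data $(cat aws_blob.bin)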
Verification
Verify that Secure Boot is enabled:
$ mokutil --sb-state
SecureBoot enabled
Use the keyctl utility to verify the kernel keyring for the custom certificate:
$ sudo keyctl list %:.platform
4 keys in keyring:
907254483: ---lswrv     0     0 asymmetric: Signature Database key: f064979641c24e1b935e402bdbc3d5c4672a1acc
...