Deploying RHEL 9 on Amazon Web Services


Red Hat Enterprise Linux 9

Obtaining RHEL system images and creating RHEL instances on AWS

Red Hat Customer Content Services

Abstract

To use Red Hat Enterprise Linux (RHEL) in a public cloud environment, you can create and deploy RHEL system images on various cloud platforms, including Amazon Web Services (AWS). You can also create and configure a Red Hat High Availability (HA) cluster on AWS.
The following chapters provide instructions for creating cloud RHEL instances and HA clusters on AWS. These processes include installing the required packages and agents, configuring fencing, and installing network resource agents.

Providing feedback on Red Hat documentation

We are committed to providing high-quality documentation and value your feedback. To help us improve, you can submit suggestions or report errors through the Red Hat Jira tracking system.

Procedure

  1. Log in to the Jira website.

    If you do not have an account, select the option to create one.

  2. Click Create in the top navigation bar.
  3. Enter a descriptive title in the Summary field.
  4. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation.
  5. Click Create at the bottom of the dialog.

Chapter 1. Introducing RHEL on public cloud platforms

Public cloud platforms offer computing resources as a service. Instead of using on-premise hardware, you can run your IT workloads, including Red Hat Enterprise Linux (RHEL) systems, as public cloud instances.

1.1. Benefits of using RHEL in a public cloud

Red Hat Enterprise Linux (RHEL) cloud instances on public cloud platforms have these benefits over on-premise RHEL systems or virtual machines (VMs):

Flexible and fine-grained allocation of resources

A RHEL cloud instance runs as a VM on a cloud platform. The platform is a cluster of remote servers that the cloud service provider maintains. You can select hardware resources at the software level. For example, you can select a CPU type or storage setup.

Unlike a local RHEL system, you are not limited by what your physical host can do. Instead, you can select from many features that the cloud provider offers.

Space and cost efficiency

You do not need to own on-premise servers to host cloud workloads. This removes the space, power, and maintenance needs for physical hardware.

On public cloud platforms, you pay the cloud provider for cloud instance usage. Costs depend on the hardware you use and how long you use it. You can control costs to meet your needs.

Software-controlled configurations

You can save a cloud instance configuration as data on the cloud platform and control it with software. With this configuration, you can create, remove, clone, or migrate instances easily. You can also manage a cloud instance remotely through a cloud provider console. The instance connects to remote storage by default.

You can back up a cloud instance as a snapshot at any time. You can then load the snapshot to restore the instance to the saved state.

Separation from the host and software compatibility

Similar to a local VM, a RHEL cloud instance runs on Kernel-based Virtual Machine (KVM) virtualization. However, the guest kernel is separate from the host operating system, and also from the client system that you use to connect to the instance.

You can install any operating system on the cloud instance. On a RHEL public cloud instance, you can run RHEL-specific applications that you cannot use on your local operating system.

If the instance operating system becomes unstable or compromised, it does not affect your client system.

1.2. Public cloud use cases for RHEL

Deploying applications on a public cloud offers many benefits, but might not be the most efficient solution for every scenario. If you are evaluating the migration of your Red Hat Enterprise Linux (RHEL) deployments to the public cloud, consider whether your use case will benefit from the advantages of the public cloud.

Beneficial use cases

  • Deploying public cloud instances is effective for increasing and decreasing the active computing power of your deployments, also known as scaling up and scaling down. Therefore, consider using RHEL on public cloud for the following scenarios:

    • Clusters with high peak workloads and low general performance requirements. Scaling up and down based on your demands can be efficient in terms of resource costs.
    • Setting up or expanding your clusters to a public cloud to avoid high upfront costs of setting up local servers.
  • Cloud instances are agnostic of the local environment. Therefore, you can use them for backup and disaster recovery.

Potentially problematic use cases

  • You are running an existing environment that is not flexible enough to migrate to a public cloud. Customizing a cloud instance to fit the specific needs of an existing deployment might not be cost-effective compared to keeping your current host platform.
  • You are operating on a tight resource budget. Maintaining your deployment in a local data center typically provides less flexibility but more control over the maximum resource costs than the public cloud.

1.3. Frequently asked questions when moving to a public cloud

Moving your RHEL workloads from a local environment to a public cloud platform might raise concerns about the changes involved. The following are the most commonly asked questions.

Will my RHEL work differently as a cloud instance than as a local virtual machine?

In most respects, RHEL instances on a public cloud platform work the same as RHEL virtual machines on a local host, such as an on-premises server. Notable exceptions include:

  • Instead of private orchestration interfaces, public cloud instances use provider-specific console interfaces for managing your cloud resources.
  • Certain features, such as nested virtualization, may not work correctly. If a specific feature is critical for your deployment, check the feature’s compatibility in advance with your chosen public cloud provider.

Will my data stay safe in a public cloud as opposed to a local server?

The data in your RHEL cloud instances is in your ownership, and your public cloud provider does not have any access to it. In addition, major cloud providers support data encryption in transit, which improves the security of data when migrating your virtual machines to the public cloud.

The general security of your RHEL public cloud instances is managed as follows:

  • Your public cloud provider is responsible for the security of the cloud hypervisor
  • Red Hat provides the security features of the RHEL guest operating systems in your instances
  • You manage the specific security settings and practices in your cloud infrastructure

What effect does my geographic region have on the functionality of RHEL public cloud instances?

You can use RHEL instances on a public cloud platform regardless of your geographical location. Therefore, you can run your instances in the same region as your on-premises server.

However, hosting your instances in a physically distant region might cause high latency when operating them. In addition, depending on the public cloud provider, certain regions may provide additional features or be more cost-efficient. Before creating your RHEL instances, review the properties of the hosting regions available for your chosen cloud provider.

1.4. Obtaining RHEL for public cloud deployments

To deploy a Red Hat Enterprise Linux (RHEL) system in a public cloud environment, you need to:

  1. Select the optimal cloud provider for your use case, based on your requirements and the current offer on the market. The cloud providers currently certified for running RHEL instances are:

    • Amazon Web Services (AWS)
    • Google Cloud Platform (GCP)
    • Microsoft Azure

  2. Create a RHEL cloud instance on your chosen cloud platform. For more information, see Methods for creating RHEL cloud instances.
  3. To keep your RHEL deployment up-to-date, use Red Hat Update Infrastructure (RHUI).

1.5. Methods for creating RHEL cloud instances

To deploy a RHEL instance on a public cloud platform, you can use one of the following methods:


Create a system image of RHEL and import it to the cloud platform.

  • To create the system image, you can use RHEL image builder, or you can build the image manually.
  • This method uses your existing RHEL subscription, and is also referred to as bring your own subscription (BYOS).
  • You pre-pay a yearly subscription, and you can use your Red Hat customer discount.
  • Your customer service is provided by Red Hat.
  • For creating multiple images effectively, you can use the cloud-init tool; see the example that follows this list.
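
    The following is a minimal cloud-init user data sketch; the user name, SSH key, and package list are hypothetical placeholders to adapt to your deployment:

    #cloud-config
    # Create an administrative user with SSH key access
    users:
      - name: admin
        groups: wheel
        sudo: ["ALL=(ALL) NOPASSWD:ALL"]
        ssh_authorized_keys:
          - ssh-ed25519 AAAA... admin@example.com
    # Install base packages and apply updates on first boot
    packages:
      - vim
    package_update: true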

Purchase a RHEL instance directly from the cloud provider marketplace.

  • You post-pay an hourly rate for using the service. Therefore, this method is also referred to as pay as you go (PAYG).
  • Your customer service is provided by the cloud platform provider.

Chapter 2. Creating and uploading AWS AMI images

To use your customized RHEL system image in the Amazon Web Services (AWS) cloud, create the system image with Image Builder by using the respective output type, configure your system for uploading the image, and upload the image to your AWS account.

2.1. Preparing to manually upload AWS AMI images

Before uploading an AWS AMI image, you must configure a system for uploading the images.

Procedure

  1. Install Python 3 and the pip tool:

    # dnf install python3 python3-pip
  2. Install the AWS command-line tools with pip:

    # pip3 install awscli
  3. Set your profile. The terminal prompts you to provide your credentials, region and output format:

    $ aws configure
    AWS Access Key ID [None]:
    AWS Secret Access Key [None]:
    Default region name [None]:
    Default output format [None]:
  4. Define a name for your bucket and create a bucket:

    $ BUCKET=bucketname
    $ aws s3 mb s3://$BUCKET

    Replace bucketname with the actual bucket name. It must be a globally unique name. As a result, your bucket is created.
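
    Because S3 bucket names share a global namespace, one way to pick a unique name is to append your AWS account ID, for example. This is a sketch, not a requirement:

    $ BUCKET=rhel-images-$(aws sts get-caller-identity --query Account --output text)
    $ aws s3 mb s3://$BUCKET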

  5. To grant permission to access the S3 bucket, create a vmimport S3 role in AWS Identity and Access Management (IAM), if you have not already done so in the past:

    1. Create a trust-policy.json file with the trust policy configuration, in the JSON format. For example:

      {
          "Version": "2022-10-17",
          "Statement": [{
              "Effect": "Allow",
              "Principal": {
                  "Service": "vmie.amazonaws.com"
              },
              "Action": "sts:AssumeRole",
              "Condition": {
                  "StringEquals": {
                      "sts:Externalid": "vmimport"
                  }
              }
          }]
      }
    2. Create a role-policy.json file with the role policy configuration, in the JSON format. For example:

      {
          "Version": "2012-10-17",
          "Statement": [{
              "Effect": "Allow",
              "Action": ["s3:GetBucketLocation", "s3:GetObject", "s3:ListBucket"],
              "Resource": ["arn:aws:s3:::bucketname", "arn:aws:s3:::bucketname/*"]
          }, {
              "Effect": "Allow",
              "Action": ["ec2:ModifySnapshotAttribute", "ec2:CopySnapshot", "ec2:RegisterImage", "ec2:Describe*"],
              "Resource": "*"
          }]
      }

      Replace bucketname with the actual name of your bucket.
    3. Create a role for your Amazon Web Services account, by using the trust-policy.json file:

      $ aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json
    4. Embed an inline policy document, by using the role-policy.json file:

      $ aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file://role-policy.json
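
    5. Optional: Verify that the role exists and has the policy attached. The following standard AWS CLI commands show the role and its inline policies:

      $ aws iam get-role --role-name vmimport
      $ aws iam list-role-policies --role-name vmimport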

2.2. Uploading an AMI image to AWS by using the CLI

You can use RHEL image builder to build AMI images and manually upload them directly to the Amazon Web Services (AWS) Cloud service provider, by using the CLI.

Prerequisites

  • You have an Access Key ID configured in the AWS IAM account manager.
  • You must have a writable S3 bucket prepared. See Creating S3 bucket.
  • You have a defined blueprint.

Procedure

  1. Using the text editor, create a configuration file with the following content:

    provider = "aws"
    [settings]
    accessKeyID = "AWS_ACCESS_KEY_ID"
    secretAccessKey = "AWS_SECRET_ACCESS_KEY"
    bucket = "AWS_BUCKET"
    region = "AWS_REGION"
    key = "IMAGE_KEY"

    Replace values in the fields with your credentials for accessKeyID, secretAccessKey, bucket, and region. The IMAGE_KEY value is the name of your VM Image to be uploaded to EC2.

  2. Save the file as CONFIGURATION-FILE.toml and close the text editor.
  3. Start the compose to upload it to AWS:

    # composer-cli compose start blueprint-name image-type image-key configuration-file.toml

    Replace:

    • blueprint-name with the name of the blueprint you created
    • image-type with the ami image type.
    • image-key with the name of your VM Image to be uploaded to EC2.
    • configuration-file.toml with the name of the configuration file of the cloud provider.

      Note

      You must have the correct AWS Identity and Access Management (IAM) settings for the bucket you are going to send your customized image to. You have to set up a policy for your bucket before you are able to upload images to it.

  4. Check the status of the image build:

    # composer-cli compose status

    After the image upload process is complete, you can see the "FINISHED" status.

Verification

To confirm that the image upload was successful:

  1. In the AWS console, access EC2 and select the correct region. The image must have the Available status, to indicate that it was successfully uploaded.
  2. On the dashboard, select your image and click Launch.

2.3. Pushing images to AWS Cloud AMI

You can create a .raw image by using RHEL image builder, and check the Upload to AWS checkbox to automatically push the output image that you create directly to the Amazon AWS Cloud AMI service provider.

Prerequisites

  • You must have root or wheel group user access to the system.
  • You have opened the RHEL image builder interface of the RHEL web console in a browser.
  • You have created a blueprint. See Creating a blueprint in the web console interface.
  • You must have an Access Key ID configured in the AWS IAM account manager.
  • You must have a writable S3 bucket prepared.

Procedure

  1. In the RHEL image builder dashboard, click the blueprint name that you previously created.
  2. Select the tab Images.
  3. Click Create Image to create your customized image.

    The Create Image window opens.

    1. From the Type drop-down menu list, select Amazon Machine Image Disk (.raw).
    2. Check the Upload to AWS checkbox to upload your image to the AWS Cloud and click Next.
    3. To authenticate your access to AWS, type your AWS access key ID and AWS secret access key in the corresponding fields. Click Next.

      Note

      You can view your AWS secret access key only when you create a new Access Key ID. If you do not know your Secret Key, generate a new Access Key ID.

    4. Type the name of the image in the Image name field, type the Amazon bucket name in the Amazon S3 bucket name field, and select the AWS region for the bucket you are going to add your customized image to. Click Next.
    5. Review the information and click Finish.

      Optionally, click Back to modify any incorrect detail.

      Note

      You must have the correct IAM settings for the bucket you are going to send your customized image to. This procedure uses the IAM Import and Export, so you have to set up a policy for your bucket before you are able to upload images to it. For more information, see Required Permissions for IAM Users.

  4. A pop-up on the upper right informs you of the saving progress. It also notifies you that the image creation has been initiated, and shows the progress of the image creation and the subsequent upload to the AWS Cloud.

    After the process is complete, you can see the Image build complete status.

  5. In a browser, access Services→EC2.

    1. On the AWS console dashboard menu, choose the correct region. The image must have the Available status, to indicate that it is uploaded.
    2. On the AWS dashboard, select your image and click Launch.
  6. A new window opens. Choose an instance type according to the resources you need to start your image. Click Review and Launch.
  7. Review your instance start details. You can edit each section if you need to make any changes. Click Launch.
  8. Before you start the instance, select a public key to access it.

    You can either use the key pair you already have or you can create a new key pair.

    Follow the next steps to create a new key pair in EC2 and attach it to the new instance.

    1. From the drop-down menu list, select Create a new key pair.
    2. Enter a name for the new key pair. It generates a new key pair.
    3. Click Download Key Pair to save the new key pair on your local system.
  9. Then, you can click Launch Instance to start your instance.

    You can check the status of the instance, which displays as Initializing.

  10. After the instance status is running, the Connect button becomes available.
  11. Click Connect. A window appears with instructions on how to connect by using SSH.

    1. Select A standalone SSH client as the preferred connection method, and open a terminal.
    2. In the location where you store your private key, ensure that the key is not publicly viewable; otherwise, SSH refuses to use it. To do so, run the command:

      $ chmod 400 <your-instance-name.pem>
    3. Connect to your instance by using its Public DNS:

      $ ssh -i <your-instance-name.pem> ec2-user@<your-instance-IP-address>
    4. Type yes to confirm that you want to continue connecting.

      As a result, you are connected to your instance over SSH.

Verification

  • Check if you are able to perform any action while connected to your instance by using SSH.

Chapter 3. Deploying a Red Hat Enterprise Linux image as an EC2 instance on Amazon Web Services

To set up a High Availability (HA) deployment of RHEL on Amazon Web Services (AWS), you can deploy EC2 instances of RHEL to a cluster on AWS.

Important

While you can create a custom VM from an ISO image, Red Hat recommends that you use the Red Hat Image Builder product to create customized images for use on specific cloud providers. With Image Builder, you can create and upload an Amazon Machine Image (AMI) in the ami format. See Composing a Customized RHEL System Image for more information.

Note

For a list of Red Hat products that you can use securely on AWS, see Red Hat on Amazon Web Services.

3.1. Red Hat Enterprise Linux image options on AWS

The following table lists image choices and notes the differences in the image options.

Table 3.1. Image options

Image option | Subscriptions | Sample scenario | Considerations

Deploy a Red Hat Gold Image.

Use your existing Red Hat subscriptions.

Select a Red Hat Gold Image on AWS. For details on Gold Images and how to access them on AWS, see the Red Hat Cloud Access Reference Guide.

The subscription includes the Red Hat product cost; you pay Amazon for all other instance costs. Red Hat provides support directly for Cloud Access images.

Deploy a custom image that you move to AWS.

Use your existing Red Hat subscriptions.

Upload your custom image, and attach your subscriptions.

The subscription includes the Red Hat product cost; you pay Amazon for all other instance costs. Red Hat provides support directly for custom RHEL images.

Deploy an existing Amazon image that includes RHEL.

The AWS EC2 images include a Red Hat product.

Select a RHEL image when you launch an instance on the AWS Management Console, or choose an image from the AWS Marketplace.

You pay Amazon on an hourly basis according to the pay-as-you-go (PAYG) model. This is also known as an on-demand image. Amazon provides support for on-demand images.

Red Hat provides updates to the images. AWS makes the updates available through the Red Hat Update Infrastructure (RHUI).

To convert an on-demand, license-included EC2 instance to a bring-your-own-license (BYOL) EC2 instance of RHEL, see Convert a license type for Linux in License Manager.

Note

You can create a custom image for AWS by using RHEL Image Builder. See Composing a Customized RHEL System Image for more information.

3.2. Understanding base images

To create a base VM from an ISO image, you can use preconfigured base images and their configuration settings.

3.2.1. Using a custom base image

To manually configure a virtual machine (VM), first create a base (starter) VM image. Then, you can modify configuration settings and add the packages the VM requires to operate on the cloud. You can make additional configuration changes for your specific application after you upload the image.

3.2.2. Virtual machine configuration settings

Cloud VMs must have the following configuration settings.

Table 3.2. VM configuration settings

Setting | Recommendation

ssh

ssh must be enabled to provide remote access to your VMs.

dhcp

The primary virtual adapter should be configured for dhcp.
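
For example, a minimal sketch for applying these settings inside the VM; the connection name eth0 is a placeholder for your primary network connection:

  # systemctl enable --now sshd
  # nmcli connection modify eth0 ipv4.method auto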

3.3. Creating a base VM from an ISO image

To create a RHEL 9 base image from an ISO image, enable your host machine for virtualization and create a RHEL virtual machine (VM).

3.3.1. Creating a base image from an ISO image

The following procedure lists the steps and initial configuration requirements for creating a custom base image from an ISO image. After you have configured the image, you can use the image as a template for creating additional VM instances.

Procedure

  1. Create and start a basic Red Hat Enterprise Linux (RHEL) VM. For instructions, see Creating virtual machines.

    1. Set the memory and vCPUs to the capacity you need for the VM, and set the virtual network interface to virtio.

      For example, the following command creates a kvmtest VM by using the rhel-9.0-aarch64-kvm.qcow2 image:

      # virt-install \
          --name kvmtest --memory 2048 --vcpus 2 \
          --disk rhel-9.0-aarch64-kvm.qcow2,bus=virtio \
          --import --os-variant=rhel9.0
    2. If you use the web console to create your VM, follow the procedure in Creating virtual machines using the web console, with these caveats:

      • Do not check Immediately Start VM.
      • Change your Memory size to your preferred settings.
      • Before you start the installation, ensure that you have changed Model under Virtual Network Interface Settings to virtio and change your vCPUs to the capacity settings you want for the VM.
  2. Review the following additional installation selections and modifications.

    • Select Minimal Install with the standard RHEL option.
    • For Installation Destination, select Custom Storage Configuration. Use the following configuration information to make your selections.

      • Ensure that at least 500 MB is allocated for /boot.
      • In the file system section, use XFS, ext4, or ext3 for both the boot and root partitions.
    • On the Installation Summary screen, select Network and Host Name. Switch Ethernet to ON.
  3. When the installation starts:

    • Create a root password.
    • Create an administrative user account.
  4. After installation is complete, reboot the VM.
  5. Log in to the root account to configure the VM.

3.4. Uploading the Red Hat Enterprise Linux image to AWS

To be able to run a RHEL instance on Amazon Web Services (AWS), you must first upload your RHEL image to AWS.

3.4.1. Installing the AWS CLI

Many of the procedures required to manage HA clusters in AWS include using the AWS CLI.

Prerequisites

  • You have created an AWS Access Key ID and an AWS Secret Access Key, and have access to them. For instructions and details, see Quickly Configuring the AWS CLI.

Procedure

  1. Install the AWS command line tools by using the dnf command.

    # dnf install awscli
  2. Use the aws --version command to verify that you installed the AWS CLI.

    $ aws --version
    aws-cli/1.19.77 Python/3.6.15 Linux/5.14.16-201.fc34.x86_64 botocore/1.20.77
  3. Configure the AWS command line client according to your AWS access details.

    $ aws configure
    AWS Access Key ID [None]:
    AWS Secret Access Key [None]:
    Default region name [None]:
    Default output format [None]:

3.4.2. Creating an S3 bucket

Importing to AWS requires an Amazon S3 bucket. An Amazon S3 bucket is an Amazon resource where you store objects. As part of the process for uploading your image, you need to create an S3 bucket and then move your image to the bucket.

Procedure

  1. Launch the Amazon S3 Console.
  2. Click Create Bucket. The Create Bucket dialog appears.
  3. In the Name and region view:

    1. Enter a Bucket name.
    2. Enter a Region.
    3. Click Next.
  4. In the Configure options view, select the desired options and click Next.
  5. In the Set permissions view, change or accept the default options and click Next.
  6. Review your bucket configuration.
  7. Click Create bucket.

    Note

    Alternatively, you can use the AWS CLI to create a bucket. For example, the aws s3 mb s3://my-new-bucket command creates an S3 bucket named my-new-bucket. See the AWS CLI Command Reference for more information about the mb command.

3.4.3. Creating the vmimport role

To be able to import a RHEL virtual machine (VM) to Amazon Web Services (AWS) by using the VM Import service, you need to create the vmimport role.

For more information, see Importing a VM as an image using VM Import/Export in the Amazon documentation.

Procedure

  1. Create a file named trust-policy.json and include the following policy. Save the file on your system and note its location.

    {
       "Version": "2012-10-17",
       "Statement": [
          {
             "Effect": "Allow",
             "Principal": { "Service": "vmie.amazonaws.com" },
             "Action": "sts:AssumeRole",
             "Condition": {
                "StringEquals":{
                   "sts:Externalid": "vmimport"
                }
             }
          }
       ]
    }
  2. Use the create-role command to create the vmimport role. Specify the full path to the location of the trust-policy.json file. Prefix file:// to the path. For example:

    $ aws iam create-role --role-name vmimport --assume-role-policy-document file:///home/sample/ImportService/trust-policy.json
  3. Create a file named role-policy.json and include the following policy. Replace s3-bucket-name with the name of your S3 bucket.

    {
       "Version":"2012-10-17",
       "Statement":[
          {
             "Effect":"Allow",
             "Action":[
                "s3:GetBucketLocation",
                "s3:GetObject",
                "s3:ListBucket"
             ],
             "Resource":[
                "arn:aws:s3:::s3-bucket-name",
                "arn:aws:s3:::s3-bucket-name/*"
             ]
          },
          {
             "Effect":"Allow",
             "Action":[
                "ec2:ModifySnapshotAttribute",
                "ec2:CopySnapshot",
                "ec2:RegisterImage",
                "ec2:Describe*"
             ],
             "Resource":"*"
          }
       ]
    }
  4. Use the put-role-policy command to attach the policy to the role you created. Specify the full path of the role-policy.json file. For example:

    $ aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file:///home/sample/ImportService/role-policy.json

3.4.4. Converting and pushing your image to S3

By using the qemu-img command, you can convert your image, so that you can push it to S3. The samples are representative; they convert an image formatted in the qcow2 file format to raw format. Amazon accepts images in OVA, VHD, VHDX, VMDK, and raw formats. See How VM Import/Export Works for more information about image formats that Amazon accepts.

Procedure

  1. Run the qemu-img command to convert your image. For example:

    # qemu-img convert -f qcow2 -O raw rhel-9.0-sample.qcow2 rhel-9.0-sample.raw
  2. Push the image to S3.

    $ aws s3 cp rhel-9.0-sample.raw s3://s3-bucket-name
    Note

    This procedure could take a few minutes. After completion, you can check that your image uploaded successfully to your S3 bucket by using the AWS S3 Console.
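
    For example, you can also list the bucket contents with the AWS CLI to confirm that the image is present:

    $ aws s3 ls s3://s3-bucket-name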

3.4.5. Importing your image as a snapshot

To launch a RHEL instance in the Amazon Elastic Compute Cloud (EC2) service, you require an Amazon Machine Image (AMI). To create an AMI of your system, you must first upload a snapshot of your RHEL system image to EC2.

Procedure

  1. Create a file to specify a bucket and path for your image. Name the file containers.json. In the sample that follows, replace s3-bucket-name with your bucket name and s3-key with your key. You can get the key for the image by using the Amazon S3 Console.

    {
        "Description": "rhel-9.0-sample.raw",
        "Format": "raw",
        "UserBucket": {
            "S3Bucket": "s3-bucket-name",
            "S3Key": "s3-key"
        }
    }
  2. Import the image as a snapshot. This example uses a public Amazon S3 file; you can use the Amazon S3 Console to change permissions settings on your bucket.

    $ aws ec2 import-snapshot --disk-container file://containers.json

    The terminal displays a message such as the following. Note the ImportTaskID within the message.

    {
        "SnapshotTaskDetail": {
            "Status": "active",
            "Format": "RAW",
            "DiskImageSize": 0.0,
            "UserBucket": {
                "S3Bucket": "s3-bucket-name",
                "S3Key": "rhel-9.0-sample.raw"
            },
            "Progress": "3",
            "StatusMessage": "pending"
        },
        "ImportTaskId": "import-snap-06cea01fa0f1166a8"
    }
  3. Track the progress of the import by using the describe-import-snapshot-tasks command. Include the ImportTaskID.

    $ aws ec2 describe-import-snapshot-tasks --import-task-ids import-snap-06cea01fa0f1166a8

    The returned message shows the current status of the task. When complete, Status shows completed. Within the status, note the snapshot ID.
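
    For example, to print only the fields that you need, you can add a --query filter to the same command. This is a sketch; after the import finishes, the SnapshotId field contains the snapshot ID to note:

    $ aws ec2 describe-import-snapshot-tasks --import-task-ids import-snap-06cea01fa0f1166a8 --query 'ImportSnapshotTasks[0].SnapshotTaskDetail.[Status,SnapshotId]'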

3.4.6. Creating an AMI from the uploaded snapshot

To launch a RHEL instance in the Amazon Elastic Compute Cloud (EC2) service, you require an Amazon Machine Image (AMI). To create an AMI of your system, you can use a RHEL system snapshot that you previously uploaded.

Procedure

  1. Go to the AWS EC2 Dashboard.
  2. Under Elastic Block Store, select Snapshots.
  3. Search for your snapshot ID (for example, snap-0e718930bd72bcda0).
  4. Right-click on the snapshot and select Create image.
  5. Name your image.
  6. Under Virtualization type, choose Hardware-assisted virtualization.
  7. Click Create. In the note regarding image creation, there is a link to your image.
  8. Click on the image link. Your image shows up under Images→AMIs.

    Note

    Alternatively, you can use the AWS CLI register-image command to create an AMI from a snapshot. See register-image for more information. An example follows.

    $ aws ec2 register-image \
        --name "myimagename" --description "myimagedescription" --architecture x86_64 \
        --virtualization-type hvm --root-device-name "/dev/sda1" --ena-support \
        --block-device-mappings "{\"DeviceName\": \"/dev/sda1\",\"Ebs\": {\"SnapshotId\": \"snap-0ce7f009b69ab274d\"}}"

    You must specify the root device volume /dev/sda1 as your root-device-name. For conceptual information about device mapping for AWS, see Example block device mapping.

3.4.7. Launching an instance from the AMI

To launch and configure an Amazon Elastic Compute Cloud (EC2) instance, use an Amazon Machine Image (AMI).

Procedure

  1. From the AWS EC2 Dashboard, select Images and then AMIs.
  2. Right-click on your image and select Launch.
  3. Choose an Instance Type that meets or exceeds the requirements of your workload.

    See Amazon EC2 Instance Types for information about instance types.

  4. Click Next: Configure Instance Details.

    1. Enter the Number of instances you want to create.
    2. For Network, select the VPC you created when setting up your AWS environment. Select a subnet for the instance or create a new subnet.
    3. Select Enable for Auto-assign Public IP.

      Note

      These are the minimum configuration options necessary to create a basic instance. Review additional options based on your application requirements.

  5. Click Next: Add Storage. Verify that the default storage is sufficient.
  6. Click Next: Add Tags.

    Note

    Tags can help you manage your AWS resources. See Tagging Your Amazon EC2 Resources for information about tagging.

  7. Click Next: Configure Security Group. Select the security group you created when setting up your AWS environment.
  8. Click Review and Launch. Verify your selections.
  9. Click Launch. You are prompted to select an existing key pair or create a new key pair. Select the key pair you created when setting up your AWS environment.

    Note

    Verify that the permissions for your private key are correct. Use the command options chmod 400 <keyname>.pem to change the permissions, if necessary.

  10. Click Launch Instances.
  11. Click View Instances. You can name the instance(s).

    You can now launch an SSH session to your instance(s) by selecting an instance and clicking Connect. Use the example provided for A standalone SSH client.

    Note

    Alternatively, you can launch an instance by using the AWS CLI. See Launching, Listing, and Terminating Amazon EC2 Instances in the Amazon documentation for more information.

3.4.8. Attaching Red Hat subscriptions

Using the subscription-manager command, you can register and attach your Red Hat subscription to a RHEL instance.

Prerequisites

  • You must have enabled your subscriptions.

Procedure

  1. Register your system.

    # subscription-manager register
  2. Attach your subscriptions.
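
    For example, if your account uses entitlement-based subscriptions, you can attach a matching subscription automatically. This is one common option; adapt it to your subscription setup:

    # subscription-manager attach --auto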

  3. Optional: To collect various system metrics about the instance in the Red Hat Hybrid Cloud Console, you can register the instance with Red Hat Lightspeed.

    # insights-client register --display-name <display_name_value>

    For information about further configuration of Red Hat Lightspeed, see Client Configuration Guide for Red Hat Lightspeed.

3.5. Setting up automatic registration on AWS Gold Images

To deploy Red Hat Enterprise Linux (RHEL) virtual machines (VMs) on Amazon Web Services (AWS), you can set up RHEL Gold Images to automatically register with the Red Hat Subscription Manager (RHSM).

Prerequisites

  • You have downloaded the latest RHEL Gold Image for AWS. For instructions, see Using Gold Images on AWS.

    Note

    You can attach an AWS account to only a single Red Hat account at a time. Therefore, ensure that no other users require access to the AWS account before attaching it to your Red Hat one.

Procedure

  1. Upload the Gold Image to AWS. For instructions, see Uploading the Red Hat Enterprise Linux image to AWS.
  2. Create VMs by using the uploaded image. They are automatically registered with RHSM.

Verification

  • In a RHEL VM created using the above instructions, verify the system is registered with RHSM by executing the subscription-manager identity command. On a successfully registered system, this displays the UUID of the system. For example:

    # subscription-manager identity
    system identity: fdc46662-c536-43fb-a18a-bbcb283102b7
    name: 192.168.122.222
    org name: 6340056
    org ID: 6340056

Chapter 4. Configuring a Red Hat High Availability cluster on AWS

To redistribute workloads automatically in case of node failure, you can create Red Hat High Availability (HA) clusters of RHEL instances hosted on Amazon Web Services (AWS).

Creating RHEL HA clusters on AWS is similar to creating HA clusters in non-cloud environments. For details on image options for AWS, see Red Hat Enterprise Linux Image Options on AWS.

4.1. The benefits of using high-availability clusters on public cloud platforms

A high-availability (HA) cluster is a set of computers, also known as nodes, linked together to run a specific workload. The purpose of HA clusters is to offer redundancy in case of a hardware or software failure. If a node in the HA cluster fails, the Pacemaker cluster resource manager distributes the workload to other nodes. No noticeable downtime occurs in the services that are running on the cluster.

You can also run HA clusters on public cloud platforms. In this case, you would use virtual machine (VM) instances in the cloud as the individual cluster nodes. Using HA clusters on a public cloud platform has the following benefits:

  • Improved availability: In case of a VM failure, the workload is quickly redistributed to other nodes, so running services are not disrupted.
  • Scalability: You can start additional nodes when demand is high and stop them when demand is low.
  • Cost-effectiveness: With the pay-as-you-go pricing, you pay only for nodes that are running.
  • Simplified management: Some public cloud platforms offer management interfaces to make configuring HA clusters easier.

To enable HA on your Red Hat Enterprise Linux (RHEL) systems, Red Hat offers a High Availability Add-On. The High Availability Add-On provides all necessary components for creating HA clusters on RHEL systems. The components include high availability service management and cluster administration tools.

4.2. Creating the AWS Access Key and AWS Secret Access Key

Before installing the AWS CLI, you must create an AWS Access Key and AWS Secret Access Key. The fencing and resource agent APIs use the AWS Access Key and Secret Access Key to connect to each node in the cluster.

Procedure

  1. Launch the AWS Console.
  2. Click on your AWS Account ID to display the drop-down menu and select My Security Credentials.
  3. Click Users.
  4. Select the user and open the Summary screen.
  5. Click the Security credentials tab.
  6. Click Create access key.
  7. Download the .csv file (or save both keys). You need to enter these keys when creating the fencing device.

4.3. Creating an HA EC2 instance

To ensure High Availability (HA) for your Red Hat Enterprise Linux (RHEL) cluster nodes and applications in Amazon Web Services (AWS), you can create HA EC2 instances configured as cluster nodes.

For details about obtaining RHEL images, see Image options on AWS.

Procedure

  1. From the AWS EC2 Dashboard, select Images and then AMIs.
  2. Right-click the image you want to use and select Launch.
  3. Choose an Instance Type that meets or exceeds the requirements of your workload. Depending on your HA application, each instance requires different capacity.

    See Amazon EC2 Instance Types for information about instance types.

  4. Click Next: Configure Instance Details.

    1. Enter the Number of instances you want to create for the cluster. This example procedure uses three cluster nodes.

      Note

      Do not launch into an Auto Scaling Group.

    2. For Network, select the virtual private cloud (VPC) you created in Setting up the AWS environment. Select a subnet for the instance or create a new subnet.
    3. Select Enable for Auto-assign Public IP. These are the minimum selections you need to make for Configure Instance Details. Depending on your specific HA application, you can make additional selections.

      Note

      These are the minimum configuration options necessary to create a basic instance. Review additional options based on your HA application requirements.

  5. Click Next: Add Storage and verify that you have the required storage for your HA application. You do not need to change these settings unless your HA application requires other storage options.
  6. Click Next: Configure Security Group. Select the existing security group you created in Setting up the AWS environment.
  7. Click Review and Launch and verify your selections.
  8. Click Launch. Select an existing key pair or create a new key pair. For selecting a key pair, see Setting up the AWS environment.
  9. Click Launch Instances.
  10. Click View Instances. You can name the instance(s).

    Note

    Also, you can launch instances by using the AWS CLI. See Launching, Listing, and Terminating Amazon EC2 Instances in the Amazon documentation for more information.

4.4. Configuring the private key

Before using the private SSH key file (.pem) for SSH communication, you must configure the permissions of the private key.

Procedure

  1. Move the key file from the Downloads directory to your Home directory or to your ~/.ssh directory.
  2. Change the permissions of the key file so that only the key owner can read it:

    # chmod 400 KeyName.pem

4.5. Connecting to an EC2 instance

You can connect to an EC2 instance by using the AWS Console. Perform the following steps on each node in the cluster.

Procedure

  1. Launch the AWS Console and select the EC2 instance.
  2. Click Connect and select A standalone SSH client.
  3. From your SSH terminal session, connect to the instance by using the AWS example provided in the pop-up window. Add the correct path to your KeyName.pem file if the path is not shown in the example.

4.6. Installing the High Availability packages and agents

Before configuring a Red Hat High Availability cluster on AWS, you must install the High Availability packages and agents on each of the nodes.

Procedure

  1. Remove the AWS Red Hat Update Infrastructure (RHUI) client.

    $ sudo -i
    # dnf -y remove rh-amazon-rhui-client*
  2. Register the VM with Red Hat.

    # subscription-manager register
  3. Disable all repositories.

    # subscription-manager repos --disable=*
  4. Enable the RHEL 9 Server HA repositories.

    # subscription-manager repos --enable=rhel-9-for-x86_64-highavailability-rpms
  5. Update the RHEL AWS instance.

    # dnf update -y
  6. Install the Red Hat High Availability Add-On software packages, along with the AWS fencing agent from the High Availability channel.

    # dnf install pcs pacemaker fence-agents-aws
  7. The user hacluster was created during the pcs and pacemaker installation in the previous step. Create a password for hacluster on all cluster nodes. Use the same password for all nodes.

    # passwd hacluster
  8. Add the high availability service to the RHEL Firewall if firewalld.service is installed.

    # firewall-cmd --permanent --add-service=high-availability
    # firewall-cmd --reload
  9. Start the pcs service and enable it to start on boot.

    # systemctl start pcsd.service
    # systemctl enable pcsd.service
  10. Edit /etc/hosts and add RHEL host names and internal IP addresses. For more information, see the Red Hat Knowledgebase solution How should the /etc/hosts file be set up on RHEL cluster nodes?.
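
    For example, using the node names from this chapter with hypothetical internal IP addresses, the entries might look as follows:

    10.0.0.46 node01.example.com node01
    10.0.0.48 node02.example.com node02
    10.0.0.58 node03.example.com node03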

Verification

  • Ensure the pcs service is running.

    # systemctl status pcsd.service
    
    pcsd.service - PCS GUI and remote configuration interface
    Loaded: loaded (/usr/lib/systemd/system/pcsd.service; enabled; vendor preset: disabled)
    Active: active (running) since Thu 2018-03-01 14:53:28 UTC; 28min ago
    Docs: man:pcsd(8)
    man:pcs(8)
    Main PID: 5437 (pcsd)
    CGroup: /system.slice/pcsd.service
         └─5437 /usr/bin/ruby /usr/lib/pcsd/pcsd > /dev/null &
    Mar 01 14:53:27 ip-10-0-0-48.ec2.internal systemd[1]: Starting PCS GUI and remote configuration interface…
    Mar 01 14:53:28 ip-10-0-0-48.ec2.internal systemd[1]: Started PCS GUI and remote configuration interface.

4.7. Creating a cluster

Create a Red Hat High Availability cluster on a public cloud platform by configuring and initializing the cluster nodes.

Procedure

  1. On one of the nodes, enter the following command to authenticate the pcs user hacluster. In the command, specify the name of each node in the cluster.

    # pcs host auth <hostname1> <hostname2> <hostname3>

    Example:

    [root@node01 clouduser]# pcs host auth node01 node02 node03
    Username: hacluster
    Password:
    node01: Authorized
    node02: Authorized
    node03: Authorized
  2. Create the cluster.

    # pcs cluster setup <cluster_name> <hostname1> <hostname2> <hostname3>

    Example:

    [root@node01 clouduser]# pcs cluster setup new_cluster node01 node02 node03
    
    [...]
    
    Synchronizing pcsd certificates on nodes node01, node02, node03...
    node02: Success
    node03: Success
    node01: Success
    Restarting pcsd on the nodes in order to reload the certificates...
    node02: Success
    node03: Success
    node01: Success

Verification

  1. Enable the cluster.

    [root@node01 clouduser]# pcs cluster enable --all
    node02: Cluster Enabled
    node03: Cluster Enabled
    node01: Cluster Enabled
  2. Start the cluster.

    [root@node01 clouduser]# pcs cluster start --all
    node02: Starting Cluster...
    node03: Starting Cluster...
    node01: Starting Cluster...

4.8. Configuring fencing on a RHEL AWS cluster

Fencing configuration automatically isolates a malfunctioning node on your Red Hat Enterprise Linux (RHEL) Amazon Web Services (AWS) cluster to prevent the node from compromising functionality and consuming the resources of the cluster.

To configure fencing on an AWS cluster, use one of the following methods:

  • A standard procedure for default configuration.
  • An alternate configuration procedure for more advanced configuration, focused on automation.

4.8.1. Configuring fencing with default settings

Fencing isolates malfunctioning or unresponsive nodes to protect data integrity and cluster availability, by using Amazon Web Services (AWS) resources and cluster management tools for automated node management. The following is a standard approach for configuring fencing with default settings in a Red Hat Enterprise Linux (RHEL) high availability cluster on AWS.

Procedure

  1. Enter the following AWS metadata query to get the Instance ID for each node. You need these IDs to configure the fence device. See Instance Metadata and User Data for additional information.

    # echo $(curl -s http://169.254.169.254/latest/meta-data/instance-id)

    Example:

    [root@ip-10-0-0-48 ~]# echo $(curl -s http://169.254.169.254/latest/meta-data/instance-id)
    i-07f1ac63af0ec0ac6
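
      Note

      If your instances enforce IMDSv2, the unauthenticated metadata query fails. In that case, obtain a session token first and pass it in the request header. The following is a sketch of the standard IMDSv2 calls:

      # TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
      # curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/instance-id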
  2. Enter the following command to configure the fence device. Use the pcmk_host_map option to map each RHEL host name to its Instance ID. Use the AWS Access Key and AWS Secret Access Key that you set up earlier.

    # pcs stonith \
        create <name> fence_aws access_key=<access-key> secret_key=<secret-access-key> \
        region=<region> pcmk_host_map="rhel-hostname-1:Instance-ID-1;rhel-hostname-2:Instance-ID-2;rhel-hostname-3:Instance-ID-3" \
        power_timeout=240 pcmk_reboot_timeout=480 pcmk_reboot_retries=4

    Example:

    [root@ip-10-0-0-48 ~]# pcs stonith \
    create clusterfence fence_aws access_key=AKIAI123456MRMJA secret_key=a75EYIG4RVL3hdsdAslK7koQ8dzaDyn5yoIZ/ \
    region=us-east-1 pcmk_host_map="ip-10-0-0-48:i-07f1ac63af0ec0ac6;ip-10-0-0-46:i-063fc5fe93b4167b2;ip-10-0-0-58:i-08bd39eb03a6fd2c7" \
    power_timeout=240 pcmk_reboot_timeout=480 pcmk_reboot_retries=4
  3. To ensure immediate and complete fencing, disable ACPI Soft-Off on all cluster nodes. For information about disabling ACPI Soft-Off, see Configuring ACPI for use with integrated fence devices.

4.8.2. Configuring fencing for a VPC cluster

The following is an alternate approach for configuring fencing for a virtual private cloud (VPC) cluster in a Red Hat Enterprise Linux (RHEL) high availability cluster on Amazon Web Services (AWS). Fencing isolates malfunctioning or unresponsive nodes to maintain data integrity and cluster availability, by using AWS resources and cluster management tools for automated node management.

Procedure

  1. Obtain the VPC ID of the cluster.

    $ aws ec2 describe-vpcs --output text --filters "Name=tag:Name,Values=<clustername>-vpc" --query 'Vpcs[*].VpcId'
    vpc-06bc10ac8f6006664
  2. By using the VPC ID of the cluster, obtain the VPC instances.

    $ aws ec2 describe-instances --output text --filters "Name=vpc-id,Values=vpc-06bc10ac8f6006664" --query 'Reservations[*].Instances[*].{Name:Tags[?Key==`Name`]|[0].Value,Instance:InstanceId}' | grep "\-node[a-c]"
    
    i-0b02af8927a895137     <clustername>-nodea-vm
    i-0cceb4ba8ab743b69     <clustername>-nodeb-vm
    i-0502291ab38c762a5     <clustername>-nodec-vm
  3. Use the obtained instance IDs to configure fencing on each node on the cluster. For example, to configure a fencing device on all nodes in a cluster:

    [root@nodea ~]# CLUSTER=<clustername> && pcs stonith create fence${CLUSTER} fence_aws access_key=XXXXXXXXXXXXXXXXXXXX pcmk_host_map=$(for NODE \
    in node{a..c}; do ssh ${NODE} "echo -n \${HOSTNAME}:\$(curl -s http://169.254.169.254/latest/meta-data/instance-id)\;"; done) \
    pcmk_reboot_retries=4 pcmk_reboot_timeout=480 power_timeout=240 region=xx-xxxx-x secret_key=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

    For information about specific parameters for creating fencing devices, see the fence_aws man page or the Configuring and managing high availability clusters guide.

  4. To ensure immediate and complete fencing, disable ACPI Soft-Off on all cluster nodes. For information about disabling ACPI Soft-Off, see Disabling ACPI for use with integrated fence device.

Verification

  1. Display the configured fencing devices and their parameters on your nodes:

    [root@nodea ~]# pcs stonith config fence${CLUSTER}
    
    Resource: fence<clustername> (class=stonith type=fence_aws)
    Attributes: access_key=XXXXXXXXXXXXXXXXXXXX pcmk_host_map=nodea:i-0b02af8927a895137;nodeb:i-0cceb4ba8ab743b69;nodec:i-0502291ab38c762a5;
    pcmk_reboot_retries=4 pcmk_reboot_timeout=480 power_timeout=240 region=xx-xxxx-x secret_key=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    Operations: monitor interval=60s (<clustername>-monitor-interval-60s)
  2. Test the fencing agent for one of the cluster nodes.

    # pcs stonith fence <awsnodename>
    Note

    The command response might take several minutes to display. If you check the active terminal session for the fencing node, you might see the connection to the terminal drop immediately after you enter the fence command.

    Example:

    [root@ip-10-0-0-48 ~]# pcs stonith fence ip-10-0-0-58
    
    Node: ip-10-0-0-58 fenced
  3. Check the status of the fenced node:

    # pcs status

    Example:

    [root@ip-10-0-0-48 ~]# pcs status
    
    Cluster name: newcluster
    Stack: corosync
    Current DC: ip-10-0-0-46 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum
    Last updated: Fri Mar  2 19:55:41 2018
    Last change: Fri Mar  2 19:24:59 2018 by root via cibadmin on ip-10-0-0-46
    
    3 nodes configured
    1 resource configured
    
    Online: [ ip-10-0-0-46 ip-10-0-0-48 ]
    OFFLINE: [ ip-10-0-0-58 ]
    
    Full list of resources:
    clusterfence  (stonith:fence_aws):    Started ip-10-0-0-46
    
    Daemon Status:
    corosync: active/disabled
    pacemaker: active/disabled
    pcsd: active/enabled
  4. Start the fenced node from the earlier step:

    # pcs cluster start <awshostname>
  5. Check the status to verify the node started.

    # pcs status

    Example:

    [root@ip-10-0-0-48 ~]# pcs status
    
    Cluster name: newcluster
    Stack: corosync
    Current DC: ip-10-0-0-46 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum
    Last updated: Fri Mar  2 20:01:31 2018
    Last change: Fri Mar  2 19:24:59 2018 by root via cibadmin on ip-10-0-0-48
    
    3 nodes configured
    1 resource configured
    
    Online: [ ip-10-0-0-46 ip-10-0-0-48 ip-10-0-0-58 ]
    
    Full list of resources:
    
      clusterfence  (stonith:fence_aws):    Started ip-10-0-0-46
    
    Daemon Status:
      corosync: active/disabled
      pacemaker: active/disabled
      pcsd: active/enabled

4.9. Installing the AWS CLI on cluster nodes

Earlier, you installed the AWS CLI on your host system. You need to install the AWS CLI on cluster nodes to configure the network resource agents. The following steps are applicable to each node in the cluster.

Procedure

  1. Install and configure the AWS CLI on each node in the cluster, as described in Installing the AWS CLI.
  2. Verify that the AWS CLI is configured correctly. The instance IDs and instance names should be displayed:

    Example:

    [root@ip-10-0-0-48 ~]# aws ec2 describe-instances --output text --query 'Reservations[*].Instances[*].[InstanceId,Tags[?Key==`Name`].Value]'
    
    i-07f1ac63af0ec0ac6  ip-10-0-0-48
    i-063fc5fe93b4167b2  ip-10-0-0-46
    i-08bd39eb03a6fd2c7  ip-10-0-0-58

4.10. Setting up IP address resources on AWS

Clients use IP addresses to access the resources that the cluster manages over the network. To ensure that these clients can still reach the resources after a failover, the cluster must include IP address resources, which use specific network resource agents.

The RHEL HA Add-On provides a set of resource agents, which create IP address resources to manage various types of IP addresses on AWS. To decide which resource agent to configure, consider the type of AWS IP addresses that you want the HA cluster to manage:

  • To manage an IP address exposed to the internet, use the awseip network resource.
  • To manage a private IP address limited to a single AWS Availability Zone (AZ), use the awsvip and IPaddr2 network resources.
  • To manage an IP address that can move across multiple AWS AZs within the same AWS region, use the aws-vpc-move-ip network resource.
Note

If the HA cluster does not manage any IP addresses, the resource agents for managing virtual IP addresses on AWS are not required. If you need further guidance for your specific deployment, consult with your AWS provider.

4.10.1. Creating an IP address resource to manage an IP address exposed to the internet

Configure an Amazon Web Services (AWS) Secondary Elastic IP Address (awseip) resource to manage an elastic IP address for public-facing internet connections on Red Hat Enterprise Linux (RHEL) High Availability (HA) cluster nodes.

Procedure

  1. Install the resource-agents-cloud package.

    # dnf install resource-agents-cloud
  2. Using the AWS command-line interface (CLI), create an elastic IP address.

    [root@ip-10-0-0-48 ~]# aws ec2 allocate-address --domain vpc --output text
    
    eipalloc-4c4a2c45   vpc 35.169.153.122
  3. Optional: Display the description of awseip. This shows the options and default operations for this agent.

    # pcs resource describe awseip
  4. Create a resource group that includes the secondary elastic IP address and the allocation ID that you specified earlier by using the AWS CLI:

    # pcs resource create <resource_id> awseip elastic_ip=<elastic_ip_address> allocation_id=<elastic_ip_association_id> --group <resource_group_name>
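
    For example, using the elastic IP address and allocation ID from the earlier step, and the networking-group resource group shown in the verification that follows; substitute your own values:

    [root@ip-10-0-0-48 ~]# pcs resource create elastic awseip elastic_ip=35.169.153.122 allocation_id=eipalloc-4c4a2c45 --group networking-group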

Verification

  1. Display the status of the cluster to verify that the required resources are running.

    # pcs status

    The following output shows an example running cluster where the vip and elastic resources are part of the networking-group resource group:

    [root@ip-10-0-0-58 ~]# pcs status
    
    Cluster name: newcluster
    Stack: corosync
    Current DC: ip-10-0-0-58 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum
    Last updated: Mon Mar  5 16:27:55 2018
    Last change: Mon Mar  5 15:57:51 2018 by root via cibadmin on ip-10-0-0-46
    
    3 nodes configured
    4 resources configured
    
    Online: [ ip-10-0-0-46 ip-10-0-0-48 ip-10-0-0-58 ]
    
    Full list of resources:
    
     clusterfence   (stonith:fence_aws):    Started ip-10-0-0-46
     Resource Group: networking-group
         vip (ocf::heartbeat:IPaddr2): Started ip-10-0-0-48
         elastic (ocf::heartbeat:awseip): Started ip-10-0-0-48
    
    Daemon Status:
      corosync: active/disabled
      pacemaker: active/disabled
      pcsd: active/enabled
  2. Launch an SSH session from your local workstation to the elastic IP address that you created earlier:

    $ ssh -l <user_name> -i ~/.ssh/<keyname>.pem <elastic_ip_address>

    Example:

    $ ssh -l ec2-user -i ~/.ssh/cluster-admin.pem 35.169.153.122

  3. Verify that the host you connected to over SSH is the host associated with the elastic resource you created.

4.10.2. Creating an IP address resource to manage a private IP address limited to a single Availability Zone

Configure an Amazon Web Services (AWS) secondary private IP address (awsvip) resource on a node of a Red Hat High Availability (HA) cluster. The awsvip agent manages a private IP address that is limited to a single availability zone.

HA clients can connect to and access the Red Hat Enterprise Linux (RHEL) node through this private IP address.

Procedure

  1. Install the resource-agents-cloud package.

    # dnf install resource-agents-cloud
  2. Optional: View the awsvip description. This shows the options and default operations for this agent.

    # pcs resource describe awsvip
  3. Create a secondary private IP address by using an unused private IP address in the virtual private cloud (VPC) classless inter-domain routing (CIDR) block. In addition, create a resource group for the secondary private IP address:

    # pcs resource create <example_resource_id> awsvip secondary_private_ip=<example_unused_private_IP_address> --group <example_group_name>

    Example:

    [root@ip-10-0-0-48 ~]# pcs resource create privip awsvip secondary_private_ip=10.0.0.68 --group networking-group
  4. Create a virtual IP resource. This is a VPC IP address that can be rapidly remapped from the fenced node to the failover node, masking the failure of the fenced node within the subnet. Ensure that the virtual IP belongs to the same resource group as the Secondary Private IP address you created in the earlier step:

    # pcs resource create <example_resource_id> IPaddr2 ip=<example_secondary_private_IP> --group <example_group_name>

    Example:

    [root@ip-10-0-0-48 ~]# pcs resource create vip IPaddr2 ip=10.0.0.68 --group networking-group

Verification

  • Display the status of the cluster to verify that the required resources are running.

    # pcs status

    The following output shows an example running cluster where the vip and privip resources are active in the networking-group resource group:

    [root@ip-10-0-0-48 ~]# pcs status
    
    Cluster name: newcluster
    Stack: corosync
    Current DC: ip-10-0-0-46 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum
    Last updated: Fri Mar  2 22:34:24 2018
    Last change: Fri Mar  2 22:14:58 2018 by root via cibadmin on ip-10-0-0-46
    
    3 nodes configured
    3 resources configured
    
    Online: [ ip-10-0-0-46 ip-10-0-0-48 ip-10-0-0-58 ]
    
    Full list of resources:
    
     clusterfence   (stonith:fence_aws):    Started ip-10-0-0-46
     Resource Group: networking-group
         privip (ocf::heartbeat:awsvip): Started ip-10-0-0-48
         vip (ocf::heartbeat:IPaddr2): Started ip-10-0-0-58
    
    Daemon Status:
      corosync: active/disabled
      pacemaker: active/disabled
      pcsd: active/enabled

Configure an aws-vpc-move-ip resource to use an overlay IP address. You can use this resource to ensure that high-availability (HA) clients on Amazon Web Services (AWS) can access a Red Hat Enterprise Linux (RHEL) node that can be moved across multiple AWS Availability Zones within the same AWS region.

Prerequisites

Procedure

  1. Install the resource-agents-cloud package.

    # dnf install resource-agents-cloud
  2. Optional: View the aws-vpc-move-ip description. This shows the options and default operations for this agent.

    # pcs resource describe aws-vpc-move-ip
  3. Set up an OverlayIPAgent IAM policy for the IAM user.

    1. In the AWS console, navigate to Services → IAM → Policies → Create policy, and create a policy named OverlayIPAgent.
    2. Enter the following configuration, and change the <region>, <account-id>, and <ClusterRouteTableID> values to correspond with your cluster:

      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Sid": "Stmt1424870324000",
                  "Effect": "Allow",
                  "Action":  "ec2:DescribeRouteTables",
                  "Resource": "*"
              },
              {
                  "Sid": "Stmt1424860166260",
                  "Action": [
                      "ec2:CreateRoute",
                      "ec2:ReplaceRoute"
                  ],
                  "Effect": "Allow",
                  "Resource": "arn:aws:ec2:<region>:<account-id>:route-table/<ClusterRouteTableID>"
              }
          ]
      }
  4. In the AWS console, disable the Source/Destination Check function on all nodes in the cluster.

    To do this, right-click each node and select Networking → Change Source/Destination Checks. In the pop-up message that appears, click Yes, Disable.

  5. Add a route to the route table of the cluster. To do so, use the following command on one node in the cluster:

    # aws ec2 create-route --route-table-id <ClusterRouteTableID> --destination-cidr-block <NewCIDRblockIP/NetMask> --instance-id <ClusterNodeID>

    In the command, replace the values as follows (an example follows this list):

    • ClusterRouteTableID: The route table ID for the existing cluster VPC route table.
    • NewCIDRblockIP/NetMask: A new IP address and netmask outside of the VPC classless inter-domain routing (CIDR) block. For example, if the VPC CIDR block is 172.31.0.0/16, the new IP address/netmask can be 192.168.0.15/32.
    • ClusterNodeID: The instance ID for another node in the cluster.
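
    For example, with the sample IP address and netmask from above and hypothetical route table and instance IDs, the command might look as follows:

    # aws ec2 create-route --route-table-id rtb-3be5d25c --destination-cidr-block 192.168.0.15/32 --instance-id i-0eb803361c2c887f2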
  6. On one of the nodes in the cluster, create an aws-vpc-move-ip resource that uses a free IP address that is accessible to the client. The following example creates a resource named vpcip that uses IP 192.168.0.15.

    # pcs resource create vpcip aws-vpc-move-ip ip=192.168.0.15 interface=eth0 routing_table=<ClusterRouteTableID>
  7. On all nodes in the cluster, edit the /etc/hosts file, and add a line with the IP address of the newly created resource. For example:

    192.168.0.15 vpcip

Verification

  1. Test the failover ability of the new aws-vpc-move-ip resource:

    # pcs resource move vpcip
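
    The move creates a temporary location constraint on the resource. You can display the constraints that currently apply to the cluster:

    # pcs constraint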
  2. If the failover succeeded, remove the automatically created constraint after the move of the vpcip resource:

    # pcs resource clear vpcip

4.11. Configuring shared block storage

To create storage resources, you can configure shared block storage for a Red Hat High Availability cluster by using Amazon Elastic Block Store (EBS) Multi-Attach volumes.

Prerequisites

Procedure

  1. Create a shared block volume by using the AWS command create-volume.

    $ aws ec2 create-volume --availability-zone <availability_zone> --no-encrypted --size 1024 --volume-type io1 --iops 51200 --multi-attach-enabled

    For example, the following command creates a volume in the us-east-1a availability zone.

    $ aws ec2 create-volume --availability-zone us-east-1a --no-encrypted --size 1024 --volume-type io1 --iops 51200 --multi-attach-enabled
    
    {
        "AvailabilityZone": "us-east-1a",
        "CreateTime": "2020-08-27T19:16:42.000Z",
        "Encrypted": false,
        "Size": 1024,
        "SnapshotId": "",
        "State": "creating",
        "VolumeId": "vol-042a5652867304f09",
        "Iops": 51200,
        "Tags": [ ],
        "VolumeType": "io1"
    }
    Note

    You need the VolumeId in the next step.
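
    Optionally, to capture the volume ID for later use, you can add the AWS CLI global --query and --output options to the same command. This is a convenience sketch; all other options match the example above:

    $ VOLUME_ID=$(aws ec2 create-volume --availability-zone us-east-1a --no-encrypted --size 1024 --volume-type io1 --iops 51200 --multi-attach-enabled --query 'VolumeId' --output text)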

  2. For each instance in your cluster, attach a shared block volume by using the AWS command attach-volume. Use your <instance_id> and <volume_id>.

    $ aws ec2 attach-volume --device /dev/xvdd --instance-id <instance_id> --volume-id <volume_id>

    For example, the following command attaches a shared block volume vol-042a5652867304f09 to instance i-0eb803361c2c887f2.

    $ aws ec2 attach-volume --device /dev/xvdd --instance-id i-0eb803361c2c887f2 --volume-id vol-042a5652867304f09
    
    {
        "AttachTime": "2020-08-27T19:26:16.086Z",
        "Device": "/dev/xvdd",
        "InstanceId": "i-0eb803361c2c887f2",
        "State": "attaching",
        "VolumeId": "vol-042a5652867304f09"
    }

Verification

  1. For each instance in your cluster, verify that the block device is available by using the ssh command with your instance <ip_address>.

    # ssh <ip_address> "hostname ; lsblk -d | grep ' 1T '"

    For example, the following command lists details including the hostname and block device for the instance IP 198.51.100.3.

    # ssh 198.51.100.3 "hostname ; lsblk -d | grep ' 1T '"
    
    nodea
    nvme2n1 259:1    0   1T  0 disk
  2. Use the ssh command to verify that each instance in your cluster uses the same shared disk.

    # ssh <ip_address> "hostname ; lsblk -d | grep ' 1T ' | awk '{print \$1}' | xargs -i udevadm info --query=all --name=/dev/{} | grep '^E: ID_SERIAL='"

    For example, the following command lists details including the hostname and shared disk volume ID for the instance IP address 198.51.100.3.

    # ssh 198.51.100.3 "hostname ; lsblk -d | grep ' 1T ' | awk '{print \$1}' | xargs -i udevadm info --query=all --name=/dev/{} | grep '^E: ID_SERIAL='"
    
    nodea
    E: ID_SERIAL=Amazon Elastic Block Store_vol0fa5342e7aedf09f7

When running RHEL on Amazon Web Services (AWS), you can use the OpenTelemetry (OTel) framework to maintain and debug your RHEL instances.

RHEL includes the OTel Collector service, which you can use to manage logs. The OTel Collector gathers, processes, transforms, and exports logs to and from various formats and external back ends. You can also use the OTel Collector to aggregate the collected data and generate metrics useful for analytics services.

5.1. How the OpenTelemetry Collector works

For RHEL on AWS, you can configure the OTel Collector service to receive, process, and export logs between the RHEL instance and the AWS telemetry analytics service to automatically manage telemetry data on your RHEL instance. The OTel Collector is a component of the OTel ecosystem, and has three stages in its workflow: a receiver, a processor, and an exporter.

You can configure the workflow for any of these components in a YAML file based on your specific use case. Typically, the OTel Collector works as follows, as the minimal sketch after this list illustrates:

  1. A receiver collects telemetry data from data sources, such as applications and services.
  2. After the receiver ingests data, the data passes to a processing phase, in which a chain of processors can be defined to transform the data.
  3. The exporter sends the telemetry data to the required destination.
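
The following minimal sketch illustrates this receiver, processor, and exporter chain in the Collector's YAML format. The journald receiver, batch processor, and debug exporter are standard Collector components, used here purely as placeholders for whichever components you configure:

    receivers:
      journald: {}   # placeholder: collects logs from the journald service
    processors:
      batch: {}      # placeholder: groups data before export
    exporters:
      debug: {}      # placeholder: prints telemetry to standard output
    service:
      pipelines:
        logs:
          receivers: [journald]
          processors: [batch]
          exporters: [debug]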

Integrating OTel with Amazon Web Services (AWS) for log management involves configuring the OTel Collector on the RHEL instance as a log exporter to AWS. The integration works as follows:

  • Configuring the exporter for the OTel Collector
  • Enabling log connections
  • Exporting data from the RHEL instance to AWS CloudWatch Logs

As a result, you can gather log data from various sources at a single location to effectively manage log analysis.

Of the available AWS CloudWatch features, RHEL instances currently support only logging. For details, see AWS CloudWatch Logs exporter.

To configure the OpenTelemetry (OTel) Collector, you need to modify the default configuration of the journald receiver, which captures the journald service logs. This configuration involves defining the log source, format, and parsing rules. With this setup, the collector processes and exports logs to services, such as AWS CloudWatch Logs, to improve observability and metrics analysis of system components.

Procedure

  1. Install the opentelemetry-collector package on a RHEL instance:

    # dnf install -y opentelemetry-collector
  2. Enable and start the service to transfer the logs from the RHEL instance to AWS CloudWatch Logs:

    # systemctl enable --now opentelemetry-collector.service
  3. To configure the OTel Collector to forward journald logs from the RHEL instance, create and edit the /etc/opentelemetry-collector/configs/10-cloudwatch-exporter.yaml file:

    ...
    exporters:
      awscloudwatchlogs:
        log_group_name: testing-logs-emf
        log_stream_name: testing-integrations-stream-emf
        raw_log: true
        region: us-east-1
        endpoint: logs.us-east-1.amazonaws.com
        log_retention: 365
        tags:
          sampleKey: sampleValue
    service:
      pipelines:
        logs:
          receivers:
            - journald
          exporters:
            - awscloudwatchlogs
    ...
  4. Restart the OTel Collector service:

    # systemctl restart opentelemetry-collector.service
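
    Optionally, confirm that the service restarted without configuration errors:

    # systemctl status opentelemetry-collector.service
    # journalctl -u opentelemetry-collector.service -e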
  5. Create an IAM role for AWS CloudWatch agent from AWS console. For instructions, see Create IAM roles and users for use with the CloudWatch agent.
  6. Attach the role to the RHEL instance through AWS Console. For instructions, see Attach an IAM role to an instance.
  7. Restart the RHEL instance from the AWS console to automatically enable log export.
  8. Optional: If you no longer want to export logs, stop logs transfer from the RHEL instance:

    # systemctl stop opentelemetry-collector.service
  9. Optional: If you no longer need this service, permanently disable logs transfer:

    # systemctl disable opentelemetry-collector.service

5.4. Receivers for the OTel Collector

Depending on the configuration, receivers gather telemetry data, such as logs and patterns of software use, from various devices and services at a single location for improved observability.

Journald receiver

The journald receiver in the OTel Collector captures logs from the journald service. This receiver accepts logs from system and application services, such as logs from the kernel, users, and applications, to provide improved observability. You can use journald logging for features such as binary storage for faster indexing, user-based permissions, and log size management.

For details, see the config options in Journald Receiver.
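
For illustration, a hypothetical receiver definition that narrows collection to selected units and priorities might look as follows. The directory, units, and priority options are documented options of the journald receiver; the values shown here are examples only:

    receivers:
      journald:
        directory: /var/log/journal   # journal location to read from (example value)
        units:
          - sshd.service              # collect only from selected units (example value)
        priority: info                # minimum priority to collect (example value)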

5.5. Processors for the OTel Collector

Processors act as an intermediary between the receiver and the exporter and manipulate the data by, for example, adding, filtering, deleting, or transforming fields. The selection and order of processors depend on the signal type.

Resource detection for AWS environment

The resource detection processor uses a list of detectors to discover information about the managed environment, and it adds these details to the telemetry data before export.

For the snippet, see AWS EC2 configuration.
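
As a sketch, a processor block that enables the EC2 detector might look like the following. The option names come from the upstream resource detection processor; the values are examples only:

    processors:
      resourcedetection:
        detectors: [env, ec2]   # read resource attributes from the environment and EC2 metadata
        timeout: 2s             # example value
        override: false         # keep attributes that are already set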

5.6. Exporters for the OTel Collector

Exporters transmit processed data to specified devices or services, such as AWS CloudWatch Logs and the Debug exporter, based on the configuration and signal type. Exporters ensure compatibility with target services and facilitate integration with various systems.

AWS Cloudwatch Logs exporter

Note that this configuration currently supports only log-type signals. Typically, it works as follows:

  • The receiver sends logs to the OTel Collector.
  • The processor modifies or enhances the logs for export.
  • The awscloudwatchlogs exporter sends the processed telemetry to AWS CloudWatch Logs.

For details, see the AWS CloudWatch Logs exporter documentation.

In addition, the Collector provides extensions and processors to filter sensitive data, limit memory usage, and keep telemetry data on the disk for a certain period of time in case of a connection loss.

Debug exporter

The Debug exporter prints traces and metrics to the standard output. Note that this exporter supports all signal types. You can modify the OTel Collector YAML configuration to include a console exporter, which prints the telemetry data to the console. Also, to make sure that journald captures the output, you can configure the receiver service if required.

For details, see Debug exporter.
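
For example, a minimal sketch of a logs pipeline that prints to standard output might look as follows. The verbosity setting is a documented option of the Debug exporter; the pipeline wiring is an example only:

    exporters:
      debug:
        verbosity: detailed   # print full telemetry payloads
    service:
      pipelines:
        logs:
          receivers: [journald]
          exporters: [debug]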

To enhance boot security for a Red Hat Enterprise Linux (RHEL) instance on Amazon Web Services (AWS), configure Secure Boot. Secure Boot verifies the digital signatures of the boot loader and other components at startup, allowing only trusted programs to load while blocking unauthorized ones.

6.1. Understanding secure boot for RHEL on cloud

When Secure Boot detects any tampered components or components signed by untrusted entities, it aborts the boot process. Secure Boot plays a critical role in configuring a Confidential Virtual Machine (CVM) by ensuring that only trusted entities participate in the boot chain.

Secure Boot is a Unified Extensible Firmware Interface (UEFI) feature that verifies digital signatures of boot components, such as boot loader and kernel, against trusted keys stored in hardware. Secure Boot prevents unauthorized or tampered software from running during boot, protecting your system from malicious code. It authenticates access to specific device paths through defined interfaces, enforces the use of the latest configuration, and permanently overwrites earlier configurations. When the Red Hat Enterprise Linux (RHEL) kernel boots with Secure Boot enabled, it enters the lockdown mode, allowing only kernel modules signed by a trusted vendor to load. Therefore, Secure Boot strengthens the security of the operating system boot sequence.

Components of Secure Boot

The Secure Boot mechanism consists of firmware, signature databases, cryptographic keys, boot loader, hardware modules, and the operating system. The following are the components of the UEFI trusted variables:

  • Key Exchange Key database (KEK): A database of public keys that establishes trust between the RHEL operating system and the VM firmware. You can also update the Allowed Signature database (db) and the Forbidden Signature database (dbx) by using these keys.
  • Platform Key database (PK): A self-signed single-key database to establish trust between the VM firmware and the cloud platform. The PK also updates the KEK database.
  • Allowed Signature database (db): A database that maintains a list of certificates or binary hashes to check whether the binary file can boot on the system. Additionally, all certificates from db are imported to the .platform keyring of the RHEL kernel. With this feature, you can add and load signed third party kernel modules in the lockdown mode.
  • Forbidden Signature database (dbx): A database that maintains a list of certificates or binary hashes that are not allowed to boot on the system.
Note

Binary files are checked against the dbx database and the Secure Boot Advanced Targeting (SBAT) mechanism. With SBAT, you can revoke older versions of specific binaries while keeping the certificate that signed those binaries valid.

Stages of Secure Boot for RHEL on Cloud

When a RHEL instance boots in the Unified Kernel Image (UKI) mode and with Secure Boot enabled, the RHEL instance interacts with the cloud service infrastructure in the following sequence:

  1. Initialization: When a RHEL instance boots, the cloud-hosted firmware initially boots and implements the Secure Boot mechanism.
  2. Variable store initialization: The firmware initializes UEFI variables from a variable store, a dedicated storage area for information that firmware needs to manage for the boot process and runtime operations. When the RHEL instance boots for the first time, the store initializes from default values associated with the VM image.
  3. Boot loader: When booted, the firmware loads the first stage boot loader. For the RHEL instance in a x86 UEFI environment, the first stage boot loader is shim. The shim boot loader authenticates and loads the next stage of the boot process and acts as a bridge between UEFI and GRUB.

    1. The shim x86 binary in RHEL is currently signed by the Microsoft Corporation UEFI CA 2011 certificate so that the RHEL instance can boot in the Secure Boot enabled mode on various hardware and virtualized platforms where the Allowed Signature database (db) has the default Microsoft certificates.
    2. The shim binary extends the list of trusted certificates with Red Hat Secure Boot CA and optionally, with Machine Owner Key (MOK).
  4. UKI: The shim binary loads the RHEL UKI (the kernel-uki-virt package). The UKI is signed by the corresponding certificate, which is Red Hat Secure Boot Signing 504 on the x86_64 architecture. You can find this certificate in the redhat-sb-certs package. Because Red Hat Secure Boot CA signs this certificate, the check succeeds.
  5. UKI add-ons: When you use the UKI cmdline extensions, the RHEL kernel actively checks their signatures against db, MOK, and certificates shipped with shim. This process ensures that either the operating system vendor RHEL or a user has signed the extensions.

When the RHEL kernel boots in the Secure Boot mode, it enters the lockdown mode. After entering lockdown, the RHEL kernel adds the db keys to the .platform keyring and the MOK keys to the .machine keyring. During the kernel build process, the build system generates an ephemeral key pair, which consists of a private and a public key. The build system signs the standard RHEL kernel modules, such as kernel-modules-core, kernel-modules, and kernel-modules-extra, with this key. After each kernel build completes, the private key is discarded, so it cannot be used to sign third-party modules. Instead, you can use certificates from db and MOK for this purpose.

To ensure a secure booting process for a Red Hat Enterprise Linux (RHEL) instance on Amazon Web Services (AWS), configure Secure Boot on a RHEL instance. This instance is launched from a pre-configured Amazon Machine Image (AMI) from the AWS Marketplace.

Prerequisites

  • The RHEL AMI has the uefi-preferred option enabled in boot settings:

    $ aws ec2 describe-images --image-id ami-08d2f096f70b3dd74 --region us-east-2 | grep -E '"ImageId"|"Name"|"BootMode"'
    "Name": "RHEL-9.7.0_HVM-20260303-x86_64-0-Hourly2-GP3",
    "BootMode": "uefi-preferred",
    "ImageId": "ami-08d2f096f70b3dd74",
  • You have installed the following packages on the RHEL instance:

    • awscli2
    • python3
    • openssl
    • efivar
    • keyutils
    • edk2-ovmf
    • python3-virt-firmware

      Warning

      To avoid security issues, generate and keep private keys apart from the current RHEL instance. If Secure Boot secrets are stored on the same instance on which they are used, intruders can gain access to the secrets and escalate their privileges. For details on launching an AWS EC2 instance, see Get started with Amazon EC2.

Procedure

  1. Check the platform status of the RHEL Marketplace AMI instance:

    $ sudo mokutil --sb-state
    SecureBoot disabled
    Platform is in Setup Mode

    The setup mode allows updating the Secure Boot UEFI variables within the instance.

  2. Generate a custom_db.cer custom certificate:

    $ openssl req -quiet \
    -newkey rsa:3072 \
    -nodes -keyout custom_db.key \
    -new -x509 -sha256 \
    -days 3650 \
    -subj "/CN=Signature Database key/" \
    --outform DER -out custom_db.cer
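
    Optionally, confirm the subject and validity period of the DER-encoded certificate before enrolling it:

    $ openssl x509 -inform DER -in custom_db.cer -noout -subject -dates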
  3. Generate the UEFI variable files by using the virt-fw-vars utility:

    $ virt-fw-vars --enroll-redhat \
    --add-db-cert OvmfEnrollDefaultKeys custom_db.cer \
    --set-dbx /usr/share/edk2/ovmf/DBX* \
    --output-auth .

    For details, see the virt-fw-vars(1) man page on your system.

  4. Convert UEFI variables to the Extensible Firmware Interface (EFI) Signature List (ESL) format:

    $ for f in PK KEK db dbx; do tail -c +41 $f.auth > $f.esl; done
    Note

    Each GUID is an assigned value that represents an EFI parameter:

    • 8be4df61-93ca-11d2-aa0d-00e098032b8c: EFI_GLOBAL_VARIABLE_GUID
    • d719b2cb-3d3a-4596-a3bc-dad00e67656f: EFI_IMAGE_SECURITY_DATABASE_GUID

    The EFI_GLOBAL_VARIABLE_GUID parameter maintains settings of the bootable devices and boot managers, while the EFI_IMAGE_SECURITY_DATABASE_GUID parameter represents the image security database for Secure Boot variables db, dbx, and storage of required keys and certificates.

  5. Transfer the database certificates to the target instance by using the efivar utility to manage UEFI environment variables.

    1. To transfer PK.esl, enter:

      $ sudo efivar -w -n 8be4df61-93ca-11d2-aa0d-00e098032b8c-PK -f PK.esl
    2. To transfer KEK.esl, enter:

      $ sudo efivar -w -n 8be4df61-93ca-11d2-aa0d-00e098032b8c-KEK -f KEK.esl
    3. To transfer db.esl, enter:

      $ sudo efivar -w -n d719b2cb-3d3a-4596-a3bc-dad00e67656f-db -f db.esl
    4. To transfer the dbx.esl UEFI revocation list file for x64 architecture, enter:

      $ sudo efivar -w -n d719b2cb-3d3a-4596-a3bc-dad00e67656f-dbx -f dbx.esl
  6. Reboot the instance from the AWS console.

Verification

  • Verify if Secure Boot is enabled:

    $ sudo mokutil --sb-state
    SecureBoot enabled
  • Use the keyctl utility to verify the kernel keyring for the custom certificate:

    $ sudo keyctl list %:.platform
    7 keys in keyring:
    741159788: ---lswrv     0     0 asymmetric: Microsoft Windows Production PCA 2011: a92902398e16c49778cd90f99e4f9ae17c55af53
    941772267: ---lswrv     0     0 asymmetric: Red Hat Secure Boot CA 8: e1c6c580aa1e21d585aad9bf20f3929e5ec1f08b
    979739129: ---lswrv     0     0 asymmetric: Red Hat Secure Boot CA 5: cc6fa5e72868ba494e939bbd680b9144769a9f8f
    303712700: ---lswrv     0     0 asymmetric: Signature Database key: 7dff9c7433d40daa6cb2cdbdb4c2b7c93f5252a4
    747313470: ---lswrv     0     0 asymmetric: Microsoft UEFI CA 2023: 81aa6b3244c935bce0d6628af39827421e32497d
    710788326: ---lswrv     0     0 asymmetric: Microsoft Corporation UEFI CA 2011: 13adbf4309bd82709c8cd54f316ed522988a1bd4
       163192: ---lswrv     0     0 asymmetric: Microsoft Corporation: Windows UEFI CA 2023: aefc5fbbbe055d8f8daa585473499417ab5a5272
    ...

To boot a Red Hat Enterprise Linux (RHEL) instance securely on Amazon Web Services (AWS), configure Secure Boot when registering a custom RHEL Amazon Machine Image (AMI). Because this AMI contains pre-stored UEFI variables, instances launched from it use the Secure Boot mechanism during the first boot.

Prerequisites

  • You have created and uploaded an AWS AMI image. For details, see Preparing and uploading AWS AMI.
  • You have installed the following packages:

    • awscli2
    • python3
    • openssl
    • efivar
    • keyutils
    • python3-virt-firmware

Procedure

  1. Generate a custom certificate custom_db.cer:

    $ openssl req -quiet \
    -newkey rsa:3072 \
    -nodes -keyout custom_db.key \
    -new -x509 -sha256 \
    -days 3650 -subj "/CN=Signature Database key/" \
    --outform DER -out custom_db.cer
  2. Use the virt-fw-vars utility to generate the aws_blob.bin binary file from keys, database certificates, and the UEFI variable store:

    $ virt-fw-vars --enroll-redhat \
    --add-db-cert OvmfEnrollDefaultKeys custom_db.cer \
    --set-dbx /usr/share/edk2/ovmf/DBX* \
    --output-aws aws_blob.bin

    The customized blob consists of:

    • PK.cer, KEK.cer, db, and dbx from the edk2-ovmf package
    • The custom_db.cer generated certificate
  3. Use the awscli2 utility to create and register the AMI from a disk snapshot with the required Secure Boot variables:

    $ aws ec2 register-image \
    --name rhel-9-secure-boot \
    --architecture x86_64 \
    --virtualization-type hvm \
    --root-device-name "/dev/sda1" \
    --block-device-mappings "{\"DeviceName\": \"/dev/sda1\",\"Ebs\": {\"SnapshotId\": \"<snap-02d4db3813ff9b98e>\"}}" \
    --ena-support --boot-mode uefi \
    --region eu-central-1 \
    --uefi-data $(cat aws_blob.bin) \
    --output json
    {
        "ImageId": "example-amazon-id"
    }
  4. Reboot the instance from the AWS Console.

Verification

  • Verify Secure Boot functionality:

    $ sudo mokutil --sb-state
    SecureBoot enabled
  • Use the keyctl utility to verify the kernel keyring for the custom certificate:

    $ sudo keyctl list %:.platform
    7 keys in keyring:
    741159788: ---lswrv     0     0 asymmetric: Microsoft Windows Production PCA 2011: a92902398e16c49778cd90f99e4f9ae17c55af53
    941772267: ---lswrv     0     0 asymmetric: Red Hat Secure Boot CA 8: e1c6c580aa1e21d585aad9bf20f3929e5ec1f08b
    979739129: ---lswrv     0     0 asymmetric: Red Hat Secure Boot CA 5: cc6fa5e72868ba494e939bbd680b9144769a9f8f
    303712700: ---lswrv     0     0 asymmetric: Signature Database key: 7dff9c7433d40daa6cb2cdbdb4c2b7c93f5252a4
    747313470: ---lswrv     0     0 asymmetric: Microsoft UEFI CA 2023: 81aa6b3244c935bce0d6628af39827421e32497d
    710788326: ---lswrv     0     0 asymmetric: Microsoft Corporation UEFI CA 2011: 13adbf4309bd82709c8cd54f316ed522988a1bd4
       163192: ---lswrv     0     0 asymmetric: Microsoft Corporation: Windows UEFI CA 2023: aefc5fbbbe055d8f8daa585473499417ab5a5272
    ...

AMD Secure Encrypted Virtualization with Secure Nested Paging (SEV-SNP) aims to prevent VM integrity-based attacks and reduce the dangers of memory integrity violations. For the secure boot process, AMD processors offer three hardware-based security mechanisms: Secure Encrypted Virtualization (SEV), SEV Encrypted State (SEV-ES), and SEV Secure Nested Paging (SEV-SNP).

  • SEV: The SEV mechanism encrypts virtual machine (VM) memory to prevent the hypervisor from accessing VM data.
  • SEV-ES: SEV with Encrypted State (SEV-ES) extends SEV by encrypting CPU register states. This mechanism prevents the hypervisor from accessing or modifying VM CPU registers. Despite providing isolation between hypervisor and VM, it is still vulnerable to memory integrity attacks.
  • SEV-SNP: SEV-SNP is an enhancement to SEV-ES that adds memory integrity protection along with VM encryption. This mechanism prevents the hypervisor from modifying page tables to redirect VM memory access, protecting against replay attacks and memory tampering.

    Note

    Before deploying Red Hat Enterprise Linux (RHEL) on a public cloud platform, always check with the corresponding cloud service provider for the support status and certification of the particular RHEL instance type.

7.1. Properties of SEV-SNP

  • Secure Processor: The AMD EPYC processor integrates a Secure Processor (SP) subsystem. AMD SP is a dedicated hardware component to manage keys and encryption operations.
  • Memory Integrity: For managing virtualization and isolation, the memory management unit (MMU) uses page tables to translate virtual addresses to guest-physical addresses. SEV-SNP uses nested page tables to translate guest-physical addresses to host-physical addresses. After the nested page tables are defined, the hypervisor or host cannot alter them to trick the VM into accessing different pages, which protects memory integrity. SEV-SNP uses this method to offer protection against replay attacks and malicious modifications to VM memory.
  • Memory Encryption: The AMD EPYC processor holds the memory encryption key, which remains hidden from both the host and the VM.
  • Attestation report for verification: A CPU-generated report about RHEL instance information in an authorized cryptographic format. This process confirms the authenticity and reliability of the initial CPU and memory state of the RHEL instance and AMD processor.

    Note

    Even if a hypervisor creates the primary memory and CPU register state of the VM, they remain hidden and inaccessible to the hypervisor after initialization of that VM.

7.2. Understanding the AMD SEV-SNP secure boot process

  1. Initialization and measurement: A SEV-SNP enabled hypervisor sets the initial state of a VM. This hypervisor loads firmware binary into the VM memory and sets the initial register state. AMD Secure Processor (SP) measures the initial state of the VM and provides details to verify the initial state of the VM.
  2. Firmware: The VM initiates the UEFI firmware. The firmware might include either stateful or stateless Virtual Trusted Platform Module (vTPM) implementation. Stateful vTPM maintains persistent cryptographic state across VM reboots and migrations, whereas stateless vTPM generates fresh cryptographic state for each VM session without persistence. Virtual Machine Privilege Levels (VMPL) technology isolates vTPM from the guest. VMPL offers hardware-enforced privilege isolation between different VM components and the hypervisor.
  3. vTPM: Depending on your cloud service provider, for stateful vTPM implementation, the UEFI firmware might perform a remote attestation to decrypt the persistent state of vTPM.

    1. The vTPM also measures facts about the boot process such as Secure Boot state, certificates used for signing boot artifacts, UEFI binary hashes, and so on.
  4. Shim: When the UEFI firmware finishes the initialization process, it searches for the extended firmware interface (EFI) system partition. Then, the UEFI firmware verifies and executes the first stage boot loader from there. For RHEL, this is shim. The shim program allows non-Microsoft operating systems to load the second stage boot loader from the EFI system partition.

    1. shim uses a Red Hat certificate to verify the second stage boot loader (grub) or Red Hat Unified Kernel Image (UKI).
    2. grub or UKI unpacks, verifies, and executes Linux kernel and initial RAM filesystem (initramfs), and the kernel command line. This process ensures that the Linux kernel is loaded in a trusted and secured environment.
  5. Initramfs: If you use full disk encryption, vTPM information automatically unlocks the encrypted root partition in initramfs.

    1. When the root volume becomes available, initramfs transfers the execution flow to the root volume.
  6. Attestation: The VM tenant gets access to the system and can perform a remote attestation to ensure that the accessed VM is an untampered Confidential Virtual Machine (CVM). Attestation is performed based on information from AMD SP and vTPM. This process confirms the authenticity and reliability of the initial CPU and memory state of the RHEL instance and AMD processor.
  7. TEE: This process creates a Trusted Execution Environment (TEE) to ensure that booting of the VM is in a trusted and secured environment.

AMD Secure Encrypted Virtualization with Secure Nested Paging (SEV-SNP) is a security type of the Confidential Virtual Machine (CVM) technology for Red Hat Enterprise Linux (RHEL) on Amazon Web Services (AWS) instances and is available only for the AMD EPYC processor family. SEV-SNP provides a trusted boot environment in which the entire boot process is secured and protected, so that neither the hypervisor nor the cloud service provider can access the data.

Prerequisites

  • You have installed the awscli2, openssh, and openssh-clients packages.
  • You have launched the instance from the list of specified instance types. For details, see supported instance types.

Procedure

  1. Check if SEV-SNP is enabled for the RHEL instance:

    $ aws ec2 describe-instances --instance-ids <example_instance_id> \
    --region <example_region>
    ...
    "CpuOptions": {
    "CoreCount": 2,
    "ThreadsPerCore": 2,
    "AmdSevSnp": "enabled"
    },
    ...
  2. If SEV-SNP is not enabled, get the ID of a RHEL Amazon Machine Image (AMI):

    $ aws ec2 describe-images \
    --owners 309956199498 \
    --query 'sort_by(Images, &Name)[].[CreationDate,Name,ImageId]' \
    --filters "Name=name,Values=RHEL-9" \
    --region us-east-1 \
    --output table
    Note

    Do not modify the command option --owners 309956199498. This is the account ID for displaying Red Hat images. If you need to list images for AWS GovCloud, use --region us-gov-west-1 and --owners 219670896067.

  3. Launch a RHEL instance with SEV-SNP enabled:

    $ aws ec2 run-instances \
    --image-id <example-rhel-9-ami-id> \
    --instance-type m6a.4xlarge \
    --key-name <example_key_pair_name> \
    --subnet-id <example_subnet_id> \
    --cpu-options AmdSevSnp=enabled

Verification

  • Check the kernel logs to verify the status of SEV-SNP:

    $ dmesg | grep -i sev
    ...
    [    7.509546] Memory Encryption Features active: AMD SEV SEV-ES SEV-SNP
    [    8.469487] SEV: Using SNP CPUID table, 64 entries present.
    [    9.433348] SEV: SNP guest platform device initialized.
    [   33.314380] sev-guest sev-guest: Initialized SEV guest driver (using vmpck_id 0)
    ...

To ensure that a Red Hat Enterprise Linux (RHEL) instance boots securely from untrusted storage, for example as a confidential virtual machine (CVM) on a public cloud platform, use the Unified Kernel Image (UKI).

8.1. Introduction to Unified Kernel Image

To extend the secure boot protection throughout the entire boot chain, use Unified Kernel Image (UKI).

Components of UKI

Unified Kernel Image (UKI) is a Unified Extensible Firmware Interface (UEFI) Portable Executable (PE) binary for the UEFI firmware environment, which bundles the essential components of an operating system. The UKI binary components extend the Secure Boot mechanism with initramfs and the kernel command line. Initramfs is a part of the Linux startup process, while the kernel command line gives you limited access to define parameters. The components are as follows (see the inspection example after this list):

  • The .linux section stores the Linux kernel image.
  • The .initrd section stores the initial RAM filesystem initramfs.
  • The .cmdline section stores the kernel command line.
  • Additional sections, such as .sbat.
  • The Red Hat signature.
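
Because the UKI is a PE binary, you can list these sections with a PE-aware tool such as objdump. The path below is a hypothetical UKI location on the EFI system partition; adjust it to your installation:

    # objdump -h /boot/efi/EFI/Linux/rhel-uki.efi | grep -E '\.linux|\.initrd|\.cmdline|\.sbat'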
Features of RHEL UKI with pre-built initramfs
  • Prohibits any malicious agent or component from altering any objects in the boot chain.
  • Removes the need to build a custom initramfs, because the initramfs is pre-built, which results in a faster kernel installation.
  • Provides support for the pre-built initramfs, which is identical across installation types, such as virtual machines (VMs), containers, or cloud instances.
  • Provides support for the x86_64 architecture.
  • Is shipped in the kernel-uki-virt package.
  • Is built for virtual machines and cloud instances.
Limitation of UKI because of the reduced flexibility of the boot process
  • When building the UKI, the operating system vendor creates the initramfs. As a consequence, the list of included kernel modules is static. You can use the systemd system and configuration extensions to address this limitation.
  • The kernel command line parameters are static, which limits the use of parameters for different instance sizes or debugging options.

You can use the UKI command line extensions to overcome this limitation.

8.2. Understanding the UKI secure boot process

To protect your system against unauthorized boot-time modifications, use the secure boot mechanism with Unified Kernel Image (UKI).

When using UKI with secure boot, the system verifies each component in the boot chain to ensure system integrity and prevent malicious code execution.

Procedure

  1. UEFI Firmware: The boot process starts from the Unified Extensible Firmware Interface (UEFI) firmware. For boot, Red Hat Enterprise Linux (RHEL) UKI requires UEFI firmware, because legacy basic input/output system (BIOS) firmware is not supported.
  2. Shim boot loader: Use the shim boot loader for booting rather than directly booting the UKI from the UEFI firmware. shim includes additional security mechanisms such as Machine Owner Key (MOK) and Secure Boot Advanced Targeting (SBAT).
  3. Signature verification (Secure Boot UEFI mechanism): During boot, shim reads the UKI binary and the secure boot UEFI mechanism verifies the signature of UKI against trusted keys stored in the Secure Boot Allowed Signature Database (db) of the system, the MOK database, and the built-in database of the shim binary. If the signature key is valid, the verification passes.
  4. SBAT verification: Immediately after signature verification, the shim boot loader verifies the SBAT rules at startup.

    During SBAT verification, the system compares the generation numbers embedded in the .sbat section of the UKI, for components such as systemd.rhel or linux.rhel, against the values in the shim boot loader. If the generation number for a component in shim is higher than the generation number in the UKI, the binary is automatically rejected, even if a trusted key signed it.

    Note that the generation number is a version identifier for UEFI applications, such as shim and grub.

  5. Unpacking and Execution: If verification passes, control passes from shim to the systemd-stub code in the UKI to continue the boot process.
  6. systemd-stub add-ons: During execution, systemd-stub unpacks and extracts the contents of the .cmdline section, the plain text kernel command line, and the .initrd section, the temporary root file system, for the boot process.

    Note that systemd-stub reads UKI add-ons and verifies their signatures to safely extend the kernel command line of UKI by appending the .cmdline content from add-ons. systemd-stub reads add-ons from two locations:

    • Global (UKI-independent) add-ons from the /loader/addons/ directory on the Extensible Firmware Interface (EFI) System Partition (ESP).
    • Per-UKI add-ons from the /EFI/Linux/<UKI-name>.extra.d/ directory on the ESP.
  7. Control passes from systemd-stub to the Linux kernel and the operating system boot process continues.

    From this point, secure boot with UKI mechanisms follows the standard kernel boot process.

Legal Notice

Copyright © Red Hat.
Except as otherwise noted below, the text of and illustrations in this documentation are licensed by Red Hat under the Creative Commons Attribution-Share Alike 3.0 Unported license. If you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, the Red Hat logo, JBoss, Hibernate, and RHCE are trademarks or registered trademarks of Red Hat, LLC. or its subsidiaries in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS is a trademark or registered trademark of Hewlett Packard Enterprise Development LP or its subsidiaries in the United States and other countries.
The OpenStack® Word Mark and OpenStack logo are trademarks or registered trademarks of the Linux Foundation, used under license.
All other trademarks are the property of their respective owners.