Deploying RHEL 8 on Amazon Web Services
Obtaining RHEL system images and creating RHEL instances on AWS
Providing feedback on Red Hat documentation
We are committed to providing high-quality documentation and value your feedback. To help us improve, you can submit suggestions or report errors through the Red Hat Jira tracking system.
Procedure
- Log in to the Jira website. If you do not have an account, select the option to create one.
- Click Create in the top navigation bar.
- Enter a descriptive title in the Summary field.
- Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation.
- Click Create at the bottom of the dialog.
Chapter 1. Introducing RHEL on public cloud platforms
Public cloud platforms offer computing resources as a service. Instead of using on-premise hardware, you can run your IT workloads, including Red Hat Enterprise Linux (RHEL) systems, as public cloud instances.
1.1. Benefits of using RHEL in a public cloud
Red Hat Enterprise Linux (RHEL) cloud instances on public cloud platforms have these benefits over on-premise RHEL systems or virtual machines (VMs):
- Flexible and fine-grained allocation of resources
A RHEL cloud instance runs as a VM on a cloud platform. The platform is a cluster of remote servers that the cloud service provider maintains. You can select hardware resources at the software level. For example, you can select a CPU type or storage setup.
Unlike a local RHEL system, you are not limited by what your physical host can do. Instead, you can select from many features that the cloud provider offers.
- Space and cost efficiency
You do not need to own on-premise servers to host cloud workloads. This removes the space, power, and maintenance needs for physical hardware.
On public cloud platforms, you pay the cloud provider for cloud instance usage. Costs depend on the hardware you use and how long you use it. You can control costs to meet your needs.
- Software-controlled configurations
You can save a cloud instance configuration as data on the cloud platform and control it with software. With this configuration, you can create, remove, clone, or migrate instances easily. You can also manage a cloud instance remotely through a cloud provider console. The instance connects to remote storage by default.
You can back up a cloud instance as a snapshot at any time. You can then load the snapshot to restore the instance to the saved state.
- Separation from the host and software compatibility
Unlike a local VM, a RHEL cloud instance uses Kernel-based Virtual Machine (KVM) virtualization. The guest kernel is separate from the host operating system. It is also separate from the client system you use to connect to the instance.
You can install any operating system on the cloud instance. On a RHEL public cloud instance, you can run RHEL apps you cannot use on your local operating system.
If the instance operating system becomes unstable or compromised, it does not affect your client system.
1.2. Public cloud use cases for RHEL
Deploying applications on a public cloud offers many benefits, but might not be the most efficient solution for every scenario. If you are evaluating the migration of your Red Hat Enterprise Linux (RHEL) deployments to the public cloud, consider whether your use case will benefit from the advantages of the public cloud.
Beneficial use cases
Deploying public cloud instances is effective for increasing and decreasing the active computing power of your deployments, also known as scaling up and scaling down. Therefore, consider using RHEL on public cloud for the following scenarios:
- Clusters with high peak workloads and low general performance requirements. Scaling up and down based on your demands can be efficient in terms of resource costs.
- Setting up or expanding your clusters to a public cloud to avoid high upfront costs of setting up local servers.
- Cloud instances are agnostic of the local environment. Therefore, you can use them for backup and disaster recovery.
Potentially problematic use cases
- You are running an existing environment that is not flexible enough to migrate to a public cloud. In such cases, customizing a cloud instance to fit the specific needs of an existing deployment might not be cost-effective compared to your current host platform.
- You are operating on a tight resource budget. Maintaining your deployment in a local data center typically provides less flexibility but more control over the maximum resource costs than the public cloud.
1.3. Frequent concerns when migrating to a public cloud
Moving your RHEL workloads from a local environment to a public cloud platform might raise concerns about the changes involved. The following are the most commonly asked questions.
Will my RHEL work differently as a cloud instance than as a local virtual machine?
In most respects, RHEL instances on a public cloud platform work the same as RHEL virtual machines on a local host, such as an on-premises server. Notable exceptions include:
- Instead of private orchestration interfaces, public cloud instances use provider-specific console interfaces for managing your cloud resources.
- Certain features, such as nested virtualization, may not work correctly. If a specific feature is critical for your deployment, check the feature’s compatibility in advance with your chosen public cloud provider.
Will my data stay safe in a public cloud as opposed to a local server?
The data in your RHEL cloud instances is owned by you, and your public cloud provider does not have access to it. In addition, major cloud providers support data encryption in transit, which improves the security of your data when you migrate your virtual machines to the public cloud.
The general security of your RHEL public cloud instances is managed as follows:
- Your public cloud provider is responsible for the security of the cloud hypervisor
- Red Hat provides the security features of the RHEL guest operating systems in your instances
- You manage the specific security settings and practices in your cloud infrastructure
What effect does my geographic region have on the functionality of RHEL public cloud instances?
You can use RHEL instances on a public cloud platform regardless of your geographical location. Therefore, you can run your instances in the same region as your on-premises server.
However, hosting your instances in a physically distant region might cause high latency when operating them. In addition, depending on the public cloud provider, certain regions may provide additional features or be more cost-efficient. Before creating your RHEL instances, review the properties of the hosting regions available for your chosen cloud provider.
1.4. Obtaining RHEL for public cloud deployments
To deploy a Red Hat Enterprise Linux (RHEL) system in a public cloud environment, you need to:
- Select the optimal cloud provider for your use case, based on your requirements and the current offer on the market. The cloud providers currently certified for running RHEL instances are:
  - Amazon Web Services (AWS)
  - Google Cloud

  Note: This document specifically covers deploying RHEL on AWS.
- Create a RHEL cloud instance on your chosen cloud platform. For more information, see Methods for creating RHEL cloud instances.
- To keep your RHEL deployment up-to-date, use Red Hat Update Infrastructure (RHUI).
1.5. Methods for creating RHEL cloud instances
To deploy a RHEL instance on a public cloud platform, you can use one of the following methods:
- Create a system image of RHEL and import it to the cloud platform.
- Purchase a RHEL instance directly from the cloud provider marketplace.
Chapter 2. Creating and uploading AWS AMI images
To use your customized RHEL system image in the Amazon Web Services (AWS) cloud, create the system image with Image Builder by using the respective output type, configure your system for uploading the image, and upload the image to your AWS account.
2.1. Preparing to manually upload AWS AMI images
Before uploading an AWS AMI image, you must configure a system for uploading the images.
Prerequisites
- You must have an Access Key ID configured in the AWS IAM account manager.
- You must have a writable S3 bucket prepared. See Creating an S3 bucket.
Procedure
- Install Python 3 and the pip tool:

```
# yum install python3 python3-pip
```

- Install the AWS command-line tools with pip:

```
# pip3 install awscli
```

- Set your profile. The terminal prompts you to provide your credentials, region, and output format:

```
$ aws configure
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]:
Default output format [None]:
```

- Define a name for your bucket and create the bucket:

```
$ BUCKET=bucketname
$ aws s3 mb s3://$BUCKET
```

  Replace bucketname with the actual bucket name. It must be a globally unique name. As a result, your bucket is created.

- To grant permission to access the S3 bucket, create a vmimport S3 role in AWS Identity and Access Management (IAM), if you have not already done so in the past:

  - Create a trust-policy.json file with the trust policy configuration, in the JSON format. For example:

```
{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {
            "Service": "vmie.amazonaws.com"
        },
        "Action": "sts:AssumeRole",
        "Condition": {
            "StringEquals": {
                "sts:Externalid": "vmimport"
            }
        }
    }]
}
```

  - Create a role-policy.json file with the role policy configuration, in the JSON format. Replace $BUCKET with the name of your bucket. For example:

```
{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetBucketLocation", "s3:GetObject", "s3:ListBucket"],
        "Resource": ["arn:aws:s3:::$BUCKET", "arn:aws:s3:::$BUCKET/*"]
    }, {
        "Effect": "Allow",
        "Action": ["ec2:ModifySnapshotAttribute", "ec2:CopySnapshot", "ec2:RegisterImage", "ec2:Describe*"],
        "Resource": "*"
    }]
}
```

  - Create a role for your Amazon Web Services account by using the trust-policy.json file:

```
$ aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json
```

  - Embed an inline policy document by using the role-policy.json file:

```
$ aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file://role-policy.json
```
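Because the bucket name appears twice in role-policy.json, you may prefer to generate the file from the BUCKET shell variable defined earlier in this procedure instead of editing it by hand. The following is a minimal sketch, not part of the official procedure; the python3 -m json.tool call is only a local syntax check before you pass the file to aws iam put-role-policy.

```shell
# Sketch: generate role-policy.json from the BUCKET variable, so the two
# ARN entries always match the bucket you created earlier in this procedure.
BUCKET=bucketname   # replace with your actual, globally unique bucket name

# The heredoc is unquoted, so ${BUCKET} is expanded into the JSON.
cat > role-policy.json <<EOF
{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetBucketLocation", "s3:GetObject", "s3:ListBucket"],
        "Resource": ["arn:aws:s3:::${BUCKET}", "arn:aws:s3:::${BUCKET}/*"]
    }, {
        "Effect": "Allow",
        "Action": ["ec2:ModifySnapshotAttribute", "ec2:CopySnapshot", "ec2:RegisterImage", "ec2:Describe*"],
        "Resource": "*"
    }]
}
EOF

# Fail early on malformed JSON instead of letting the IAM call reject it.
python3 -m json.tool role-policy.json > /dev/null && echo "role-policy.json is valid JSON"
```

You can then pass the generated file to the aws iam put-role-policy command shown in the procedure above.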
2.2. Manually uploading an AMI image to AWS by using the CLI
You can use RHEL image builder to build AMI images and manually upload them directly to the Amazon AWS cloud service provider by using the CLI.
Prerequisites
- You have an Access Key ID configured in the AWS IAM account manager.
- You have a writable S3 bucket prepared. See Creating an S3 bucket.
- You have a defined blueprint.
Procedure
- Using a text editor, create a configuration file with the following content:

```
provider = "aws"

[settings]
accessKeyID = "AWS_ACCESS_KEY_ID"
secretAccessKey = "AWS_SECRET_ACCESS_KEY"
bucket = "AWS_BUCKET"
region = "AWS_REGION"
key = "IMAGE_KEY"
```

  Replace the values in the accessKeyID, secretAccessKey, bucket, and region fields with your credentials. The IMAGE_KEY value is the name of your VM image to be uploaded to EC2.

- Save the file as CONFIGURATION-FILE.toml and close the text editor.
- Start the compose to upload it to AWS:

```
# composer-cli compose start blueprint-name image-type image-key configuration-file.toml
```

  Replace:

  - blueprint-name with the name of the blueprint you created.
  - image-type with the ami image type.
  - image-key with the name of your VM image to be uploaded to EC2.
  - configuration-file.toml with the name of the configuration file of the cloud provider.

  Note: You must have the correct AWS Identity and Access Management (IAM) settings for the bucket you are going to send your customized image to. You must set up a policy for your bucket before you can upload images to it.

- Check the status of the image build:

```
# composer-cli compose status
```

  After the image upload process is complete, you can see the "FINISHED" status.
Verification
To confirm that the image upload was successful:
- Access EC2 on the menu and select the correct region in the AWS console. The image must have the available status, to indicate that it was successfully uploaded.
- On the dashboard, select your image and click .
2.3. Creating and automatically uploading images to the AWS Cloud AMI
You can create a .raw image by using RHEL image builder, and check the Upload to AWS checkbox to automatically push the output image directly to the Amazon AWS Cloud AMI service provider.
Prerequisites
- You must have root or wheel group user access to the system.
- You have opened the RHEL image builder interface of the RHEL web console in a browser.
- You have created a blueprint. See Creating a blueprint in the web console interface.
- You must have an Access Key ID configured in the AWS IAM account manager.
- You must have a writable S3 bucket prepared.
Procedure
- In the RHEL image builder dashboard, click the blueprint name that you previously created.
- Select the tab .
Click to create your customized image.
The Create Image window opens.
- From the Type drop-down menu list, select Amazon Machine Image Disk (.raw).
- Check the Upload to AWS checkbox to upload your image to the AWS Cloud and click .
- To authenticate your access to AWS, type your AWS access key ID and AWS secret access key in the corresponding fields. Click .

  Note: You can view your AWS secret access key only when you create a new Access Key ID. If you do not know your Secret Key, generate a new Access Key ID.

- Type the name of the image in the Image name field, type the Amazon bucket name in the Amazon S3 bucket name field, and type the AWS region in the AWS region field for the bucket you are going to add your customized image to. Click .
- Review the information and click .
Optionally, click to modify any incorrect detail.
  Note: You must have the correct IAM settings for the bucket you are going to send your customized image to. This procedure uses IAM Import and Export, so you must set up a policy for your bucket before you can upload images to it. For more information, see Required Permissions for IAM Users.
A pop-up in the upper right informs you of the saving progress. It also informs you that the image creation has been initiated, along with the progress of the image creation and the subsequent upload to the AWS Cloud.
After the process is complete, you can see the Image build complete status.
In a browser, access Service→EC2.
- On the AWS console dashboard menu, choose the correct region. The image must have the Available status, to indicate that it is uploaded.
- On the AWS dashboard, select your image and click .
- A new window opens. Choose an instance type according to the resources you need to start your image. Click .
- Review your instance start details. You can edit each section if you need to make any changes. Click
Before you start the instance, select a public key to access it.
You can either use the key pair you already have or you can create a new key pair.
Follow the next steps to create a new key pair in EC2 and attach it to the new instance.
- From the drop-down menu list, select Create a new key pair.
- Enter a name for the new key pair. It generates a new key pair.
- Click Download Key Pair to save the new key pair on your local system.
Then, you can click to start your instance.
You can check the status of the instance, which displays as Initializing.
- After the instance status is running, the button becomes available.
Click . A window appears with instructions on how to connect by using SSH.
- Select A standalone SSH client as the preferred connection method, and open a terminal.
- In the location where you store your private key, ensure that your key is not publicly viewable, which is required for SSH to work. To do so, run the command:

```
$ chmod 400 <_your-instance-name.pem_>
```

- Connect to your instance by using its Public DNS:

```
$ ssh -i <_your-instance-name.pem_> ec2-user@<_your-instance-IP-address_>
```

- Type yes to confirm that you want to continue connecting.

  As a result, you are connected to your instance over SSH.
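The permission step above can be scripted and verified before the first connection attempt. The following is a minimal sketch; the key file name and the IP address are placeholders, and the ssh command is only echoed, not run.

```shell
# Sketch: tighten key-file permissions before connecting over SSH.
# "my-instance.pem" is a placeholder; substitute the key pair you downloaded.
KEY=my-instance.pem
touch "$KEY"          # stand-in for the downloaded key file in this sketch
chmod 400 "$KEY"      # owner read-only; ssh refuses keys readable by others

# Verify the mode before attempting to connect:
stat -c '%a' "$KEY"   # prints: 400

# The connection itself (placeholder address, shown but not executed here):
echo ssh -i "$KEY" ec2-user@203.0.113.10
```

If stat reports anything other than 400 (or 600), ssh rejects the key with an "UNPROTECTED PRIVATE KEY FILE" warning, so this check is worth adding to any provisioning script.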
Verification
- Check if you are able to perform any action while connected to your instance by using SSH.
Chapter 3. Deploying a Red Hat Enterprise Linux image as an EC2 instance on Amazon Web Services
To set up a High Availability (HA) deployment of RHEL on Amazon Web Services (AWS), you can deploy EC2 instances of RHEL to a cluster on AWS.
While you can create a custom VM from an ISO image, Red Hat recommends that you use the Red Hat Image Builder product to create customized images for use on specific cloud providers. With Image Builder, you can create and upload an Amazon Machine Image (AMI) in the ami format. See Composing a Customized RHEL System Image for more information.
For a list of Red Hat products that you can use securely on AWS, see Red Hat on Amazon Web Services.
Prerequisites
- Sign up for a Red Hat Customer Portal account.
- Sign up for AWS and set up your AWS resources. See Setting Up with Amazon EC2 for more information.
3.1. Red Hat Enterprise Linux image options on AWS
The following table lists image choices and notes the differences in the image options.
| Image option | Subscriptions | Sample scenario | Considerations |
|---|---|---|---|
| Deploy a Red Hat Gold Image. | Use your existing Red Hat subscriptions. | Select a Red Hat Gold Image on AWS. For details on Gold Images and how to access them on AWS, see the Red Hat Cloud Access Reference Guide. | The subscription includes the Red Hat product cost; you pay Amazon for all other instance costs. Red Hat provides support directly for Cloud Access images. |
| Deploy a custom image that you move to AWS. | Use your existing Red Hat subscriptions. | Upload your custom image, and attach your subscriptions. | The subscription includes the Red Hat product cost; you pay Amazon for all other instance costs. Red Hat provides support directly for custom RHEL images. |
| Deploy an existing Amazon image that includes RHEL. | The AWS EC2 images include a Red Hat product. | Select a RHEL image when you launch an instance on the AWS Management Console, or choose an image from the AWS Marketplace. | You pay Amazon on an hourly basis according to the pay-as-you-go (PAYG) model. This is also known as an on-demand image. Amazon provides support for on-demand images. Red Hat provides updates to the images. AWS makes the updates available through the Red Hat Update Infrastructure (RHUI). |
To convert an on-demand, license-included EC2 instance to a bring-your-own-license (BYOL) EC2 instance of RHEL, see Convert a license type for Linux in License Manager.
You can create a custom image for AWS by using RHEL Image Builder. See Composing a Customized RHEL System Image for more information.
3.2. Understanding base images
To create a base VM from an ISO image, you can use preconfigured base images and their configuration settings.
3.2.1. Using a custom base image
To manually configure a virtual machine (VM), first create a base (starter) VM image. Then, you can modify configuration settings and add the packages the VM requires to operate on the cloud. You can make additional configuration changes for your specific application after you upload the image.
3.2.2. Virtual machine configuration settings
Cloud VMs must have the following configuration settings.
| Setting | Recommendation |
|---|---|
| ssh | ssh must be enabled to provide remote access to your VMs. |
| dhcp | The primary virtual adapter should be configured for dhcp. |
3.3. Creating a base VM from an ISO image
To create a RHEL 8 base image from an ISO image, enable your host machine for virtualization and create a RHEL virtual machine (VM).
Prerequisites
- Virtualization is enabled on your host machine.
- You have downloaded the latest ISO image from the Red Hat Customer Portal and moved the image to the /var/lib/libvirt/images directory.
3.3.1. Creating a base image from an ISO image
The following procedure lists the steps and initial configuration requirements for creating a custom ISO image. Once you have configured the image, you can use the image as a template for creating additional VM instances.
Prerequisites
- Ensure that you have enabled your host machine for virtualization. See Enabling virtualization in RHEL 8 for information and procedures.
Procedure
Create and start a basic Red Hat Enterprise Linux (RHEL) VM. For instructions, see Creating virtual machines.
Set the default memory and CPUs to the capacity you need for the VM and the virtual network interface to virtio.
For example, the following command creates a kvmtest VM by using the rhel-8.0-x86_64-kvm.qcow2 image:

```
# virt-install \
    --name kvmtest --memory 2048 --vcpus 2 \
    --disk rhel-8.0-x86_64-kvm.qcow2,bus=virtio \
    --import --os-variant=rhel8.0
```

If you use the web console to create your VM, follow the procedure in Creating virtual machines using the web console, with these caveats:
- Do not check Immediately Start VM.
- Change your Memory size to your preferred settings.
- Before you start the installation, ensure that you have changed Model under Virtual Network Interface Settings to virtio and change your vCPUs to the capacity settings you want for the VM.
Review the following additional installation selection and modifications.
- Select Minimal Install with the standard RHEL option.
For Installation Destination, select Custom Storage Configuration. Use the following configuration information to make your selections.
- Ensure allocation of at least 500 MB, and preferably 1 GB or more, for /boot.
- In the file system section, use XFS, ext4, or ext3 for both the boot and root partitions.
- On the Installation Summary screen, select Network and hostname. Switch Ethernet to ON.
When the installation starts:
- Create a root password.
- Create an administrative user account.
- After installation is complete, reboot the VM.
- Log in to the root account to configure the VM.
3.4. Uploading the Red Hat Enterprise Linux image to AWS
To be able to run a RHEL instance on Amazon Web Services (AWS), you must first upload your RHEL image to AWS.
3.4.1. Installing the AWS CLI
Many of the procedures required to manage HA clusters in AWS include using the AWS CLI.
Prerequisites
- You have created an AWS Access Key ID and an AWS Secret Access Key, and have access to them. For instructions and details, see Quickly Configuring the AWS CLI.
Procedure
- Install the AWS command line tools by using the yum command:

```
# yum install awscli
```

- Use the aws --version command to verify that you installed the AWS CLI:

```
$ aws --version
aws-cli/1.19.77 Python/3.6.15 Linux/5.14.16-201.fc34.x86_64 botocore/1.20.77
```

- Configure the AWS command line client according to your AWS access details:

```
$ aws configure
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]:
Default output format [None]:
```
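In unattended scripts, where the interactive aws configure prompts are inconvenient, the AWS CLI also reads its credentials and region from standard environment variables. The values below are placeholders; substitute your own.

```shell
# Sketch: non-interactive alternative to `aws configure`.
# These are the standard environment variables the AWS CLI recognizes;
# all values shown here are placeholders.
export AWS_ACCESS_KEY_ID="AKIAEXAMPLEKEYID"
export AWS_SECRET_ACCESS_KEY="example-secret-key"
export AWS_DEFAULT_REGION="us-east-1"

echo "AWS CLI will use region: $AWS_DEFAULT_REGION"
```

Environment variables take precedence over the credentials file written by aws configure, which makes them convenient for one-off runs and CI jobs without touching your saved profile.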
3.4.2. Creating an S3 bucket
Importing to AWS requires an Amazon S3 bucket. An Amazon S3 bucket is an Amazon resource where you store objects. As part of the process for uploading your image, you need to create an S3 bucket and then move your image to the bucket.
Procedure
- Launch the Amazon S3 Console.
- Click Create Bucket. The Create Bucket dialog appears.
In the Name and region view:
- Enter a Bucket name.
- Enter a Region.
- Click Next.
- In the Configure options view, select the desired options and click Next.
- In the Set permissions view, change or accept the default options and click Next.
- Review your bucket configuration.
Click Create bucket.
  Note: Alternatively, you can use the AWS CLI to create a bucket. For example, the aws s3 mb s3://my-new-bucket command creates an S3 bucket named my-new-bucket. See the AWS CLI Command Reference for more information about the mb command.
3.4.3. Creating the vmimport role
To be able to import a RHEL virtual machine (VM) to Amazon Web Services (AWS) by using the VM Import service, you need to create the vmimport role.
For more information, see Importing a VM as an image using VM Import/Export in the Amazon documentation.
Procedure
- Create a file named trust-policy.json and include the following policy. Save the file on your system and note its location.

```
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Principal": {
            "Service": "vmie.amazonaws.com"
         },
         "Action": "sts:AssumeRole",
         "Condition": {
            "StringEquals": {
               "sts:Externalid": "vmimport"
            }
         }
      }
   ]
}
```

- Use the create-role command to create the vmimport role. Specify the full path to the location of the trust-policy.json file. Prefix file:// to the path. For example:

```
$ aws iam create-role --role-name vmimport --assume-role-policy-document file:///home/sample/ImportService/trust-policy.json
```

- Create a file named role-policy.json and include the following policy. Replace s3-bucket-name with the name of your S3 bucket.

```
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Action": [
            "s3:GetBucketLocation",
            "s3:GetObject",
            "s3:ListBucket"
         ],
         "Resource": [
            "arn:aws:s3:::s3-bucket-name",
            "arn:aws:s3:::s3-bucket-name/*"
         ]
      },
      {
         "Effect": "Allow",
         "Action": [
            "ec2:ModifySnapshotAttribute",
            "ec2:CopySnapshot",
            "ec2:RegisterImage",
            "ec2:Describe*"
         ],
         "Resource": "*"
      }
   ]
}
```

- Use the put-role-policy command to attach the policy to the role you created. Specify the full path of the role-policy.json file. For example:

```
$ aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file:///home/sample/ImportService/role-policy.json
```
3.4.4. Converting and pushing your image to S3
By using the qemu-img command, you can convert your image so that you can push it to S3. The samples are representative; they convert an image in the qcow2 file format to raw format. Amazon accepts images in OVA, VHD, VHDX, VMDK, and raw formats. See How VM Import/Export Works for more information about image formats that Amazon accepts.
Procedure
- Run the qemu-img command to convert your image. For example:

```
# qemu-img convert -f qcow2 -O raw rhel-8.0-sample.qcow2 rhel-8.0-sample.raw
```

- Push the image to S3:

```
$ aws s3 cp rhel-8.0-sample.raw s3://s3-bucket-name
```

  Note: This procedure could take a few minutes. After completion, you can check that your image uploaded successfully to your S3 bucket by using the AWS S3 Console.
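Because the raw image can be many gigabytes, recording a checksum before the upload gives you a way to verify integrity later, for example against a re-downloaded copy. This is an optional sketch, not part of the official procedure; a small stand-in file is used here so the commands are runnable as shown.

```shell
# Sketch: record a checksum of the raw image before uploading it.
# A 1 MiB stand-in file is created here; substitute your real
# rhel-8.0-sample.raw in practice.
dd if=/dev/zero of=sample.raw bs=1M count=1 status=none

# Write the checksum next to the image so it travels with your records.
sha256sum sample.raw | tee sample.raw.sha256

# Later, after re-downloading the object from S3, you can confirm integrity:
sha256sum -c sample.raw.sha256
```

A mismatch at the verification step indicates the object was corrupted or altered in transit, in which case you should repeat the aws s3 cp upload.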
3.4.5. Importing your image as a snapshot
To launch a RHEL instance in the Amazon Elastic Compute Cloud (EC2) service, you require an Amazon Machine Image (AMI). To create an AMI of your system, you must first upload a snapshot of your RHEL system image to EC2.
Procedure
- Create a file to specify a bucket and path for your image. Name the file containers.json. In the sample that follows, replace s3-bucket-name with your bucket name and s3-key with your key. You can get the key for the image by using the Amazon S3 Console.

```
{
    "Description": "rhel-8.0-sample.raw",
    "Format": "raw",
    "UserBucket": {
        "S3Bucket": "s3-bucket-name",
        "S3Key": "s3-key"
    }
}
```

- Import the image as a snapshot. This example uses a public Amazon S3 file; you can use the Amazon S3 Console to change permissions settings on your bucket.

```
$ aws ec2 import-snapshot --disk-container file://containers.json
```

  The terminal displays a message such as the following. Note the ImportTaskId within the message.

```
{
    "SnapshotTaskDetail": {
        "Status": "active",
        "Format": "RAW",
        "DiskImageSize": 0.0,
        "UserBucket": {
            "S3Bucket": "s3-bucket-name",
            "S3Key": "rhel-8.0-sample.raw"
        },
        "Progress": "3",
        "StatusMessage": "pending"
    },
    "ImportTaskId": "import-snap-06cea01fa0f1166a8"
}
```

- Track the progress of the import by using the describe-import-snapshot-tasks command. Include the ImportTaskId.

```
$ aws ec2 describe-import-snapshot-tasks --import-task-ids import-snap-06cea01fa0f1166a8
```

  The returned message shows the current status of the task. When complete, Status shows completed. Within the status, note the snapshot ID.
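When scripting the import, extracting the ImportTaskId from the response saves retyping it into describe-import-snapshot-tasks. The sketch below parses a saved copy of the sample response shown above; with the real command you would redirect the output of aws ec2 import-snapshot into response.json instead. The follow-up command is echoed, not run.

```shell
# Sketch: extract ImportTaskId from a saved import-snapshot response.
# response.json here reproduces the sample response from the procedure above;
# in practice, redirect the real import-snapshot output into this file.
cat > response.json <<'EOF'
{
    "SnapshotTaskDetail": {
        "Status": "active",
        "Format": "RAW",
        "DiskImageSize": 0.0,
        "UserBucket": {
            "S3Bucket": "s3-bucket-name",
            "S3Key": "rhel-8.0-sample.raw"
        },
        "Progress": "3",
        "StatusMessage": "pending"
    },
    "ImportTaskId": "import-snap-06cea01fa0f1166a8"
}
EOF

# Pull the task ID out with the Python json module (avoids fragile grep/cut).
TASK_ID=$(python3 -c 'import json; print(json.load(open("response.json"))["ImportTaskId"])')
echo "$TASK_ID"   # prints: import-snap-06cea01fa0f1166a8

# The tracking command you would then run (shown, not executed here):
echo aws ec2 describe-import-snapshot-tasks --import-task-ids "$TASK_ID"
```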
3.4.6. Creating an AMI from the uploaded snapshot
To launch a RHEL instance in the Amazon Elastic Compute Cloud (EC2) service, you require an Amazon Machine Image (AMI). To create an AMI of your system, you can use a RHEL system snapshot that you previously uploaded.
Procedure
- Go to the AWS EC2 Dashboard.
- Under Elastic Block Store, select Snapshots.
- Search for your snapshot ID (for example, snap-0e718930bd72bcda0).
- Right-click on the snapshot and select Create image.
- Name your image.
- Under Virtualization type, choose Hardware-assisted virtualization.
- Click Create. In the note regarding image creation, there is a link to your image.
- Click on the image link. Your image shows up under Images > AMIs.

  Note: Alternatively, you can use the AWS CLI register-image command to create an AMI from a snapshot. See register-image for more information. An example follows:

```
$ aws ec2 register-image \
    --name "myimagename" --description "myimagedescription" --architecture x86_64 \
    --virtualization-type hvm --root-device-name "/dev/sda1" --ena-support \
    --block-device-mappings "{\"DeviceName\": \"/dev/sda1\",\"Ebs\": {\"SnapshotId\": \"snap-0ce7f009b69ab274d\"}}"
```

  You must specify the root device volume /dev/sda1 as your root-device-name. For conceptual information about device mapping for AWS, see Example block device mapping.
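The escaped quotes in the register-image example are easy to get wrong. One way to reduce the risk is to build the mapping in a shell variable and validate it locally before handing it to the CLI. This is a sketch: the snapshot ID is the sample value from above, and the register-image call is only echoed, not executed.

```shell
# Sketch: build the --block-device-mappings JSON in a variable to avoid
# quoting mistakes. The snapshot ID is the sample value from the procedure.
SNAPSHOT_ID=snap-0ce7f009b69ab274d
MAPPING="{\"DeviceName\": \"/dev/sda1\",\"Ebs\": {\"SnapshotId\": \"$SNAPSHOT_ID\"}}"

# Confirm the string is valid JSON before handing it to the AWS CLI:
echo "$MAPPING" | python3 -m json.tool

# The actual call (shown, not executed here) would be:
echo aws ec2 register-image --name "myimagename" --architecture x86_64 \
    --virtualization-type hvm --root-device-name /dev/sda1 --ena-support \
    --block-device-mappings "$MAPPING"
```

If the json.tool check fails, fix the quoting before running register-image; the CLI error messages for malformed mapping JSON are less direct than the parser's.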
3.4.7. Launching an instance from the AMI
To launch and configure an Amazon Elastic Compute Cloud (EC2) instance, use an Amazon Machine Image (AMI).
Procedure
- From the AWS EC2 Dashboard, select Images and then AMIs.
- Right-click on your image and select Launch.
Choose an Instance Type that meets or exceeds the requirements of your workload.
See Amazon EC2 Instance Types for information about instance types.
Click Next: Configure Instance Details.
- Enter the Number of instances you want to create.
- For Network, select the VPC you created when setting up your AWS environment. Select a subnet for the instance or create a new subnet.
Select Enable for Auto-assign Public IP.
NoteThese are the minimum configuration options necessary to create a basic instance. Review additional options based on your application requirements.
- Click Next: Add Storage. Verify that the default storage is sufficient.
Click Next: Add Tags.
NoteTags can help you manage your AWS resources. See Tagging Your Amazon EC2 Resources for information about tagging.
- Click Next: Configure Security Group. Select the security group you created when setting up your AWS environment.
- Click Review and Launch. Verify your selections.
- Click Launch. You are prompted to select an existing key pair or create a new key pair. Select the key pair you created when setting up your AWS environment.

  Note: Verify that the permissions for your private key are correct. Use the chmod 400 <keyname>.pem command to change the permissions, if necessary.

- Click Launch Instances.
Click View Instances. You can name the instance(s).
You can now launch an SSH session to your instance(s) by selecting an instance and clicking Connect. Use the example provided for A standalone SSH client.
NoteAlternatively, you can launch an instance by using the AWS CLI. See Launching, Listing, and Terminating Amazon EC2 Instances in the Amazon documentation for more information.
3.4.8. Attaching Red Hat subscriptions
Using the subscription-manager command, you can register and attach your Red Hat subscription to a RHEL instance.
Prerequisites
- You must have enabled your subscriptions.
Procedure
Register your system.
# subscription-manager register
Attach your subscriptions.
- You can use an activation key to attach subscriptions. See Creating Red Hat Customer Portal Activation Keys for more information.
- Alternatively, you can manually attach a subscription by using the ID of the subscription pool (Pool ID). See Attaching a host-based subscription to hypervisors.
Optional: To collect various system metrics about the instance in the Red Hat Hybrid Cloud Console, you can register the instance with Red Hat Lightspeed.
# insights-client --register --display-name <display_name_value>
For information about further configuration of Red Hat Lightspeed, see the Client Configuration Guide for Red Hat Lightspeed.
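For unattended deployments, registration with an activation key (mentioned above) avoids interactive credentials. The following sketch assembles the registration command and prints it for review; the organization ID and activation key name are hypothetical placeholders.

```shell
# Hypothetical organization ID and activation key name - replace with your own.
ORG_ID="1234567"
ACTIVATION_KEY="my-aws-key"

# Assemble the registration command and print it for review;
# remove the echo to run it on the instance (requires root).
CMD="subscription-manager register --org=${ORG_ID} --activationkey=${ACTIVATION_KEY}"
echo "${CMD}"
```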
3.4.9. Setting up automatic registration on AWS Gold Images
To deploy Red Hat Enterprise Linux (RHEL) virtual machines (VMs) on Amazon Web Services (AWS), you can set up RHEL Gold Images to automatically register with the Red Hat Subscription Manager (RHSM).
Prerequisites
You have downloaded the latest RHEL Gold Image for AWS. For instructions, see Using Gold Images on AWS.
Note: You can attach an AWS account to only one Red Hat account at a time. Therefore, before attaching an AWS account to your Red Hat account, ensure that no other users require access to it.
Procedure
- Upload the Gold Image to AWS. For instructions, see Uploading the Red Hat Enterprise Linux image to AWS.
- Create VMs by using the uploaded image. They will be automatically subscribed with RHSM.
Verification
In a RHEL VM created by using the above instructions, verify that the system is registered with RHSM by executing the subscription-manager identity command. On a successfully registered system, this displays the UUID of the system. For example:
# subscription-manager identity
system identity: fdc46662-c536-43fb-a18a-bbcb283102b7
name: 192.168.122.222
org name: 6340056
org ID: 6340056
Chapter 4. Configuring a Red Hat High Availability cluster on AWS
To redistribute workloads automatically in case of node failure, you can create Red Hat High Availability (HA) clusters on Amazon Web Services (AWS).
Creating RHEL HA clusters on AWS is similar to creating HA clusters in non-cloud environments. For details on image options for AWS, see Red Hat Enterprise Linux Image Options on AWS.
4.1. Benefits of using high-availability clusters on public cloud platforms
A high-availability (HA) cluster is a set of computers, also known as nodes, linked together to run a specific workload. The purpose of HA clusters is to offer redundancy in case of a hardware or software failure. If a node in the HA cluster fails, the Pacemaker cluster resource manager distributes the workload to other nodes. No noticeable downtime occurs in the services that are running on the cluster.
You can also run HA clusters on public cloud platforms. In this case, you would use virtual machine (VM) instances in the cloud as the individual cluster nodes. Using HA clusters on a public cloud platform has the following benefits:
- Improved availability: In case of a VM failure, the workload is quickly redistributed to other nodes, so running services are not disrupted.
- Scalability: You can start additional nodes when demand is high and stop them when demand is low.
- Cost-effectiveness: With the pay-as-you-go pricing, you pay only for nodes that are running.
- Simplified management: Some public cloud platforms offer management interfaces to make configuring HA clusters easier.
To enable HA on your Red Hat Enterprise Linux (RHEL) systems, Red Hat offers a High Availability Add-On. The High Availability Add-On provides all necessary components for creating HA clusters on RHEL systems. The components include high availability service management and cluster administration tools.
4.2. Creating the AWS Access Key and AWS Secret Access Key
Before installing the AWS CLI, you must create an AWS Access Key and AWS Secret Access Key. The fencing and resource agent APIs use the AWS Access Key and Secret Access Key to connect to each node in the cluster.
Prerequisites
- You have a Red Hat Customer Portal account.
- You have created an AWS account and set up AWS resources. See Setting Up with Amazon EC2 for more information.
- Your IAM user account has Programmatic access. See Setting up the AWS Environment for more information.
Procedure
- Launch the AWS Console.
- Click on your AWS Account ID to display the drop-down menu and select My Security Credentials.
- Click Users.
- Select the user and open the Summary screen.
- Click the Security credentials tab.
- Click Create access key.
- Download the .csv file, or save both keys. You need these keys when creating the fencing device.
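One way to make the downloaded keys available to the AWS CLI on your host is a named profile in the shared credentials file. This is a minimal sketch: the profile name is illustrative, and the key values are the documented AWS example placeholders, not real keys.

```shell
# Write a named profile to the AWS shared credentials file.
# The key values below are AWS's documented example placeholders - substitute
# the Access Key and Secret Access Key from your downloaded .csv file.
mkdir -p "${HOME}/.aws"
cat >> "${HOME}/.aws/credentials" <<'EOF'
[cluster]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
EOF
echo "profile written to ${HOME}/.aws/credentials"
```

AWS CLI commands can then select this profile with the --profile cluster option.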
4.3. Creating an HA EC2 instance
To ensure High Availability (HA) for your Red Hat Enterprise Linux (RHEL) cluster nodes and applications in Amazon Web Services (AWS), you can create HA EC2 instances configured as cluster nodes.
For details about obtaining RHEL images, see Image options on AWS.
Prerequisites
- You have a Red Hat Customer Portal account.
- You have created an AWS account and set up AWS resources. See Setting Up with Amazon EC2 for more information.
Procedure
- From the AWS EC2 Dashboard, select Images and then AMIs.
- Right-click the image you want to use and select Launch.
Choose an Instance Type that meets or exceeds the requirements of your workload. Depending on your HA application, each instance requires different capacity.
See Amazon EC2 Instance Types for information about instance types.
Click Next: Configure Instance Details.
Enter the Number of instances you want to create for the cluster. This example procedure uses three cluster nodes.
Note: Do not launch the instances into an Auto Scaling Group.
- For Network, select the virtual private cloud (VPC) you created when setting up the AWS environment. Select a subnet for the instance, or create a new subnet.
Select Enable for Auto-assign Public IP.
Note: These are the minimum configuration options necessary to create a basic instance. Review additional options based on your HA application requirements.
- Click Next: Add Storage and verify that you have the required storage for your HA application. You do not need to change these settings unless your HA application requires other storage options.
- Click Next: Configure Security Group. Select the existing security group you created in Setting up the AWS environment.
- Click Review and Launch and verify your selections.
- Click Launch. Select an existing key pair or create a new key pair. For selecting a key pair, see Setting up the AWS environment.
- Click Launch Instances.
Click View Instances. You can name the instance(s).
Note: Alternatively, you can launch instances by using the AWS CLI. See Launching, Listing, and Terminating Amazon EC2 Instances in the Amazon documentation for more information.
4.4. Configuring the private key
Before using the private SSH key file (.pem) for SSH communication, you must configure the permissions of the private key.
Prerequisites
- Sign up for a Red Hat Customer Portal account.
- Sign up for AWS and set up your AWS resources. See Setting Up with Amazon EC2 for more information.
Procedure
- Move the key file from the Downloads directory to your home directory or to your ~/.ssh directory.
- Change the permissions of the key file so that only the file owner can read it:
# chmod 400 KeyName.pem
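To confirm that the permission change took effect, you can inspect the file mode. This sketch creates a scratch file named KeyName.pem as a stand-in for your real key; substitute your actual .pem file in practice.

```shell
# Create a stand-in key file for illustration - use your real .pem file in practice.
touch KeyName.pem
chmod 400 KeyName.pem

# On Linux, stat -c '%a' prints the octal mode; 400 means owner read-only.
stat -c '%a' KeyName.pem
```

SSH refuses to use private keys with group- or world-readable permissions, so this check prevents a "Permissions are too open" error later.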
4.5. Connecting to an EC2 instance
You can connect to an EC2 instance by using the AWS Console. Repeat this procedure for each node you want to connect to.
Prerequisites
- You have a Red Hat Customer Portal account.
- You have created an AWS account and set up AWS resources. See Setting Up with Amazon EC2 for more information.
Procedure
- Launch the AWS Console and select the EC2 instance.
- Click Connect and select A standalone SSH client.
- From your SSH terminal session, connect to the instance by using the AWS example provided in the pop-up window. Add the correct path to your KeyName.pem file if the path is not shown in the example.
4.6. Installing the High Availability packages and agents
Before configuring a Red Hat High Availability cluster on AWS, you must install the High Availability packages and agents on each of the nodes.
Prerequisites
- You have a Red Hat Customer Portal account.
- You have created an AWS account and set up AWS resources. See Setting Up with Amazon EC2 for more information.
Procedure
Remove the AWS Red Hat Update Infrastructure (RHUI) client.
$ sudo -i
# yum -y remove rh-amazon-rhui-client*
Register the VM with Red Hat.
# subscription-manager register
Disable all repositories.
# subscription-manager repos --disable=*
Enable the RHEL 8 Server HA repositories.
# subscription-manager repos --enable=rhel-8-for-x86_64-highavailability-rpms
Update the RHEL AWS instance.
# yum update -y
Install the Red Hat High Availability Add-On software packages, along with the AWS fencing agent, from the High Availability channel.
# yum install pcs pacemaker fence-agents-aws
The hacluster user was created during the pcs and pacemaker installation in the previous step. Create a password for hacluster on all cluster nodes. Use the same password for all nodes.
# passwd hacluster
Add the high-availability service to the RHEL firewall if firewalld.service is installed.
# firewall-cmd --permanent --add-service=high-availability
# firewall-cmd --reload
Start the pcsd service and enable it to start on boot.
# systemctl start pcsd.service
# systemctl enable pcsd.service
- Edit /etc/hosts and add the RHEL host names and internal IP addresses of all cluster nodes. For more information, see the Red Hat Knowledgebase solution How should the /etc/hosts file be set up on RHEL cluster nodes?.
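The /etc/hosts entries might look like the following. The host names and internal IP addresses here are hypothetical; this sketch writes the lines to a fragment file for review rather than modifying /etc/hosts directly, so you can inspect it before appending it on every node.

```shell
# Example entries - the host names and internal IPs are hypothetical.
# Review the fragment, then append it to /etc/hosts on every cluster node.
cat > cluster-hosts.fragment <<'EOF'
10.0.0.46 node01.example.com node01
10.0.0.48 node02.example.com node02
10.0.0.58 node03.example.com node03
EOF
cat cluster-hosts.fragment
```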
Verification
Ensure the pcsd service is running.
# systemctl status pcsd.service
pcsd.service - PCS GUI and remote configuration interface
   Loaded: loaded (/usr/lib/systemd/system/pcsd.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2018-03-01 14:53:28 UTC; 28min ago
     Docs: man:pcsd(8)
           man:pcs(8)
 Main PID: 5437 (pcsd)
   CGroup: /system.slice/pcsd.service
           └─5437 /usr/bin/ruby /usr/lib/pcsd/pcsd > /dev/null &
Mar 01 14:53:27 ip-10-0-0-48.ec2.internal systemd[1]: Starting PCS GUI and remote configuration interface…
Mar 01 14:53:28 ip-10-0-0-48.ec2.internal systemd[1]: Started PCS GUI and remote configuration interface.
4.7. Creating a cluster
Create a Red Hat High Availability cluster on a public cloud platform by configuring and initializing the cluster nodes.
Procedure
On one of the nodes, enter the following command to authenticate the pcs user hacluster. In the command, specify the name of each node in the cluster.
# pcs host auth <hostname1> <hostname2> <hostname3>
Example:
[root@node01 clouduser]# pcs host auth node01 node02 node03
Username: hacluster
Password:
node01: Authorized
node02: Authorized
node03: Authorized
Create the cluster.
# pcs cluster setup <cluster_name> <hostname1> <hostname2> <hostname3>
Example:
[root@node01 clouduser]# pcs cluster setup new_cluster node01 node02 node03
[...]
Synchronizing pcsd certificates on nodes node01, node02, node03...
node02: Success
node03: Success
node01: Success
Restarting pcsd on the nodes in order to reload the certificates...
node02: Success
node03: Success
node01: Success
Verification
Enable the cluster.
[root@node01 clouduser]# pcs cluster enable --all
node02: Cluster Enabled
node03: Cluster Enabled
node01: Cluster Enabled
Start the cluster.
[root@node01 clouduser]# pcs cluster start --all
node02: Starting Cluster...
node03: Starting Cluster...
node01: Starting Cluster...
4.8. Configuring fencing on a RHEL AWS cluster
Fencing configuration automatically isolates a malfunctioning node on your Red Hat Enterprise Linux (RHEL) Amazon Web Services (AWS) cluster to prevent the node from compromising functionality and consuming the resources of the cluster.
To configure fencing on an AWS cluster, use one of the following methods:
- A standard procedure for default configuration.
- An alternate configuration procedure for more advanced configuration, focused on automation.
4.8.1. Configuring fencing with default settings
Fencing isolates malfunctioning or unresponsive nodes to preserve data integrity and cluster availability, by using Amazon Web Services (AWS) resources and cluster management tools for automated node management. The following is a standard approach for configuring fencing with default settings in a Red Hat Enterprise Linux (RHEL) high availability cluster on AWS.
Prerequisites
- You have installed the resource-agents package on the nodes to enable the fence_aws fencing agent in the cluster.
- You have set up your AWS Access Key and AWS Secret Access Key. See Creating the AWS Access Key and AWS Secret Access Key for more information.
Procedure
Enter the following AWS metadata query to get the Instance ID for each node. You need these IDs to configure the fence device. See Instance Metadata and User Data for additional information.
# echo $(curl -s http://169.254.169.254/latest/meta-data/instance-id)
Example:
[root@ip-10-0-0-48 ~]# echo $(curl -s http://169.254.169.254/latest/meta-data/instance-id)
i-07f1ac63af0ec0ac6
Enter the following command to configure the fence device. Use the pcmk_host_map option to map each RHEL host name to its Instance ID. Use the AWS Access Key and AWS Secret Access Key that you set up earlier.
# pcs stonith \
    create <name> fence_aws access_key=<access-key> secret_key=<secret-access-key> \
    region=<region> pcmk_host_map="rhel-hostname-1:Instance-ID-1;rhel-hostname-2:Instance-ID-2;rhel-hostname-3:Instance-ID-3" \
    power_timeout=240 pcmk_reboot_timeout=480 pcmk_reboot_retries=4
Example:
[root@ip-10-0-0-48 ~]# pcs stonith \
    create clusterfence fence_aws access_key=AKIAI123456MRMJA secret_key=a75EYIG4RVL3hdsdAslK7koQ8dzaDyn5yoIZ/ \
    region=us-east-1 pcmk_host_map="ip-10-0-0-48:i-07f1ac63af0ec0ac6;ip-10-0-0-46:i-063fc5fe93b4167b2;ip-10-0-0-58:i-08bd39eb03a6fd2c7" \
    power_timeout=240 pcmk_reboot_timeout=480 pcmk_reboot_retries=4
- To ensure immediate and complete fencing, disable ACPI Soft-Off on all cluster nodes. For information about disabling ACPI Soft-Off, see Testing a fence device.
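The pcmk_host_map value is a semicolon-separated list of hostname:instance-ID pairs, which is easy to mistype by hand for larger clusters. The following sketch builds the string from a list of pairs; the host names and instance IDs are the ones from the example above and are placeholders for your own values.

```shell
# Hypothetical host name / instance ID pairs - one pair per line.
PAIRS="ip-10-0-0-48:i-07f1ac63af0ec0ac6
ip-10-0-0-46:i-063fc5fe93b4167b2
ip-10-0-0-58:i-08bd39eb03a6fd2c7"

# Join the lines with semicolons into the pcmk_host_map format.
HOST_MAP=$(echo "$PAIRS" | paste -sd';' -)
echo "$HOST_MAP"
```

The resulting string can be passed directly as pcmk_host_map="${HOST_MAP}" in the pcs stonith create command.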
4.8.2. Configuring fencing for a VPC cluster
The following is an alternate approach for configuring fencing for a virtual private cloud (VPC) cluster in a Red Hat Enterprise Linux (RHEL) high availability cluster on Amazon Web Services (AWS). Fencing isolates malfunctioning or unresponsive nodes to preserve data integrity and cluster availability, by using AWS resources and cluster management tools for automated node management.
Prerequisites
- You have installed the resource-agents package on the nodes to enable the fence_aws fencing agent in the cluster.
- You have set up your AWS Access Key and AWS Secret Access Key. See Creating the AWS Access Key and AWS Secret Access Key for more information.
Procedure
Obtain the VPC ID of the cluster.
$ aws ec2 describe-vpcs --output text --filters "Name=tag:Name,Values=<clustername>-vpc" --query 'Vpcs[*].VpcId'
vpc-06bc10ac8f6006664
By using the VPC ID of the cluster, obtain the VPC instances.
$ aws ec2 describe-instances --output text --filters "Name=vpc-id,Values=vpc-06bc10ac8f6006664" --query 'Reservations[*].Instances[*].{Name:Tags[?Key==`Name`]|[0].Value,Instance:InstanceId}' | grep "\-node[a-c]"
i-0b02af8927a895137 <clustername>-nodea-vm
i-0cceb4ba8ab743b69 <clustername>-nodeb-vm
i-0502291ab38c762a5 <clustername>-nodec-vm
Use the obtained instance IDs to configure fencing on each node in the cluster. For example, to configure a fencing device on all nodes in a cluster:
[root@nodea ~]# CLUSTER=<clustername> && pcs stonith create fence${CLUSTER} fence_aws access_key=XXXXXXXXXXXXXXXXXXXX pcmk_host_map=$(for NODE \
in node{a..c}; do ssh ${NODE} "echo -n \${HOSTNAME}:\$(curl -s http://169.254.169.254/latest/meta-data/instance-id)\;"; done) \
pcmk_reboot_retries=4 pcmk_reboot_timeout=480 power_timeout=240 region=xx-xxxx-x secret_key=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
For information about specific parameters for creating fencing devices, see the fence_aws man page or the Configuring and managing high availability clusters guide.
- To ensure immediate and complete fencing, disable ACPI Soft-Off on all cluster nodes. For information about disabling ACPI Soft-Off, see Disabling ACPI for use with an integrated fence device.
Verification
Display the configured fencing devices and their parameters on your nodes:
[root@nodea ~]# pcs stonith config fence${CLUSTER}
Resource: <clustername> (class=stonith type=fence_aws)
Attributes: access_key=XXXXXXXXXXXXXXXXXXXX pcmk_host_map=nodea:i-0b02af8927a895137;nodeb:i-0cceb4ba8ab743b69;nodec:i-0502291ab38c762a5; pcmk_reboot_retries=4 pcmk_reboot_timeout=480 power_timeout=240 region=xx-xxxx-x secret_key=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Operations: monitor interval=60s (<clustername>-monitor-interval-60s)
Test the fencing agent for one of the cluster nodes.
# pcs stonith fence <awsnodename>
Note: The command response might take several minutes to display. If you check the active terminal session for the fenced node, you might see the connection to the terminal drop immediately after you enter the fence command.
Example:
[root@ip-10-0-0-48 ~]# pcs stonith fence ip-10-0-0-58
Node: ip-10-0-0-58 fenced
Check the status of the fenced node:
# pcs status
Example:
[root@ip-10-0-0-48 ~]# pcs status
Cluster name: newcluster
Stack: corosync
Current DC: ip-10-0-0-46 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum
Last updated: Fri Mar 2 19:55:41 2018
Last change: Fri Mar 2 19:24:59 2018 by root via cibadmin on ip-10-0-0-46
3 nodes configured
1 resource configured
Online: [ ip-10-0-0-46 ip-10-0-0-48 ]
OFFLINE: [ ip-10-0-0-58 ]
Full list of resources:
clusterfence   (stonith:fence_aws):    Started ip-10-0-0-46
Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled
Start the fenced node from the earlier step:
# pcs cluster start <awshostname>
Check the status to verify that the node started.
# pcs status
Example:
[root@ip-10-0-0-48 ~]# pcs status
Cluster name: newcluster
Stack: corosync
Current DC: ip-10-0-0-46 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum
Last updated: Fri Mar 2 20:01:31 2018
Last change: Fri Mar 2 19:24:59 2018 by root via cibadmin on ip-10-0-0-48
3 nodes configured
1 resource configured
Online: [ ip-10-0-0-46 ip-10-0-0-48 ip-10-0-0-58 ]
Full list of resources:
clusterfence   (stonith:fence_aws):    Started ip-10-0-0-46
Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled
4.9. Installing the AWS CLI on cluster nodes
You previously installed the AWS CLI on your host system. To configure the network resource agents, you must also install the AWS CLI on the cluster nodes. Repeat the following steps on each node in the cluster.
Prerequisites
- You have a Red Hat Customer Portal account.
- You have created an AWS account and set up AWS resources. See Setting Up with Amazon EC2 for more information.
- You have created AWS Access Key and Secret Key.
- You have set up the AWS CLI. For details, see Installing the AWS CLI.
Procedure
Verify that the AWS CLI is configured correctly. The instance IDs and instance names should be displayed.
Example:
[root@ip-10-0-0-48 ~]# aws ec2 describe-instances --output text --query 'Reservations[*].Instances[*].[InstanceId,Tags[?Key==`Name`].Value]'
i-07f1ac63af0ec0ac6 ip-10-0-0-48
i-063fc5fe93b4167b2 ip-10-0-0-46
i-08bd39eb03a6fd2c7 ip-10-0-0-58
4.10. Setting up IP address resources on AWS
Clients that use IP addresses to access resources managed by the cluster over the network must be able to reach those resources even after a failover. To ensure this, the cluster must include IP address resources, which use specific network resource agents.
The RHEL HA Add-On provides a set of resource agents, which create IP address resources to manage various types of IP addresses on AWS. To decide which resource agent to configure, consider the type of AWS IP addresses that you want the HA cluster to manage:
- To manage an IP address exposed to the internet, use the awseip network resource.
- To manage a private IP address limited to a single AWS Availability Zone (AZ), use the awsvip and IPaddr2 network resources.
- To manage an IP address that can move across multiple AWS AZs within the same AWS region, use the aws-vpc-move-ip network resource.
If the HA cluster does not manage any IP addresses, the resource agents for managing virtual IP addresses on AWS are not required. If you need further guidance for your specific deployment, consult with your AWS provider.
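The decision above can be summarized as a simple mapping from the reachability requirement to the matching agent. The helper function below is purely illustrative; the scope names (internet, single-az, multi-az) are hypothetical labels, not AWS or pcs terminology.

```shell
# Illustrative helper only - maps a reachability requirement from the list
# above to the matching resource agent(s).
agent_for() {
  case "$1" in
    internet)  echo "awseip" ;;
    single-az) echo "awsvip + IPaddr2" ;;
    multi-az)  echo "aws-vpc-move-ip" ;;
    *)         echo "unknown scope: $1" >&2; return 1 ;;
  esac
}

agent_for internet
agent_for multi-az
```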
4.10.1. Creating an IP address resource to manage an IP address exposed to the internet
Configure an Amazon Web Services (AWS) Secondary Elastic IP Address (awseip) resource. Use an elastic IP address for public-facing internet connections on Red Hat Enterprise Linux (RHEL) High Availability (HA) cluster nodes.
Prerequisites
- You have a Red Hat Customer Portal account.
- You have created an AWS account and set up AWS resources. See Setting Up with Amazon EC2 for more information.
- Your cluster nodes have access to the RHEL HA repositories. For details, see Installing the High Availability packages and agents.
- You have configured a cluster.
- You have set up the AWS CLI.
Procedure
Install the resource-agents package.
# yum install resource-agents
By using the AWS command-line interface (CLI), create an elastic IP address.
[root@ip-10-0-0-48 ~]# aws ec2 allocate-address --domain vpc --output text
eipalloc-4c4a2c45 vpc 35.169.153.122
Optional: Display the description of awseip. This shows the options and default operations for this agent.
# pcs resource describe awseip
Create an awseip resource in a resource group, by using the elastic IP address and allocation ID that the AWS CLI returned in the earlier step:
# pcs resource create <resource_id> awseip elastic_ip=<elastic_ip_address> allocation_id=<elastic_ip_allocation_id> --group <resource_group_name>
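Filling the template in with the values returned by the allocate-address example above produces a command like the following sketch, which prints the command for review before you run it. The resource name (elastic) and group name (networking-group) match the verification output below but are otherwise illustrative.

```shell
# Values from the earlier allocate-address example output; the resource and
# group names (elastic, networking-group) are illustrative.
ELASTIC_IP="35.169.153.122"
ALLOCATION_ID="eipalloc-4c4a2c45"

# Print the command for review; remove the echo to execute it on a node.
CMD="pcs resource create elastic awseip elastic_ip=${ELASTIC_IP} allocation_id=${ALLOCATION_ID} --group networking-group"
echo "${CMD}"
```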
Verification
Display the status of the cluster to verify that the required resources are running.
# pcs status
The following output shows an example running cluster where the vip and elastic resources are part of the networking-group resource group:
[root@ip-10-0-0-58 ~]# pcs status
Cluster name: newcluster
Stack: corosync
Current DC: ip-10-0-0-58 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum
Last updated: Mon Mar 5 16:27:55 2018
Last change: Mon Mar 5 15:57:51 2018 by root via cibadmin on ip-10-0-0-46
3 nodes configured
4 resources configured
Online: [ ip-10-0-0-46 ip-10-0-0-48 ip-10-0-0-58 ]
Full list of resources:
clusterfence   (stonith:fence_aws):    Started ip-10-0-0-46
Resource Group: networking-group
    vip        (ocf::heartbeat:IPaddr2):       Started ip-10-0-0-48
    elastic    (ocf::heartbeat:awseip):        Started ip-10-0-0-48
Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled
Launch an SSH session from your local workstation to the elastic IP address that you created earlier:
$ ssh -l <user_name> -i ~/.ssh/<keyname>.pem <elastic_ip_address>
Example:
$ ssh -l ec2-user -i ~/.ssh/cluster-admin.pem 35.169.153.122
- Verify that the host you connected to over SSH is the host associated with the elastic resource you created.
4.10.2. Creating an IP address resource to manage a private IP address limited to a single AWS availability zone
Configure an Amazon Web Services (AWS) secondary private IP address (awsvip) resource on a node of a Red Hat High Availability (HA) cluster. The awsvip agent manages a private IP address that is limited to a single availability zone.
HA clients can connect to, and access, the Red Hat Enterprise Linux (RHEL) node that uses the private IP address.
Prerequisites
- You have a Red Hat Customer Portal account.
- You have created an AWS account and set up AWS resources. See Setting Up with Amazon EC2 for more information.
- You have configured a cluster.
- Your cluster nodes have access to the RHEL HA repositories. For details, see Installing the High Availability packages and agents.
- You have set up the AWS CLI. For instructions, see Installing the AWS CLI.
Procedure
Install the resource-agents package.
# yum install resource-agents
Optional: View the awsvip description. This shows the options and default operations for this agent.
# pcs resource describe awsvip
Create a secondary private IP address by using an unused private IP address in the virtual private cloud (VPC) classless inter-domain routing (CIDR) block. In addition, create a resource group for the secondary private IP address:
# pcs resource create <example_resource_id> awsvip secondary_private_ip=<example_unused_private_IP_address> --group <example_group_name>
Example:
[root@ip-10-0-0-48 ~]# pcs resource create privip awsvip secondary_private_ip=10.0.0.68 --group networking-group
Create a virtual IP resource. This is a VPC IP address that can be rapidly remapped from the fenced node to the failover node, masking the failure of the fenced node within the subnet. Ensure that the virtual IP belongs to the same resource group as the secondary private IP address that you created in the earlier step:
# pcs resource create <example_resource_id> IPaddr2 ip=<example_secondary_private_IP> --group <example_group_name>
Example:
[root@ip-10-0-0-48 ~]# pcs resource create vip IPaddr2 ip=10.0.0.68 --group networking-group
Verification
Display the status of the cluster to verify that the required resources are running.
# pcs status
The following output shows an example running cluster where the vip and privip resources are active in the networking-group resource group:
[root@ip-10-0-0-48 ~]# pcs status
Cluster name: newcluster
Stack: corosync
Current DC: ip-10-0-0-46 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum
Last updated: Fri Mar 2 22:34:24 2018
Last change: Fri Mar 2 22:14:58 2018 by root via cibadmin on ip-10-0-0-46
3 nodes configured
3 resources configured
Online: [ ip-10-0-0-46 ip-10-0-0-48 ip-10-0-0-58 ]
Full list of resources:
clusterfence   (stonith:fence_aws):    Started ip-10-0-0-46
Resource Group: networking-group
    privip     (ocf::heartbeat:awsvip):        Started ip-10-0-0-48
    vip        (ocf::heartbeat:IPaddr2):       Started ip-10-0-0-58
Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled
4.10.3. Creating an IP address resource to manage an IP address that can move across multiple AWS Availability Zones
Configure an aws-vpc-move-ip resource to use an elastic IP address. You can use this resource to ensure high-availability (HA) clients on Amazon Web Services (AWS) can access a Red Hat Enterprise Linux (RHEL) node that can be moved across multiple AWS Availability Zones within the same AWS region.
Prerequisites
- You have a Red Hat Customer Portal account.
- You have created an AWS account and set up AWS resources. See Setting Up with Amazon EC2 for more information.
- You have configured a cluster.
- Your cluster nodes have access to the RHEL HA repositories. For more information, see Installing the High Availability packages and agents.
- You have set up the AWS CLI. For instructions, see Installing the AWS CLI.
An Identity and Access Management (IAM) user is configured on your cluster and has the following permissions:
- Modify routing tables
- Create security groups
- Create IAM policies and roles
Procedure
Install the resource-agents package.
# yum install resource-agents
Optional: View the aws-vpc-move-ip description. This shows the options and default operations for this agent.
# pcs resource describe aws-vpc-move-ip
Set up an OverlayIPAgent IAM policy for the IAM user.
- In the AWS console, navigate to Services → IAM → Policies → Create Policy, and name the policy OverlayIPAgent.
- Input the following configuration, and change the <region>, <account-id>, and <ClusterRouteTableID> values to correspond with your cluster.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1424870324000",
            "Effect": "Allow",
            "Action": "ec2:DescribeRouteTables",
            "Resource": "*"
        },
        {
            "Sid": "Stmt1424860166260",
            "Action": [
                "ec2:CreateRoute",
                "ec2:ReplaceRoute"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:ec2:<region>:<account-id>:route-table/<ClusterRouteTableID>"
        }
    ]
}
In the AWS console, disable the Source/Destination Check function on all nodes in the cluster.
To do this, right-click each node → Networking → Change Source/Destination Checks. In the pop-up message that appears, click Yes, Disable.
Create a route for the cluster. To do so, use the following command on one node in the cluster:
# aws ec2 create-route --route-table-id <ClusterRouteTableID> --destination-cidr-block <NewCIDRblockIP/NetMask> --instance-id <ClusterNodeID>
In the command, replace the values as follows:
- ClusterRouteTableID: The route table ID for the existing cluster VPC route table.
- NewCIDRblockIP/NetMask: A new IP address and netmask outside of the VPC classless inter-domain routing (CIDR) block. For example, if the VPC CIDR block is 172.31.0.0/16, the new IP address/netmask can be 192.168.0.15/32.
- ClusterNodeID: The instance ID for another node in the cluster.
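With the placeholders filled in, the create-route call might look like the following sketch, which prints the command for review. The route table ID and instance ID below are hypothetical; the overlay CIDR 192.168.0.15/32 matches the example given in the text.

```shell
# Hypothetical route table and instance IDs; the overlay CIDR 192.168.0.15/32
# matches the example in the text above.
ROUTE_TABLE_ID="rtb-0123456789abcdef0"
NODE_ID="i-0123456789abcdef0"

# Print the command for review; remove the echo to execute it.
CMD="aws ec2 create-route --route-table-id ${ROUTE_TABLE_ID} \
--destination-cidr-block 192.168.0.15/32 --instance-id ${NODE_ID}"
echo "${CMD}"
```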
- On one of the nodes in the cluster, create an aws-vpc-move-ip resource that uses a free IP address that is accessible to the client. The following example creates a resource named vpcip that uses the IP address 192.168.0.15.
# pcs resource create vpcip aws-vpc-move-ip ip=192.168.0.15 interface=eth0 routing_table=<ClusterRouteTableID>
On all nodes in the cluster, edit the /etc/hosts file, and add a line with the IP address of the newly created resource. For example:
192.168.0.15 vpcip
Verification
Test the failover ability of the new aws-vpc-move-ip resource:
# pcs resource move vpcip
If the failover succeeded, remove the automatically created constraint after the move of the vpcip resource:
# pcs resource clear vpcip