Chapter 4. Deploying Red Hat CloudForms


4.1. Deploying Red Hat CloudForms on OpenShift Container Platform

4.1.1. Introduction

The OpenShift Container Platform installer includes the Ansible role openshift-management and playbooks for deploying Red Hat CloudForms 4.6 (CloudForms Management Engine 5.9, or CFME) on OpenShift Container Platform.

Warning

The current implementation is incompatible with the Technology Preview deployment process of Red Hat CloudForms 4.5 as described in OpenShift Container Platform 3.6 documentation.

When deploying Red Hat CloudForms on OpenShift Container Platform, there are two major decisions to make:

  1. Do you want an external or a containerized (also referred to as podified) PostgreSQL database?
  2. Which storage class will back your persistent volumes (PVs)?

For the first decision, you can deploy Red Hat CloudForms in one of two ways, depending on the location of the PostgreSQL database to be used by Red Hat CloudForms:

Deployment Variant       | Description
-------------------------|------------
Fully containerized      | All application services and the PostgreSQL database run as pods on OpenShift Container Platform.
External database        | The application uses an externally hosted PostgreSQL database server, while all other services run as pods on OpenShift Container Platform.

For the second decision, the openshift-management role provides customization options for overriding many default deployment parameters. This includes the following storage class options to back your PVs:

Storage Class            | Description
-------------------------|------------
NFS (default)            | Local NFS, served from a host in the cluster.
NFS External             | An external NFS server, such as a storage appliance.
Cloud Provider           | Automatic storage provisioning from your cloud provider (Google Compute Engine, Amazon Web Services, or Microsoft Azure).
Preconfigured (advanced) | Assumes that you have created all required storage ahead of time.

Topics in this guide include the requirements for running Red Hat CloudForms on OpenShift Container Platform, descriptions of the available configuration variables, and instructions on running the installer either during your initial OpenShift Container Platform installation or after your cluster has been provisioned.

4.2. Requirements for Red Hat CloudForms on OpenShift Container Platform

 
The default requirements are listed in the table below. These can be overridden by customizing template parameters.

Important

The application performance will suffer, or the application may even fail to deploy, if these requirements are not satisfied.

Table 4.1. Default Requirements

Item                | Requirement | Description                                   | Customization Parameter
--------------------|-------------|-----------------------------------------------|------------------------
Application Memory  | ≥ 4.0 Gi    | Minimum required memory for the application   | APPLICATION_MEM_REQ
Application Storage | ≥ 5.0 Gi    | Minimum PV size required for the application  | APPLICATION_VOLUME_CAPACITY
PostgreSQL Memory   | ≥ 6.0 Gi    | Minimum required memory for the database      | POSTGRESQL_MEM_REQ
PostgreSQL Storage  | ≥ 15.0 Gi   | Minimum PV size required for the database     | DATABASE_VOLUME_CAPACITY
Cluster Hosts       | ≥ 3         | Number of hosts in your cluster               | N/A

To sum up these requirements:

  • Your cluster must have at least three hosts.
  • Your cluster nodes must have sufficient memory available (at least 4.0 Gi for the application and 6.0 Gi for the database).
  • You must have at least 20.0 Gi of storage available (5.0 Gi for the application PV and 15.0 Gi for the database PV), either locally or on your cloud provider.
  • PV sizes can be changed by providing override values to template parameters, as shown in the example below.
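
For example, the following minimal inventory snippet (a sketch only; the values are illustrative and simply need to meet or exceed the defaults in Table 4.1) raises both PV sizes above the defaults:

[OSEv3:vars]
openshift_management_template_parameters={'APPLICATION_VOLUME_CAPACITY': '10Gi', 'DATABASE_VOLUME_CAPACITY': '25Gi'}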

4.3. Configuring Role Variables

4.3.1. Overview

The following sections describe role variables that may be used in your Ansible inventory file, which is used to control the behavior of the Red Hat CloudForms installation when running the installer.

4.3.2. General Variables

Variable                                 | Required | Default                       | Description
-----------------------------------------|----------|-------------------------------|------------
openshift_management_install_management  | No       | false                         | Boolean, set to true to install the application.
openshift_management_app_template        | Yes      | cfme-template                 | The deployment variant of Red Hat CloudForms to install. Set cfme-template for a containerized database or cfme-template-ext-db for an external database.
openshift_management_project             | No       | openshift-management          | Namespace (project) for the Red Hat CloudForms installation.
openshift_management_project_description | No       | CloudForms Management Engine  | Namespace (project) description.
openshift_management_username            | No       | admin                         | Default management user name. Changing this value does not change the user name; only change this value if you have changed the name already and are running integration scripts (such as the script to add container providers).
openshift_management_password            | No       | smartvm                       | Default management password. Changing this value does not change the password; only change this value if you have changed the password already and are running integration scripts (such as the script to add container providers).
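
Taken together, a minimal [OSEv3:vars] snippet that enables the installation using the defaults from the table above (the project name is shown only for illustration, since openshift-management is already the default) looks like this:

[OSEv3:vars]
openshift_management_install_management=true
openshift_management_app_template=cfme-template
openshift_management_project=openshift-management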

4.3.3. Customizing Template Parameters

You can use the openshift_management_template_parameters Ansible role variable to specify any template parameters you want to override in the application or PV templates.

For example, if you wanted to reduce the memory requirement of the PostgreSQL pod, then you could set the following:

openshift_management_template_parameters={'POSTGRESQL_MEM_REQ': '1Gi'}

When the Red Hat CloudForms template is processed, 1Gi will be used for the value of the POSTGRESQL_MEM_REQ template parameter.

Not all template parameters are present in both template variants (containerized or external database). For example, while the podified database template has a POSTGRESQL_MEM_REQ parameter, no such parameter is present in the external database template, because no database pod is deployed in that variant and the information is therefore not needed.

Therefore, be very careful if you are overriding template parameters. Including parameters not defined in a template will cause errors. If you do receive an error during the Ensure the Management App is created task, run the uninstall scripts first before running the installer again.
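
For reference, a typical recovery sequence after such an error, using the playbooks described later in Running the Installer and Uninstalling Red Hat CloudForms, looks like the following:

$ cd /usr/share/ansible/openshift-ansible
$ ansible-playbook -v [-i /path/to/inventory] \
    playbooks/openshift-management/uninstall.yml
$ ansible-playbook -v [-i /path/to/inventory] \
    playbooks/openshift-management/config.yml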

4.3.4. Database Variables

4.3.4.1. Containerized (Podified) Database

Any POSTGRES_* or DATABASE_* template parameters in the cfme-template.yaml file may be customized through the openshift_management_template_parameters hash in your inventory file.

4.3.4.2. External Database

Any POSTGRES_* or DATABASE_* template parameters in the cfme-template-ext-db.yaml file may be customized through the openshift_management_template_parameters hash in your inventory file.

External PostgreSQL databases require you to provide database connection parameters. You must set the required connection keys in the openshift_management_template_parameters parameter in your inventory. The following keys are required:

  • DATABASE_USER
  • DATABASE_PASSWORD
  • DATABASE_IP
  • DATABASE_PORT (Most PostgreSQL servers run on port 5432)
  • DATABASE_NAME
Note

Ensure your external database is running PostgreSQL 9.5 or you may not be able to deploy the CloudForms application successfully.
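
To confirm the server version before deploying, you can query it with a PostgreSQL client. This is only a sketch; the connection values shown are the same placeholders used in the inventory example below:

$ psql -h 10.10.10.10 -p 5432 -U root -d cfme -c 'SHOW server_version;'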

Your inventory would contain lines similar to the following:

[OSEv3:vars]
openshift_management_app_template=cfme-template-ext-db 1
openshift_management_template_parameters={'DATABASE_USER': 'root', 'DATABASE_PASSWORD': 'mypassword', 'DATABASE_IP': '10.10.10.10', 'DATABASE_PORT': '5432', 'DATABASE_NAME': 'cfme'}
1  Set the openshift_management_app_template parameter to cfme-template-ext-db.

4.3.5. Storage Class Variables

Variable                                            | Required | Default   | Description
----------------------------------------------------|----------|-----------|------------
openshift_management_storage_class                  | No       | nfs       | Storage type to use. Options are nfs, nfs_external, preconfigured, or cloudprovider.
openshift_management_storage_nfs_external_hostname  | No       | false     | If you are using an external NFS server, such as a NetApp appliance, you must set the host name here. Leave the value as false if you are not using external NFS. Additionally, external NFS requires that you create the NFS exports that will back the application PV and optionally the database PV.
openshift_management_storage_nfs_base_dir           | No       | /exports/ | If you are using external NFS, you can set the base path to the exports location here. For local NFS, you can also change this value if you want to change the default path used for local NFS exports.
openshift_management_storage_nfs_local_hostname     | No       | false     | If you do not have an [nfs] group in your inventory, or simply want to manually define the local NFS host in your cluster, set this parameter to the host name of the preferred NFS server. The server must be part of your OpenShift Container Platform cluster.

4.3.5.1. NFS (Default)

The NFS storage class is best suited for proof-of-concept and test deployments. It is also the default storage class for deployments. No additional configuration is required for this choice.

This storage class configures NFS on a cluster host (by default, the first master in the inventory file) to back the required PVs. The application requires a PV, and the database (which may be hosted externally) may require a second. PV minimum required sizes are 5GiB for the Red Hat CloudForms application, and 15GiB for the PostgreSQL database (20GiB minimum available space on a volume or partition if used specifically for NFS purposes).

Customization is provided through the following role variables:

  • openshift_management_storage_nfs_base_dir
  • openshift_management_storage_nfs_local_hostname
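
For example, a minimal sketch (the host name is a placeholder) that pins local NFS to a specific cluster host and keeps the default export path would be:

[OSEv3:vars]
openshift_management_storage_class=nfs
openshift_management_storage_nfs_base_dir=/exports/
openshift_management_storage_nfs_local_hostname=master1.example.com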

4.3.5.2. NFS External

External NFS relies on pre-configured NFS servers to provide exports for the required PVs. For external NFS, you must have a cfme-app export and, optionally, a cfme-db export (for a containerized database).

Configuration is provided through the following role variables:

  • openshift_management_storage_nfs_external_hostname
  • openshift_management_storage_nfs_base_dir

The openshift_management_storage_nfs_external_hostname parameter must be set to the host name or IP of your external NFS server.

If /exports is not the parent directory of your exports, then you must set the base directory via the openshift_management_storage_nfs_base_dir parameter.

For example, if your server export is /exports/hosted/prod/cfme-app, then you must set openshift_management_storage_nfs_base_dir=/exports/hosted/prod.
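
Putting the two variables together, an example inventory snippet for this case (host name and path are placeholders) would be:

[OSEv3:vars]
openshift_management_storage_class=nfs_external
openshift_management_storage_nfs_external_hostname=nfs.example.com
openshift_management_storage_nfs_base_dir=/exports/hosted/prod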

4.3.5.3. Cloud Provider

If you are using OpenShift Container Platform cloud provider integration for your storage class, Red Hat CloudForms can also use the cloud provider storage to back its required PVs. For this functionality to work, you must have configured the openshift_cloudprovider_kind variable (for AWS or GCE) and all associated parameters specific to your chosen cloud provider.

When the application is created using this storage class, the required PVs are automatically provisioned using the configured cloud provider storage integration.

There are no additional variables to configure the behavior of this storage class.
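
As a sketch, assuming an AWS-backed cluster whose cloud provider integration has already been configured, the storage-related inventory lines would simply be:

[OSEv3:vars]
# Cloud provider credentials and related variables must already be configured elsewhere in the inventory.
openshift_cloudprovider_kind=aws
openshift_management_storage_class=cloudprovider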

4.3.5.4. Preconfigured (Advanced)

The preconfigured storage class implies that you know exactly what you are doing and that all storage requirements have been taken care of ahead of time. Typically this means that you have already created the correctly sized PVs. The installer will do nothing to modify any storage settings.

There are no additional variables to configure the behavior of this storage class.

4.4. Running the Installer

4.4.1. Deploying Red Hat CloudForms During or After OpenShift Container Platform Installation

You can choose to deploy Red Hat CloudForms either during initial OpenShift Container Platform installation or after the cluster has been provisioned:

  1. Ensure that openshift_management_install_management is set to true in your inventory file under the [OSEv3:vars] section:

    [OSEv3:vars]
    openshift_management_install_management=true
  2. Set any other Red Hat CloudForms role variables in your inventory file as described in Configuring Role Variables. Resources to assist in this are provided in Example Inventory Files.
  3. Choose which playbook to run depending on whether OpenShift Container Platform is already provisioned:

    1. If you want to install Red Hat CloudForms at the same time you install your OpenShift Container Platform cluster, call the standard config.yml playbook as described in Running the Installation Playbooks to begin the OpenShift Container Platform cluster and Red Hat CloudForms installation.
    2. If you want to install Red Hat CloudForms on an already provisioned OpenShift Container Platform cluster, change to the playbook directory and call the Red Hat CloudForms playbook directly to begin the installation:

      $ cd /usr/share/ansible/openshift-ansible
      $ ansible-playbook -v [-i /path/to/inventory] \
          playbooks/openshift-management/config.yml

4.4.2. Example Inventory Files

The following sections show example snippets of inventory files with various configurations of Red Hat CloudForms on OpenShift Container Platform to help you get started.

Note

See Configuring Role Variables for complete variable descriptions.

4.4.2.1. All Defaults

This example is the simplest, using all of the default values and choices. This results in a fully-containerized (podified) Red Hat CloudForms installation. All application components, as well as the PostgreSQL database, are created as pods in OpenShift Container Platform:

[OSEv3:vars]
openshift_management_app_template=cfme-template

4.4.2.2. External NFS Storage

This example is the same as the previous one, except that instead of using local NFS services in the cluster, it uses an existing, external NFS server (such as a storage appliance). Note the two new parameters:

[OSEv3:vars]
openshift_management_app_template=cfme-template
openshift_management_storage_class=nfs_external 1
openshift_management_storage_nfs_external_hostname=nfs.example.com 2
1  Set to nfs_external.
2  Set to the host name of the NFS server.

If the external NFS host exports directories under a different parent directory, such as /exports/hosted/prod, add the following additional variable:

openshift_management_storage_nfs_base_dir=/exports/hosted/prod

4.4.2.3. Override PV Sizes

This example overrides the persistent volume (PV) sizes. PV sizes must be set via openshift_management_template_parameters, which ensures that the application and database are able to make claims on created PVs without interfering with each other:

[OSEv3:vars]
openshift_management_app_template=cfme-template
openshift_management_template_parameters={'APPLICATION_VOLUME_CAPACITY': '10Gi', 'DATABASE_VOLUME_CAPACITY': '25Gi'}

4.4.2.4. Override Memory Requirements

In a test or proof-of-concept installation, you may need to reduce the application and database memory requirements to fit within your capacity. Note that reducing memory limits can result in reduced performance or a complete failure to initialize the application:

[OSEv3:vars]
openshift_management_app_template=cfme-template
openshift_management_template_parameters={'APPLICATION_MEM_REQ': '3000Mi', 'POSTGRESQL_MEM_REQ': '1Gi', 'ANSIBLE_MEM_REQ': '512Mi'}

This example instructs the installer to process the application template with the parameter APPLICATION_MEM_REQ set to 3000Mi, POSTGRESQL_MEM_REQ set to 1Gi, and ANSIBLE_MEM_REQ set to 512Mi.

These parameters can be combined with the parameters displayed in the previous example Override PV Sizes.
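
For example, a combined override (merging the values from this example and the previous one into a single hash) would look like:

[OSEv3:vars]
openshift_management_app_template=cfme-template
openshift_management_template_parameters={'APPLICATION_VOLUME_CAPACITY': '10Gi', 'DATABASE_VOLUME_CAPACITY': '25Gi', 'APPLICATION_MEM_REQ': '3000Mi', 'POSTGRESQL_MEM_REQ': '1Gi', 'ANSIBLE_MEM_REQ': '512Mi'}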

4.4.2.5. External PostgreSQL Database

To use an external database, you must change the openshift_management_app_template parameter value to cfme-template-ext-db.

Additionally, database connection information must be supplied using the openshift_management_template_parameters variable. See Configuring Role Variables for more details.

[OSEv3:vars]
openshift_management_app_template=cfme-template-ext-db
openshift_management_template_parameters={'DATABASE_USER': 'root', 'DATABASE_PASSWORD': 'mypassword', 'DATABASE_IP': '10.10.10.10', 'DATABASE_PORT': '5432', 'DATABASE_NAME': 'cfme'}
Important

Ensure you are running PostgreSQL 9.5 or you may not be able to deploy the application successfully.

4.5. Enabling Container Provider Integration

4.5.1. Adding a Single Container Provider

After deploying Red Hat CloudForms on OpenShift Container Platform as described in Running the Installer, there are two methods for enabling container provider integration. You can manually add OpenShift Container Platform as a container provider, or you can try the playbooks included with this role.

4.5.1.1. Adding Manually

See the Red Hat CloudForms documentation for steps on manually adding your OpenShift Container Platform cluster as a container provider.

4.5.1.2. Adding Automatically

Automated container provider integration can be accomplished using the add_container_provider.yml playbook included with this role.

This playbook:

  1. Gathers the necessary authentication secrets.
  2. Finds the public routes to the Red Hat CloudForms application and the cluster API.
  3. Makes a REST call to add the OpenShift Container Platform cluster as a container provider.

Change to the playbook directory and run the container provider playbook:

$ cd /usr/share/ansible/openshift-ansible
$ ansible-playbook -v [-i /path/to/inventory] \
    playbooks/openshift-management/add_container_provider.yml

4.5.2. Multiple Container Providers

As well as providing playbooks to integrate your current OpenShift Container Platform cluster into your Red Hat CloudForms deployment, this role includes a script which allows you to add multiple container platforms as container providers in any arbitrary Red Hat CloudForms server. The container platforms can be OpenShift Container Platform or OpenShift Origin.

Using the multiple provider script requires manual configuration and setting an EXTRA_VARS parameter on the CLI when running the playbook.

4.5.2.1. Preparing the Script

To prepare the multiple provider script, complete the following manual configuration:

  1. Copy the /usr/share/ansible/openshift-ansible/roles/openshift_management/files/examples/container_providers.yml example somewhere, such as /tmp/cp.yml. You will be modifying this file.
  2. If you changed your Red Hat CloudForms user name or password, update the hostname, user, and password parameters in the management_server key in the container_providers.yml file that you copied.
  3. Fill in an entry under the container_providers key for each container platform cluster you want to add as container providers.

    1. The following parameters must be configured:

      • auth_key - This is the token of a service account that has cluster-admin privileges.
      • hostname - This is the host name that points to the cluster API. Each container provider must have a unique host name.
      • name - This is the name of the cluster to be displayed in the Red Hat CloudForms server container providers overview page. This must be unique.
      Tip

      To obtain the auth_key bearer token from your clusters:

      $ oc serviceaccounts get-token -n management-infra management-admin
    2. The following parameters may be optionally configured:

      • port - Update this key if your container platform cluster runs the API on a port other than 8443.
      • endpoint - You may enable SSL verification (verify_ssl) or change the validation setting to ssl-with-validation. Support for custom trusted CA certificates is not currently available.
4.5.2.1.1. Example

As an example, consider the following scenario:

  • You copied the container_providers.yml file to /tmp/cp.yml.
  • You want to add two OpenShift Container Platform clusters.
  • Your Red Hat CloudForms server runs on mgmt.example.com.

For this scenario, you would customize /tmp/cp.yml as follows:

container_providers:
  - connection_configurations:
      - authentication: {auth_key: "<token>", authtype: bearer, type: AuthToken} # 1
        endpoint: {role: default, security_protocol: ssl-without-validation, verify_ssl: 0}
    hostname: "<provider_hostname1>"
    name: <display_name1>
    port: 8443
    type: "ManageIQ::Providers::Openshift::ContainerManager"
  - connection_configurations:
      - authentication: {auth_key: "<token>", authtype: bearer, type: AuthToken} # 2
        endpoint: {role: default, security_protocol: ssl-without-validation, verify_ssl: 0}
    hostname: "<provider_hostname2>"
    name: <display_name2>
    port: 8443
    type: "ManageIQ::Providers::Openshift::ContainerManager"
management_server:
  hostname: "<hostname>"
  user: <user_name>
  password: <password>
1 2  Replace <token> with the management token for the corresponding cluster.

4.5.2.2. Running the Playbook

To run the multiple-providers integration script, you must provide the path to the container providers configuration file as an EXTRA_VARS parameter to the ansible-playbook command. Use the -e (or --extra-vars) parameter to set container_providers_config to the configuration file path. Change to the playbook directory and run the playbook:

$ cd /usr/share/ansible/openshift-ansible
$ ansible-playbook -v [-i /path/to/inventory] \
    -e container_providers_config=/tmp/cp.yml \
    playbooks/openshift-management/add_many_container_providers.yml

After the playbook completes, you should find two new container providers in your Red Hat CloudForms service. Navigate to Compute → Containers → Providers to see an overview.

4.5.3. Refreshing Providers

After adding either a single or multiple container providers, the new provider(s) must be refreshed in Red Hat CloudForms to get all the latest data about the container provider and the containers being managed. This involves navigating to each provider in the Red Hat CloudForms web console and clicking a refresh button for each.

See the Red Hat CloudForms documentation for steps on refreshing providers.

4.6. Uninstalling Red Hat CloudForms

4.6.1. Running the Uninstall Playbook

To uninstall and erase a deployed Red Hat CloudForms installation from OpenShift Container Platform, change to the playbook directory and run the uninstall.yml playbook:

$ cd /usr/share/ansible/openshift-ansible
$ ansible-playbook -v [-i /path/to/inventory] \
    playbooks/openshift-management/uninstall.yml
Important

NFS export definitions and data stored on NFS exports are not automatically removed. You are urged to manually erase any data from old application or database deployments before attempting to initialize a new deployment.
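
As a minimal sketch, erasing old data on the NFS host might look like the following. The export paths are placeholders; substitute the actual directories backing your application and database PVs, which depend on your openshift_management_storage_nfs_base_dir setting:

# Run on the NFS host that backs the PVs.
$ rm -rf /exports/<application_export>/*
$ rm -rf /exports/<database_export>/*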

4.6.2. Troubleshooting

Failure to erase old PostgreSQL data can result in cascading errors, causing the postgresql pod to enter a crashloopbackoff state and blocking the cfme pod from ever starting. The crashloopbackoff is caused by incorrect file permissions on the database NFS export created during a previous deployment.

To continue, erase all data from the PostgreSQL export and delete the pod (not the deployer pod). For example, if you had the following pods:

$ oc get pods
NAME                 READY     STATUS             RESTARTS   AGE
httpd-1-cx7fk        1/1       Running            1          21h
cfme-0               0/1       Running            1          21h
memcached-1-vkc7p    1/1       Running            1          21h
postgresql-1-deploy  1/1       Running            1          21h
postgresql-1-6w2t4   0/1       CrashLoopBackOff   1          21h

Then you would:

  1. Erase the data from the database NFS export.
  2. Run:

    $ oc delete pod postgresql-1-6w2t4

The PostgreSQL deployer pod will try to scale up a new postgresql pod to replace the one you deleted. After the postgresql pod is running, the cfme pod will stop blocking and begin application initialization.
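
You can watch the replacement pod come up and confirm that the cfme pod eventually reports Running:

$ oc get pods -w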
