Chapter 4. Configuring Your Inventory File


4.1. Customizing Inventory Files for Your Cluster

Ansible inventory files describe the details about the hosts in your cluster and the cluster configuration details for your OpenShift Container Platform installation. The OpenShift Container Platform installation playbooks read your inventory file to know where and how to install OpenShift Container Platform across your set of hosts.

Note

See Ansible documentation for details about the format of an inventory file, including basic details about YAML syntax.

When you install the openshift-ansible RPM package as described in Host preparation, Ansible dependencies create a file at the default location of /etc/ansible/hosts. However, the file is simply the default Ansible example and has no variables related specifically to OpenShift Container Platform configuration. To successfully install OpenShift Container Platform, you must replace the default contents of the file with your own configuration based on your cluster topology and requirements.

The following sections describe commonly-used variables to set in your inventory file during cluster installation. Many of the Ansible variables described are optional. For development environments, you can accept the default values for the required parameters, but you must select appropriate values for them in production environments.

You can review Example Inventory Files for various examples to use as a starting point for your cluster installation.

Note

Images require a version number policy in order to maintain updates. See the Image Version Tag Policy section in the Architecture Guide for more information.

4.2. Configuring Cluster Variables

To assign global cluster environment variables during the Ansible installation, add them to the [OSEv3:vars] section of the /etc/ansible/hosts file. You must place each parameter value on a separate line. For example:

[OSEv3:vars]

openshift_master_identity_providers=[{'name': 'htpasswd_auth',
'login': 'true', 'challenge': 'true',
'kind': 'HTPasswdPasswordIdentityProvider',}]

openshift_master_default_subdomain=apps.test.example.com
Important

If a parameter value in the Ansible inventory file contains special characters, such as #, { or }, you must double-escape the value (that is, enclose the value in both single and double quotation marks). For example, to use mypasswordwith###hashsigns as a value for the variable openshift_cloudprovider_openstack_password, declare it as openshift_cloudprovider_openstack_password='"mypasswordwith###hashsigns"' in the Ansible host inventory file.

The following tables describe global cluster variables for use with the Ansible installer:

Table 4.1. General Cluster Variables
Variable | Purpose

ansible_ssh_user

This variable sets the SSH user for the installer to use and defaults to root. This user must allow SSH-based authentication without requiring a password. If using SSH key-based authentication, then the key must be managed by an SSH agent.

ansible_become

If ansible_ssh_user is not root, this variable must be set to true and the user must be configured for passwordless sudo.
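
For example, a minimal sketch for connecting as a non-root user; the user name is illustrative:

ansible_ssh_user=ec2-user
ansible_become=true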

debug_level

This variable sets which INFO messages are logged to systemd-journald.service. Set one of the following:

  • 0 to log errors and warnings only
  • 2 to log normal information (This is the default level.)
  • 4 to log debugging-level information
  • 6 to log API-level debugging information (request / response)
  • 8 to log body-level API debugging information

For more information on debug log levels, see Configuring Logging Levels.

openshift_clock_enabled

Whether to enable Network Time Protocol (NTP) on cluster nodes. The default value is true.

If the chrony package is installed, it is configured to provide NTP service. If the chrony package is not installed, the installation playbooks install and configure the ntp package to provide NTP service.

Important

To prevent masters and nodes in the cluster from going out of sync, do not change the default value of this parameter.

openshift_master_admission_plugin_config

This variable sets the admission plug-in configuration parameters as arbitrary JSON values in your inventory hosts file. For example:

openshift_master_admission_plugin_config={"ClusterResourceOverride":{"configuration":{"apiVersion":"v1","kind":"ClusterResourceOverrideConfig","memoryRequestToLimitPercent":"25","cpuRequestToLimitPercent":"25","limitCPUToMemoryPercent":"200"}}}

The default parameter value is:

openshift_master_admission_plugin_config={"openshift.io/ImagePolicy":{"configuration":{"apiVersion":"v1","executionRules":[{"matchImageAnnotations":[{"key":"images.openshift.io/deny-execution","value":"true"}],"name":"execution-denied","onResources":[{"resource":"pods"},{"resource":"builds"}],"reject":true,"skipOnResolutionFailure":true}],"kind":"ImagePolicyConfig"}}}

Important

You must include the default openshift_master_admission_plugin_config value even if you need to add a custom setting.

openshift_master_audit_config

This variable enables API service auditing. See Audit Configuration for more information.
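
For example, a minimal sketch that enables audit logging to a file; the path and retention values are illustrative and should be adjusted for your environment:

openshift_master_audit_config={"enabled": true, "auditFilePath": "/var/lib/origin/audit-ocp.log", "maximumFileRetentionDays": 14, "maximumFileSizeMegabytes": 500, "maximumRetainedFiles": 5}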

openshift_master_audit_policyfile

Provide the location of an audit policy file. See Audit Policy Configuration for more information.

openshift_master_cluster_hostname

This variable overrides the host name for the cluster, which defaults to the host name of the master.

openshift_master_cluster_public_hostname

This variable overrides the public host name for the cluster, which defaults to the host name of the master. If you use an external load balancer, specify the address of the external load balancer.

For example:

openshift_master_cluster_public_hostname=openshift-ansible.public.example.com

openshift_master_cluster_method

Optional. This variable defines the HA method when deploying multiple masters. Supports the native method. See Multiple Masters for more information.

openshift_rolling_restart_mode

This variable enables rolling restarts of HA masters (i.e., masters are taken down one at a time) when running the upgrade playbook directly. It defaults to services, which allows rolling restarts of services on the masters. It can instead be set to system, which enables rolling, full restarts of the master nodes.

A rolling restart of the masters can be necessary to apply additional changes using the supplied Ansible hooks during the upgrade. Depending on the tasks you choose to perform, you might want to reboot the host to restart your services.
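
For example, to perform full system restarts of the masters during an upgrade instead of service-only restarts:

openshift_rolling_restart_mode=system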

openshift_master_identity_providers

This variable sets the identity provider. The default value is Deny All. If you use a supported identity provider, configure OpenShift Container Platform to use it. You can configure multiple identity providers.

openshift_master_named_certificates

These variables are used to configure custom certificates which are deployed as part of the installation. See Configuring Custom Certificates for more information.

openshift_master_overwrite_named_certificates

openshift_hosted_router_certificate

Provide the location of the custom certificates for the hosted router.

openshift_master_ca_certificate

Provide the single certificate and key that signs the OpenShift Container Platform certificates. See Redeploying a New or Custom OpenShift Container Platform CA for more information.

openshift_additional_ca

If the certificate for your openshift_master_ca_certificate parameter is signed by an intermediate certificate, provide the bundled certificate that contains the full chain of intermediate and root certificates for the CA. See Redeploying a New or Custom OpenShift Container Platform CA for more information.

openshift_redeploy_service_signer

If the parameter is set to false, the service signer is not redeployed when you run the openshift-master/redeploy-certificates.yml playbook. The default value is true.

openshift_hosted_registry_cert_expire_days

Validity of the auto-generated registry certificate in days. Defaults to 730 (2 years).

openshift_ca_cert_expire_days

Validity of the auto-generated CA certificate in days. Defaults to 1825 (5 years).

openshift_master_cert_expire_days

Validity of the auto-generated master certificate in days. Defaults to 730 (2 years).

etcd_ca_default_days

Validity of the auto-generated external etcd certificates in days. Controls validity for etcd CA, peer, server and client certificates. Defaults to 1825 (5 years).
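
For example, the following entries restate the documented defaults; increase them only if your environment requires longer-lived certificates:

openshift_hosted_registry_cert_expire_days=730
openshift_ca_cert_expire_days=1825
openshift_master_cert_expire_days=730
etcd_ca_default_days=1825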

openshift_certificate_expiry_warning_days

Halt upgrades to clusters that have certificates expiring in this many days or fewer. Defaults to 365 (1 year).

openshift_certificate_expiry_fail_on_warn

Whether upgrade fails if the auto-generated certificates are not valid for the period specified by the openshift_certificate_expiry_warning_days parameter. Defaults to True.

os_firewall_use_firewalld

Set to true to use firewalld instead of the default iptables. Not available on RHEL Atomic Host. See the Configuring the Firewall section for more information.

openshift_master_session_name

These variables override defaults for session options in the OAuth configuration. See Configuring Session Options for more information.

openshift_master_session_max_seconds

openshift_master_session_auth_secrets

openshift_master_session_encryption_secrets

openshift_master_image_policy_config

Sets imagePolicyConfig in the master configuration. See Image Configuration for details.

openshift_router_selector

Default node selector for automatically deploying router pods. See Configuring Node Host Labels for details.

openshift_registry_selector

Default node selector for automatically deploying registry pods. See Configuring Node Host Labels for details.

openshift_template_service_broker_namespaces

This variable enables the template service broker by specifying one or more namespaces whose templates will be served by the broker.

openshift_master_bootstrap_auto_approve

This variable enables TLS bootstrapping auto approval, which allows nodes to automatically join the cluster when provided a bootstrap credential. Set to true if you will enable the cluster auto-scaler on an Amazon Web Services (AWS) cluster. The default value is false.

ansible_service_broker_node_selector

Default node selector for automatically deploying Ansible service broker pods. Defaults to {"node-role.kubernetes.io/infra":"true"}. See Configuring Node Host Labels for details.

osm_default_node_selector

This variable overrides the node selector that projects will use by default when placing pods, which is defined by the projectConfig.defaultNodeSelector field in the master configuration file. This defaults to node-role.kubernetes.io/compute=true if undefined.
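
For example, to state the documented default explicitly; substitute your own label if your compute nodes use a different one:

osm_default_node_selector='node-role.kubernetes.io/compute=true'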

openshift_docker_additional_registries

OpenShift Container Platform adds the specified additional registry or registries to the docker configuration. These are the registries to search. If the registry requires access to a port other than 80, include the port number required in the form of <address>:<port>.

For example:

openshift_docker_additional_registries=example.com:443
Note

If you need to configure your cluster to use an alternate registry, set oreg_url rather than rely on openshift_docker_additional_registries.

openshift_docker_insecure_registries

OpenShift Container Platform adds the specified additional insecure registry or registries to the docker configuration. For any of these registries, secure sockets layer (SSL) is not verified. Can be set to the host name or IP address of the host. 0.0.0.0/0 is not a valid setting for the IP address.

openshift_docker_blocked_registries

OpenShift Container Platform adds the specified blocked registry or registries to the docker configuration to block the listed registries. Setting this to all blocks everything not in the other variables.

openshift_docker_ent_reg

An additional registry that is trusted by the container runtime, when openshift_deployment_type is set to openshift-enterprise. The default is registry.redhat.io. If you set openshift_docker_ent_reg='', then registry.redhat.io will not be added to the docker configuration.

openshift_metrics_hawkular_hostname

This variable sets the host name for integration with the metrics console by overriding metricsPublicURL in the master configuration for cluster metrics. If you alter this variable, ensure the host name is accessible via your router.

openshift_clusterid

This variable is a cluster identifier unique to the AWS Availability Zone. Using this avoids potential issues in Amazon Web Services (AWS) with multiple zones or multiple clusters. See Labeling Clusters for AWS for details.

openshift_encryption_config

Use this variable to configure datastore-layer encryption.

openshift_image_tag

Use this variable to specify a container image tag to install or configure.

openshift_pkg_version

Use this variable to specify an RPM version to install or configure.

Warning

If you modify the openshift_image_tag or the openshift_pkg_version variables after the cluster is set up, then an upgrade can be triggered, resulting in downtime.

  • If openshift_image_tag is set, its value is used for all hosts in system container environments, even those that have another version installed.
  • If openshift_pkg_version is set, its value is used for all hosts in RPM-based environments, even those that have another version installed.
Table 4.2. Networking Variables
Variable | Purpose

openshift_master_default_subdomain

This variable overrides the default subdomain to use for exposed routes. The value for this variable must consist of lower case alphanumeric characters or dashes (-). It must start with an alphabetic character, and end with an alphanumeric character.

os_sdn_network_plugin_name

This variable configures which OpenShift SDN plug-in to use for the pod network, which defaults to redhat/openshift-ovs-subnet for the standard SDN plug-in. Set the variable to redhat/openshift-ovs-multitenant to use the multitenant SDN plug-in.

osm_cluster_network_cidr

This variable overrides the SDN cluster network CIDR block. This is the network from which pod IPs are assigned. Specify a private block that does not conflict with existing network blocks in your infrastructure to which pods, nodes, or the master might require access. Defaults to 10.128.0.0/14 and cannot be arbitrarily re-configured after deployment, although certain changes to it can be made in the SDN master configuration.

openshift_portal_net

This variable configures the subnet in which services will be created in the OpenShift Container Platform SDN. Specify a private block that does not conflict with any existing network blocks in your infrastructure to which pods, nodes, or the master might require access, or the installation will fail. Defaults to 172.30.0.0/16, and cannot be re-configured after deployment. If changing from the default, avoid 172.17.0.0/16, which the docker0 network bridge uses by default, or modify the docker0 network.

osm_host_subnet_length

This variable specifies the size of the per host subnet allocated for pod IPs by OpenShift Container Platform SDN. Defaults to 9 which means that a subnet of size /23 is allocated to each host; for example, given the default 10.128.0.0/14 cluster network, this will allocate 10.128.0.0/23, 10.128.2.0/23, 10.128.4.0/23, and so on. This cannot be re-configured after deployment.
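
For example, the following entries restate the documented defaults for the pod network, service network, and per-host subnet size; review them before installation because they cannot be freely changed afterward:

osm_cluster_network_cidr=10.128.0.0/14
openshift_portal_net=172.30.0.0/16
osm_host_subnet_length=9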

openshift_node_proxy_mode

This variable specifies the service proxy mode to use: either iptables for the default, pure-iptables implementation, or userspace for the user space proxy.

openshift_use_flannel

This variable enables flannel as an alternative networking layer instead of the default SDN. If enabling flannel, disable the default SDN with the openshift_use_openshift_sdn variable. For more information, see Using Flannel.

openshift_use_openshift_sdn

Set to false to disable the OpenShift SDN plug-in.

openshift_sdn_vxlan_port

This variable sets the VXLAN port number for the cluster network. Defaults to 4789. See Changing the VXLAN PORT for the cluster network for more information.

openshift_node_sdn_mtu

This variable specifies the MTU size to use for OpenShift SDN. The value must be 50 bytes less than the MTU of the primary network interface of the node. For example, if the primary network interface has an MTU of 1500, this value will be 1450. The default value is 1450.

4.3. Configuring Deployment Type

Various defaults used throughout the playbooks and roles of the installer are based on the deployment type configuration, which is usually defined in an Ansible inventory file.

Ensure the openshift_deployment_type parameter in your inventory file’s [OSEv3:vars] section is set to openshift-enterprise to install the OpenShift Container Platform variant:

[OSEv3:vars]
openshift_deployment_type=openshift-enterprise

4.4. Configuring Host Variables

To assign environment variables to hosts during the Ansible installation, set them in the /etc/ansible/hosts file after the host entry in the [masters] or [nodes] sections. For example:

[masters]
ec2-52-6-179-239.compute-1.amazonaws.com openshift_public_hostname=ose3-master.public.example.com

The following table describes variables for use with the Ansible installer that can be assigned to individual host entries:

Table 4.3. Host Variables
Variable | Purpose

openshift_public_hostname

This variable overrides the system’s public host name. Use this for cloud installations, or for hosts on networks using network address translation (NAT).

openshift_public_ip

This variable overrides the system’s public IP address. Use this for cloud installations, or for hosts on networks using network address translation (NAT).

openshift_node_labels

This variable is deprecated; see Defining Node Groups and Host Mappings for the current method of setting node labels.

openshift_docker_options

This variable configures additional docker options in /etc/sysconfig/docker, such as options used in Managing Container Logs. It is recommended to use json-file.

The following example shows the configuration of Docker to use the json-file log driver, where Docker rotates between three 1 MB log files and signature verification is not required. When supplying additional options, ensure that you maintain the single quotation mark formatting:

OPTIONS=' --selinux-enabled --log-opt  max-size=1M --log-opt max-file=3 --insecure-registry 172.30.0.0/16 --log-driver=json-file --signature-verification=false'

openshift_schedulable

This variable configures whether the host is marked as a schedulable node, meaning that it is available for placement of new pods. See Configuring Schedulability on Masters.
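
For example, a sketch of a host entry that marks a node as unschedulable at install time; the host name is illustrative:

[nodes]
node3.example.com openshift_node_group_name='node-config-compute' openshift_schedulable=false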

openshift_node_problem_detector_install

This variable is used to activate the Node Problem Detector. If set to false, the default, the Node Problem Detector is not installed or started.

4.5. Defining Node Groups and Host Mappings

Node configurations are bootstrapped from the master. When the node boots and services are started, the node checks if a kubeconfig and other node configuration files exist before joining the cluster. If they do not, the node pulls the configuration from the master, then joins the cluster.

This process replaces administrators having to manually maintain the node configuration uniquely on each node host. Instead, the contents of a node host’s /etc/origin/node/node-config.yaml file are now provided by ConfigMaps sourced from the master.

4.5.1. Node ConfigMaps

The ConfigMaps for defining the node configurations must be available in the openshift-node project. ConfigMaps are also now the authoritative definition for node labels; the old openshift_node_labels value is effectively ignored.

By default during a cluster installation, the installer creates the following default ConfigMaps:

  • node-config-master
  • node-config-infra
  • node-config-compute

The following ConfigMaps are also created, which label nodes into multiple roles:

  • node-config-all-in-one
  • node-config-master-infra

The following ConfigMaps are CRI-O variants for each of the existing default node groups:

  • node-config-master-crio
  • node-config-infra-crio
  • node-config-compute-crio
  • node-config-all-in-one-crio
  • node-config-master-infra-crio
Important

You must not modify a node host’s /etc/origin/node/node-config.yaml file. Any changes are overwritten by the configuration that is defined in the ConfigMap the node uses.

4.5.2. Node Group Definitions

After installing the latest openshift-ansible package, you can view the default set of node group definitions in YAML format in the /usr/share/ansible/openshift-ansible/roles/openshift_facts/defaults/main.yml file:

openshift_node_groups:
  - name: node-config-master 1
    labels:
      - 'node-role.kubernetes.io/master=true' 2
    edits: [] 3
  - name: node-config-infra
    labels:
      - 'node-role.kubernetes.io/infra=true'
    edits: []
  - name: node-config-compute
    labels:
      - 'node-role.kubernetes.io/compute=true'
    edits: []
  - name: node-config-master-infra
    labels:
      - 'node-role.kubernetes.io/infra=true,node-role.kubernetes.io/master=true'
    edits: []
  - name: node-config-all-in-one
    labels:
      - 'node-role.kubernetes.io/infra=true,node-role.kubernetes.io/master=true,node-role.kubernetes.io/compute=true'
    edits: []
1
Node group name.
2
List of node labels associated with the node group. See Node Host Labels for details.
3
Any edits to the node group’s configuration.

If you do not set the openshift_node_groups variable in your inventory file’s [OSEv3:vars] group, these default values are used. However, if you want to set custom node groups, you must define the entire openshift_node_groups structure, including all planned node groups, in your inventory file.

The openshift_node_groups value is not merged with the default values, and you must translate the YAML definitions into a Python dictionary. You can then use the edits field to modify any node configuration variables by specifying key-value pairs.

Note

See Master and Node Configuration Files for reference on configurable node variables.

For example, the following entry in an inventory file defines groups named node-config-master, node-config-infra, and node-config-compute.

openshift_node_groups=[{'name': 'node-config-master', 'labels': ['node-role.kubernetes.io/master=true']}, {'name': 'node-config-infra', 'labels': ['node-role.kubernetes.io/infra=true']}, {'name': 'node-config-compute', 'labels': ['node-role.kubernetes.io/compute=true']}]

You can also define new node group names with other labels. The following entry in an inventory file defines groups named node-config-master, node-config-infra, node-config-compute, and node-config-compute-storage.

openshift_node_groups=[{'name': 'node-config-master', 'labels': ['node-role.kubernetes.io/master=true']}, {'name': 'node-config-infra', 'labels': ['node-role.kubernetes.io/infra=true']}, {'name': 'node-config-compute', 'labels': ['node-role.kubernetes.io/compute=true']}, {'name': 'node-config-compute-storage', 'labels': ['node-role.kubernetes.io/compute-storage=true']}]
  • You can use a list to modify multiple key-value pairs, such as modifying the node-config-compute group to add two parameters to the kubelet:
openshift_node_groups=[{'name': 'node-config-master', 'labels': ['node-role.kubernetes.io/master=true']}, {'name': 'node-config-infra', 'labels': ['node-role.kubernetes.io/infra=true']}, {'name': 'node-config-compute', 'labels': ['node-role.kubernetes.io/compute=true'], 'edits': [{ 'key': 'kubeletArguments.experimental-allocatable-ignore-eviction','value': ['true']}, {'key': 'kubeletArguments.eviction-hard', 'value': ['memory.available<1Ki']}]}]
  • You can also use a dictionary as a value, such as modifying the node-config-compute group to set perFSGroup to 512Mi:
openshift_node_groups=[{'name': 'node-config-master', 'labels': ['node-role.kubernetes.io/master=true']}, {'name': 'node-config-infra', 'labels': ['node-role.kubernetes.io/infra=true']}, {'name': 'node-config-compute', 'labels': ['node-role.kubernetes.io/compute=true'], 'edits': [{ 'key': 'volumeConfig.localQuota','value': {'perFSGroup':'512Mi'}}]}]

Whenever the openshift_node_group.yml playbook is run, the changes defined in the edits field will update the related ConfigMap (node-config-compute in this example), which will ultimately affect the node’s configuration file on the host.

Note

Running the openshift_node_group.yml playbook only updates new nodes. It cannot be run to update existing nodes in a cluster.

4.5.3. Mapping Hosts to Node Groups

To map which ConfigMap to use for which node host, all hosts defined in the [nodes] group of your inventory must be assigned to a node group using the openshift_node_group_name variable.

Important

Setting openshift_node_group_name per host to a node group is required for all cluster installations whether you use the default node group definitions and ConfigMaps or are customizing your own.

The value of openshift_node_group_name is used to select the ConfigMap that configures each node. For example:

[nodes]
master[1:3].example.com openshift_node_group_name='node-config-master'
infra-node1.example.com openshift_node_group_name='node-config-infra'
infra-node2.example.com openshift_node_group_name='node-config-infra'
node1.example.com openshift_node_group_name='node-config-compute'
node2.example.com openshift_node_group_name='node-config-compute'

If other custom ConfigMaps have been defined in openshift_node_groups, they can also be used. For example:

[nodes]
master[1:3].example.com openshift_node_group_name='node-config-master'
infra-node1.example.com openshift_node_group_name='node-config-infra'
infra-node2.example.com openshift_node_group_name='node-config-infra'
node1.example.com openshift_node_group_name='node-config-compute'
node2.example.com openshift_node_group_name='node-config-compute'
gluster[1:6].example.com openshift_node_group_name='node-config-compute-storage'

4.5.4. Node Host Labels

You can assign labels to node hosts during cluster installation. You can use these labels to determine the placement of pods onto nodes using the scheduler.

You must create your own custom node groups if you want to modify the default labels that are assigned to node hosts. You can no longer set the openshift_node_labels variable to change labels. See Node Group Definitions to modify the default node groups.

Other than node-role.kubernetes.io/infra=true (hosts using this group are also referred to as dedicated infrastructure nodes and discussed further in Configuring Dedicated Infrastructure Nodes), the actual label names and values are arbitrary and can be assigned however you see fit per your cluster’s requirements.

4.5.4.1. Pod Schedulability on Masters

Configure all hosts that you designate as masters during the installation process as nodes. By doing so, the masters are configured as part of the OpenShift SDN. You must add entries for the master hosts to the [nodes] section:

[nodes]
master[1:3].example.com openshift_node_group_name='node-config-master'

If you want to change the schedulability of a host post-installation, see Marking Nodes as Unschedulable or Schedulable.

4.5.4.2. Pod Schedulability on Nodes

Masters are marked as schedulable nodes by default, so the default node selector is set by default during cluster installations. The default node selector is defined in the master configuration file’s projectConfig.defaultNodeSelector field and determines which nodes projects use by default when placing pods. It is set to node-role.kubernetes.io/compute=true unless overridden using the osm_default_node_selector variable.

Important

If you accept the default node selector of node-role.kubernetes.io/compute=true during installation, ensure that your cluster’s non-master nodes are not all dedicated infrastructure nodes. In that scenario, application pods fail to deploy because no nodes with the node-role.kubernetes.io/compute=true label are available to match the default node selector when scheduling pods for projects.

See Setting the Cluster-wide Default Node Selector for steps on adjusting this setting post-installation if needed.

4.5.4.3. Configuring Dedicated Infrastructure Nodes

It is recommended for production environments that you maintain dedicated infrastructure nodes where the registry and router pods can run separately from pods used for user applications.

The openshift_router_selector and openshift_registry_selector Ansible settings determine the label selectors used when placing registry and router pods. They are set to node-role.kubernetes.io/infra=true by default:

# default selectors for router and registry services
# openshift_router_selector='node-role.kubernetes.io/infra=true'
# openshift_registry_selector='node-role.kubernetes.io/infra=true'

The registry and router are only able to run on node hosts with the node-role.kubernetes.io/infra=true label, which are then considered dedicated infrastructure nodes. Ensure that at least one node host in your OpenShift Container Platform environment has the node-role.kubernetes.io/infra=true label; you can use the default node-config-infra, which sets this label:

[nodes]
infra-node1.example.com openshift_node_group_name='node-config-infra'
Important

If no node in the [nodes] section matches the selector settings, the default router and registry deployments fail and their pods remain in Pending status.

If you do not intend to use OpenShift Container Platform to manage the registry and router, configure the following Ansible settings:

openshift_hosted_manage_registry=false
openshift_hosted_manage_router=false

If you use an image registry other than the default registry.redhat.io, you must specify the registry in the /etc/ansible/hosts file.

As described in Configuring Schedulability on Masters, master hosts are marked schedulable by default. If you label a master host with node-role.kubernetes.io/infra=true and have no other dedicated infrastructure nodes, the master hosts must also be marked as schedulable. Otherwise, the registry and router pods cannot be placed anywhere.

You can use the default node-config-master-infra node group to achieve this:

[nodes]
master.example.com openshift_node_group_name='node-config-master-infra'

4.6. Configuring Project Parameters

To configure the default project settings, configure the following variables in the /etc/ansible/hosts file:

Table 4.4. Project Parameters
Parameter | Description | Type | Default Value

osm_project_request_message

The string presented to a user if they are unable to request a project via the projectrequest API endpoint.

String

null

osm_project_request_template

The template to use for creating projects in response to a projectrequest. If you do not specify a value, the default template is used.

String with the format <namespace>/<template>

null

osm_mcs_allocator_range

Defines the range of MCS categories to assign to namespaces. If this value is changed after startup, new projects might receive labels that are already allocated to other projects. The prefix can be any valid SELinux set of terms, including user, role, and type. However, leaving the prefix at its default allows the server to set them automatically. For example, s0:/2 allocates labels from s0:c0,c0 to s0:c511,c511 whereas s0:/2,512 allocates labels from s0:c0,c0,c0 to s0:c511,c511,511.

String with the format <prefix>/<numberOfLabels>[,<maxCategory>]

s0:/2

osm_mcs_labels_per_project

Defines the number of labels to reserve per project.

Integer

5

osm_uid_allocator_range

Defines the total set of Unix user IDs (UIDs) automatically allocated to projects and the size of the block that each namespace gets. For example, 1000-1999/10 allocates ten UIDs per namespace and can allocate up to 100 blocks before running out of space. The default value is the expected size of the ranges for container images when user namespaces are started.

String in the format <block_range>/<number_of_UIDs>

1000000000-1999999999/10000
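
For example, a minimal sketch of project defaults in the [OSEv3:vars] section; the message text and template name are illustrative, and the remaining values restate the documented defaults:

[OSEv3:vars]
osm_project_request_message='Contact your cluster administrator to request a project.'
osm_project_request_template='default/project-request'
osm_mcs_labels_per_project=5
osm_uid_allocator_range='1000000000-1999999999/10000'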

4.7. Configuring Master API Port

To configure the default ports used by the master API, configure the following variables in the /etc/ansible/hosts file:

Table 4.5. Master API Port
Variable | Purpose

openshift_master_api_port

This variable sets the port number to access the OpenShift Container Platform API.

For example:

openshift_master_api_port=3443

The web console port setting (openshift_master_console_port) must match the API server port (openshift_master_api_port).
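
For example, to serve both the API and the web console on port 3443:

openshift_master_api_port=3443
openshift_master_console_port=3443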

4.8. Configuring Cluster Pre-install Checks

Pre-install checks are a set of diagnostic tasks that run as part of the openshift_health_checker Ansible role. They run prior to an Ansible installation of OpenShift Container Platform, ensure that required inventory values are set, and identify potential issues on a host that can prevent or interfere with a successful installation.

The following table describes available pre-install checks that will run before every Ansible installation of OpenShift Container Platform:

Table 4.6. Pre-install Checks
Check Name | Purpose

memory_availability

This check ensures that a host has the recommended amount of memory for the specific deployment of OpenShift Container Platform. Default values have been derived from the latest installation documentation. You can set a user-defined value for minimum memory requirements with the openshift_check_min_host_memory_gb cluster variable in your inventory file.

disk_availability

This check only runs on etcd, master, and node hosts. It ensures that the mount path for an OpenShift Container Platform installation has sufficient disk space remaining. Recommended disk values are taken from the latest installation documentation. You can set a user-defined value for minimum disk space requirements with the openshift_check_min_host_disk_gb cluster variable in your inventory file.

docker_storage

Only runs on hosts that depend on the docker daemon (nodes and system container installations). Checks that docker's total usage does not exceed a user-defined limit. If no user-defined limit is set, docker's maximum usage threshold defaults to 90% of the total size available. You can set a user-defined limit for maximum thinpool usage with the max_thinpool_data_usage_percent cluster variable in your inventory file.
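
For example, a sketch of user-defined thresholds for these checks in the [OSEv3:vars] section; the numbers are illustrative and should reflect your own sizing requirements:

openshift_check_min_host_memory_gb=16
openshift_check_min_host_disk_gb=40
max_thinpool_data_usage_percent=90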

docker_storage_driver

Ensures that the docker daemon is using a storage driver supported by OpenShift Container Platform. If the devicemapper storage driver is being used, the check additionally ensures that a loopback device is not being used. For more information, see Docker’s Use the Device Mapper Storage Driver guide.

docker_image_availability

Attempts to ensure that images required by an OpenShift Container Platform installation are available either locally or in at least one of the configured container image registries on the host machine.

package_version

Runs on yum-based systems to determine whether multiple releases of a required OpenShift Container Platform package are available. Having multiple releases of a package available during an enterprise installation of OpenShift suggests that multiple yum repositories are enabled for different releases, which might lead to installation problems.

package_availability

Runs prior to RPM installations of OpenShift Container Platform. Ensures that RPM packages required for the current installation are available.

package_update

Checks whether a yum update or package installation will succeed, without actually performing it or running yum on the host.

To disable specific pre-install checks, include the variable openshift_disable_check with a comma-delimited list of check names in your inventory file. For example:

openshift_disable_check=memory_availability,disk_availability
Note

A similar set of health checks meant to run for diagnostics on existing clusters can be found in Ansible-based Health Checks. Another set of checks for checking certificate expiration can be found in Redeploying Certificates.

4.9. Configuring a Registry Location

If you use the default registry at registry.redhat.io, you must set the following variables:

oreg_url=registry.redhat.io/openshift3/ose-${component}:${version}
oreg_auth_user="<user>"
oreg_auth_password="<password>"

For more information about setting up the registry access token, see Red Hat Container Registry Authentication.

If you use an image registry other than the default at registry.redhat.io, specify the registry in the /etc/ansible/hosts file.

oreg_url=example.com/openshift3/ose-${component}:${version}
openshift_examples_modify_imagestreams=true
Table 4.7. Registry Variables
Variable | Purpose

oreg_url

Set to the alternate image location. Necessary if you are not using the default registry at registry.redhat.io. The default component inherits the image prefix and version from the oreg_url value. For the default registry, and registries that require authentication, you need to specify oreg_auth_user and oreg_auth_password.

openshift_examples_modify_imagestreams

Set to true if pointing to a registry other than the default. Modifies the image stream location to the value of oreg_url.

oreg_auth_user

If oreg_url points to a registry requiring authentication, use the oreg_auth_user variable to provide your user name. You must also provide your password as the oreg_auth_password parameter value. If you use the default registry, specify a user that can access registry.redhat.io.

oreg_auth_password

If oreg_url points to a registry requiring authentication, use the oreg_auth_password variable to provide your password. You must also provide your user name as the oreg_auth_user parameter value. If you use the default registry, specify the password or token for that user.

Note

The default registry requires an authentication token. For more information, see Accessing and Configuring the Red Hat Registry.

For example:

oreg_url=example.com/openshift3/ose-${component}:${version}
oreg_auth_user=${user_name}
oreg_auth_password=${password}
openshift_examples_modify_imagestreams=true

4.10. Configuring a Registry Route

To allow users to push and pull images to the internal container image registry from outside of the OpenShift Container Platform cluster, configure the registry route in the /etc/ansible/hosts file. By default, the registry route is docker-registry-default.router.default.svc.cluster.local.

Table 4.8. Registry Route Variables
Variable | Purpose

openshift_hosted_registry_routehost

Set to the value of the desired registry route. The route contains either a name that resolves to an infrastructure node where a router manages communication or the subdomain that you set as the default application subdomain wildcard value. For example, if you set the openshift_master_default_subdomain parameter to apps.example.com and .apps.example.com resolves to infrastructure nodes or a load balancer, you might use registry.apps.example.com as the registry route.

openshift_hosted_registry_routecertificates

Set the paths to the registry certificates. If you do not provide values for the certificate locations, certificates are generated. You can define locations for the following certificates:

  • certfile
  • keyfile
  • cafile

openshift_hosted_registry_routetermination

Set to one of the following values:

  • Set to reencrypt to terminate encryption at the edge router and re-encrypt it with a new certificate supplied by the destination.
  • Set to passthrough to terminate encryption at the destination. The destination is responsible for decrypting traffic.

For example:

openshift_hosted_registry_routehost=<path>
openshift_hosted_registry_routetermination=reencrypt
openshift_hosted_registry_routecertificates= "{'certfile': '<path>/org-cert.pem', 'keyfile': '<path>/org-privkey.pem', 'cafile': '<path>/org-chain.pem'}"

4.11. Configuring Router Sharding

Router sharding support is enabled by supplying the correct data to the inventory. The variable openshift_hosted_routers holds the data, which is in the form of a list. If no data is passed, then a default router is created. There are multiple combinations of router sharding. The following example supports routers on separate nodes:

openshift_hosted_routers=[{'name': 'router1', 'certificate': {'certfile': '/path/to/certificate/abc.crt',
'keyfile': '/path/to/certificate/abc.key', 'cafile':
'/path/to/certificate/ca.crt'}, 'replicas': 1, 'serviceaccount': 'router',
'namespace': 'default', 'stats_port': 1936, 'edits': [], 'images':
'openshift3/ose-${component}:${version}', 'selector': 'type=router1', 'ports':
['80:80', '443:443']},

{'name': 'router2', 'certificate': {'certfile': '/path/to/certificate/xyz.crt',
'keyfile': '/path/to/certificate/xyz.key', 'cafile':
'/path/to/certificate/ca.crt'}, 'replicas': 1, 'serviceaccount': 'router',
'namespace': 'default', 'stats_port': 1936, 'edits': [{'action': 'append',
'key': 'spec.template.spec.containers[0].env', 'value': {'name': 'ROUTE_LABELS',
'value': 'route=external'}}], 'images':
'openshift3/ose-${component}:${version}', 'selector': 'type=router2', 'ports':
['80:80', '443:443']}]

4.12. Configuring Red Hat Gluster Storage Persistent Storage

Red Hat Gluster Storage can be configured to provide persistent storage and dynamic provisioning for OpenShift Container Platform. It can be used both containerized within OpenShift Container Platform (converged mode) and non-containerized on its own nodes (independent mode).

You configure Red Hat Gluster Storage clusters using variables, which interact with the OpenShift Container Platform clusters. The variables, which you define in the [OSEv3:vars] group, include host variables, role variables, and image name and version tag variables.

You use the glusterfs_devices host variable to define the list of block devices to manage the Red Hat Gluster Storage cluster. Each host in your configuration must have at least one glusterfs_devices variable, and for every configuration, there must be at least one bare device with no partitions or LVM PVs.

Role variables control the integration of a Red Hat Gluster Storage cluster into a new or existing OpenShift Container Platform cluster. You can define a number of role variables, each of which also has a corresponding variable to optionally configure a separate Red Hat Gluster Storage cluster for use as storage for an integrated Docker registry.

You can define image name and version tag variables to prevent OpenShift Container Platform pods from upgrading after an outage, which could lead to a cluster with different OpenShift Container Platform versions. You can also define these variables to specify the image name and version tags for all containerized components.

Additional information and examples, including the ones below, can be found at Persistent Storage Using Red Hat Gluster Storage.

4.12.1. Configuring converged mode

Important

See converged mode Considerations for specific host preparations and prerequisites.

  1. In your inventory file, include the following variables in the [OSEv3:vars] section, and adjust them as required for your configuration:

    [OSEv3:vars]
    ...
    openshift_storage_glusterfs_namespace=app-storage
    openshift_storage_glusterfs_storageclass=true
    openshift_storage_glusterfs_storageclass_default=false
    openshift_storage_glusterfs_block_deploy=true
    openshift_storage_glusterfs_block_host_vol_size=100
    openshift_storage_glusterfs_block_storageclass=true
    openshift_storage_glusterfs_block_storageclass_default=false
  2. Add glusterfs in the [OSEv3:children] section to enable the [glusterfs] group:

    [OSEv3:children]
    masters
    nodes
    glusterfs
  3. Add a [glusterfs] section with entries for each storage node that will host the GlusterFS storage. For each node, set glusterfs_devices to a list of raw block devices that will be completely managed as part of a GlusterFS cluster. There must be at least one device listed. Each device must be bare, with no partitions or LVM PVs. Specifying the variable takes the form:

    <hostname_or_ip> glusterfs_devices='[ "</path/to/device1/>", "</path/to/device2>", ... ]'

    For example:

    [glusterfs]
    node11.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
    node12.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
    node13.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
  4. Add the hosts listed under [glusterfs] to the [nodes] group:

    [nodes]
    ...
    node11.example.com openshift_node_group_name="node-config-compute"
    node12.example.com openshift_node_group_name="node-config-compute"
    node13.example.com openshift_node_group_name="node-config-compute"

A valid image tag is required for your deployment to succeed. Replace <tag> with the version of Red Hat Gluster Storage that is compatible with OpenShift Container Platform 3.11 as described in the interoperability matrix for the following variables in your inventory file:

  • openshift_storage_glusterfs_image=registry.redhat.io/rhgs3/rhgs-server-rhel7:<tag>
  • openshift_storage_glusterfs_block_image=registry.redhat.io/rhgs3/rhgs-gluster-block-prov-rhel7:<tag>
  • openshift_storage_glusterfs_s3_image=registry.redhat.io/rhgs3/rhgs-s3-server-rhel7:<tag>
  • openshift_storage_glusterfs_heketi_image=registry.redhat.io/rhgs3/rhgs-volmanager-rhel7:<tag>
  • openshift_storage_glusterfs_registry_image=registry.redhat.io/rhgs3/rhgs-server-rhel7:<tag>
  • openshift_storage_glusterfs_block_registry_image=registry.redhat.io/rhgs3/rhgs-gluster-block-prov-rhel7:<tag>
  • openshift_storage_glusterfs_s3_registry_image=registry.redhat.io/rhgs3/rhgs-s3-server-rhel7:<tag>
  • openshift_storage_glusterfs_heketi_registry_image=registry.redhat.io/rhgs3/rhgs-volmanager-rhel7:<tag>

4.12.2. Configuring independent mode

  1. In your inventory file, include the following variables in the [OSEv3:vars] section, and adjust them as required for your configuration:

    [OSEv3:vars]
    ...
    openshift_storage_glusterfs_namespace=app-storage
    openshift_storage_glusterfs_storageclass=true
    openshift_storage_glusterfs_storageclass_default=false
    openshift_storage_glusterfs_block_deploy=true
    openshift_storage_glusterfs_block_host_vol_size=100
    openshift_storage_glusterfs_block_storageclass=true
    openshift_storage_glusterfs_block_storageclass_default=false
    openshift_storage_glusterfs_is_native=false
    openshift_storage_glusterfs_heketi_is_native=true
    openshift_storage_glusterfs_heketi_executor=ssh
    openshift_storage_glusterfs_heketi_ssh_port=22
    openshift_storage_glusterfs_heketi_ssh_user=root
    openshift_storage_glusterfs_heketi_ssh_sudo=false
    openshift_storage_glusterfs_heketi_ssh_keyfile="/root/.ssh/id_rsa"
  2. Add glusterfs in the [OSEv3:children] section to enable the [glusterfs] group:

    [OSEv3:children]
    masters
    nodes
    glusterfs
  3. Add a [glusterfs] section with entries for each storage node that will host the GlusterFS storage. For each node, set glusterfs_devices to a list of raw block devices that will be completely managed as part of a GlusterFS cluster. There must be at least one device listed. Each device must be bare, with no partitions or LVM PVs. Also, set glusterfs_ip to the IP address of the node. Specifying the variable takes the form:

    <hostname_or_ip> glusterfs_ip=<ip_address> glusterfs_devices='[ "</path/to/device1/>", "</path/to/device2>", ... ]'

    For example:

    [glusterfs]
    gluster1.example.com glusterfs_ip=192.168.10.11 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
    gluster2.example.com glusterfs_ip=192.168.10.12 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
    gluster3.example.com glusterfs_ip=192.168.10.13 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'

4.13. Configuring an OpenShift Container Registry

An integrated OpenShift Container Registry can be deployed using the installer.

4.13.1. Configuring Registry Storage

If no registry storage options are used, the default OpenShift Container Registry is ephemeral and all data will be lost when the pod no longer exists.

Important

Testing shows issues with using the RHEL NFS server as a storage backend for the container image registry. This includes the OpenShift Container Registry and Quay. Therefore, using the RHEL NFS server to back PVs used by core services is not recommended.

Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift core components.

There are several options for enabling registry storage when using the advanced installer:

Option A: NFS Host Group

When the following variables are set, an NFS volume is created during cluster installation with the path <nfs_directory>/<volume_name> on the host in the [nfs] host group. For example, the volume path using these options is /exports/registry:

[OSEv3:vars]

# nfs_directory must conform to DNS-1123 subdomain rules: it must consist of lower case
# alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character

openshift_hosted_registry_storage_kind=nfs
openshift_hosted_registry_storage_access_modes=['ReadWriteMany']
openshift_hosted_registry_storage_nfs_directory=/exports
openshift_hosted_registry_storage_nfs_options='*(rw,root_squash)'
openshift_hosted_registry_storage_volume_name=registry
openshift_hosted_registry_storage_volume_size=10Gi
Option B: External NFS Host

To use an external NFS volume, one must already exist with a path of <nfs_directory>/<volume_name> on the storage host. The remote volume path using the following options is nfs.example.com:/exports/registry.

[OSEv3:vars]

# nfs_directory must conform to DNS-1123 subdomain rules: it must consist of lower case
# alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character

openshift_hosted_registry_storage_kind=nfs
openshift_hosted_registry_storage_access_modes=['ReadWriteMany']
openshift_hosted_registry_storage_host=nfs.example.com
openshift_hosted_registry_storage_nfs_directory=/exports
openshift_hosted_registry_storage_volume_name=registry
openshift_hosted_registry_storage_volume_size=10Gi
Upgrading or Installing OpenShift Container Platform with NFS
Option C: OpenStack Platform

An OpenStack storage configuration must already exist.

[OSEv3:vars]

openshift_hosted_registry_storage_kind=openstack
openshift_hosted_registry_storage_access_modes=['ReadWriteOnce']
openshift_hosted_registry_storage_openstack_filesystem=ext4
openshift_hosted_registry_storage_openstack_volumeID=3a650b4f-c8c5-4e0a-8ca5-eaee11f16c57
openshift_hosted_registry_storage_volume_size=10Gi
Option D: AWS or Another S3 Storage Solution

The simple storage solution (S3) bucket must already exist.

[OSEv3:vars]

#openshift_hosted_registry_storage_kind=object
#openshift_hosted_registry_storage_provider=s3
#openshift_hosted_registry_storage_s3_accesskey=access_key_id
#openshift_hosted_registry_storage_s3_secretkey=secret_access_key
#openshift_hosted_registry_storage_s3_bucket=bucket_name
#openshift_hosted_registry_storage_s3_region=bucket_region
#openshift_hosted_registry_storage_s3_chunksize=26214400
#openshift_hosted_registry_storage_s3_rootdirectory=/registry
#openshift_hosted_registry_pullthrough=true
#openshift_hosted_registry_acceptschema2=true
#openshift_hosted_registry_enforcequota=true

If you use a different S3 service, such as Minio or ExoScale, also add the region endpoint parameter:

openshift_hosted_registry_storage_s3_regionendpoint=https://myendpoint.example.com/
Option E: converged mode

Similar to configuring converged mode, Red Hat Gluster Storage can be configured to provide storage for an OpenShift Container Registry during the initial installation of the cluster to offer redundant and reliable storage for the registry.

Important

See converged mode Considerations for specific host preparations and prerequisites.

  1. In your inventory file, set the following variables in the [OSEv3:vars] section, and adjust them as required for your configuration:

    [OSEv3:vars]
    ...
    openshift_hosted_registry_storage_kind=glusterfs 1
    openshift_hosted_registry_storage_volume_size=5Gi
    openshift_hosted_registry_selector='node-role.kubernetes.io/infra=true'
    1
    Running the integrated OpenShift Container Registry on infrastructure nodes is recommended. Infrastructure nodes are nodes dedicated to running applications that administrators deploy to provide services for the OpenShift Container Platform cluster.
  2. Add glusterfs_registry in the [OSEv3:children] section to enable the [glusterfs_registry] group:

    [OSEv3:children]
    masters
    nodes
    glusterfs_registry
  3. Add a [glusterfs_registry] section with entries for each storage node that will host the GlusterFS storage. For each node, set glusterfs_devices to a list of raw block devices that will be completely managed as part of a GlusterFS cluster. There must be at least one device listed. Each device must be bare, with no partitions or LVM PVs. Specifying the variable takes the form:

    <hostname_or_ip> glusterfs_devices='[ "</path/to/device1/>", "</path/to/device2>", ... ]'

    For example:

    [glusterfs_registry]
    node11.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
    node12.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
    node13.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
  4. Add the hosts listed under [glusterfs_registry] to the [nodes] group:

    [nodes]
    ...
    node11.example.com openshift_node_group_name="node-config-infra"
    node12.example.com openshift_node_group_name="node-config-infra"
    node13.example.com openshift_node_group_name="node-config-infra"
Option F: Google Cloud Storage (GCS) bucket on Google Compute Engine (GCE)

A GCS bucket must already exist.

[OSEv3:vars]

openshift_hosted_registry_storage_provider=gcs
openshift_hosted_registry_storage_gcs_bucket=bucket01
openshift_hosted_registry_storage_gcs_keyfile=test.key
openshift_hosted_registry_storage_gcs_rootdirectory=/registry
Option G: vSphere Volume with vSphere Cloud Provider (VCP)

The vSphere Cloud Provider must be configured with a datastore accessible by the OpenShift Container Platform nodes.

When using vSphere volume for the registry, you must set the storage access mode to ReadWriteOnce and the replica count to 1:

[OSEv3:vars]

openshift_hosted_registry_storage_kind=vsphere
openshift_hosted_registry_storage_access_modes=['ReadWriteOnce']
openshift_hosted_registry_storage_annotations=['volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/vsphere-volume']
openshift_hosted_registry_replicas=1

4.14. Configuring Global Proxy Options

If your hosts require an HTTP or HTTPS proxy to connect to external hosts, you must configure many components to use the proxy, including masters, Docker, and builds. Node services connect only to the master API, which does not require external access, so they do not need to be configured to use a proxy.

To simplify this configuration, you can specify the following Ansible variables at a cluster or host level to apply these settings uniformly across your environment.

Note

See Configuring Global Build Defaults and Overrides for more information on how the proxy environment is defined for builds.

Table 4.9. Cluster Proxy Variables
Variable | Purpose

openshift_http_proxy

This variable specifies the HTTP_PROXY environment variable for masters and the Docker daemon.

openshift_https_proxy

This variable specifies the HTTPS_PROXY environment variable for masters and the Docker daemon.

openshift_no_proxy

This variable is used to set the NO_PROXY environment variable for masters and the Docker daemon. Provide a comma-separated list of host names, domain names, or wildcard host names that do not use the defined proxy. By default, this list is augmented with the list of all defined OpenShift Container Platform host names.

The host names that do not use the defined proxy include:

  • Master and node host names. You must include the domain suffix.
  • Other internal host names. You must include the domain suffix.
  • etcd IP addresses. You must provide the IP address because etcd access is managed by IP address.
  • The container image registry IP address.
  • The Kubernetes IP address. This value is 172.30.0.1 by default, or is derived from the openshift_portal_net parameter value if you provided one.
  • The cluster.local Kubernetes internal domain suffix.
  • The svc Kubernetes internal domain suffix.

openshift_generate_no_proxy_hosts

This boolean variable specifies whether or not the names of all defined OpenShift hosts and *.cluster.local are automatically appended to the NO_PROXY list. Defaults to true; set it to false to override this option.

openshift_builddefaults_http_proxy

This variable defines the HTTP_PROXY environment variable inserted into builds using the BuildDefaults admission controller. If you do not define this parameter but define the openshift_http_proxy parameter, the openshift_http_proxy value is used. Set the openshift_builddefaults_http_proxy value to False to disable default http proxy for builds regardless of the openshift_http_proxy value.

openshift_builddefaults_https_proxy

This variable defines the HTTPS_PROXY environment variable inserted into builds using the BuildDefaults admission controller. If you do not define this parameter but define the openshift_https_proxy parameter, the openshift_https_proxy value is used. Set the openshift_builddefaults_https_proxy value to False to disable default https proxy for builds regardless of the openshift_https_proxy value.

openshift_builddefaults_no_proxy

This variable defines the NO_PROXY environment variable inserted into builds using the BuildDefaults admission controller. Set the openshift_builddefaults_no_proxy value to False to disable default no proxy settings for builds regardless of the openshift_no_proxy value.

openshift_builddefaults_git_http_proxy

This variable defines the HTTP proxy used by git clone operations during a build, defined using the BuildDefaults admission controller. Set the openshift_builddefaults_git_http_proxy value to False to disable default http proxy for git clone operations during a build regardless of the openshift_http_proxy value.

openshift_builddefaults_git_https_proxy

This variable defines the HTTPS proxy used by git clone operations during a build, defined using the BuildDefaults admission controller. Set the openshift_builddefaults_git_https_proxy value to False to disable default https proxy for git clone operations during a build regardless of the openshift_https_proxy value.
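
For example, a minimal sketch that routes master and Docker daemon traffic through a corporate proxy while exempting internal hosts; the proxy URL and domain names shown are illustrative placeholders, not defaults:

[OSEv3:vars]

# Illustrative proxy endpoint and exemptions; replace with your own values
openshift_http_proxy=http://proxy.example.com:3128
openshift_https_proxy=http://proxy.example.com:3128
openshift_no_proxy='.internal.example.com,172.30.0.1'
openshift_generate_no_proxy_hosts=true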

4.15. Configuring the Firewall

Important
  • If you are changing the default firewall, ensure that each host in your cluster uses the same firewall type to prevent inconsistencies.
  • Do not use firewalld if OpenShift Container Platform is installed on Atomic Host. firewalld is not supported on Atomic Host.
Note

While iptables is the default firewall, firewalld is recommended for new installations.

OpenShift Container Platform uses iptables as the default firewall, but you can configure your cluster to use firewalld during the install process.

Because iptables is the default firewall, OpenShift Container Platform is designed to have it configured automatically. However, incorrectly configured iptables rules can break OpenShift Container Platform. An advantage of firewalld is that it allows multiple objects to share the firewall rules safely.

To use firewalld as the firewall for an OpenShift Container Platform installation, add the os_firewall_use_firewalld variable to the configuration variables in the Ansible host file at install time:

[OSEv3:vars]
os_firewall_use_firewalld=True 1
1
Setting this variable to true opens the required ports and adds rules to the default zone, ensuring that firewalld is configured correctly.
Note

The firewalld default configuration offers limited configuration options, and these options cannot be overridden. For example, while you can set up a storage network with interfaces in multiple zones, the interface that nodes communicate on must be in the default zone.

4.16. Configuring Session Options

Session options in the OAuth configuration are configurable in the inventory file. By default, Ansible populates a sessionSecretsFile with generated authentication and encryption secrets so that sessions generated by one master can be decoded by the others. The default location is /etc/origin/master/session-secrets.yaml, and this file will only be re-created if deleted on all masters.

You can set the session name and maximum number of seconds with openshift_master_session_name and openshift_master_session_max_seconds:

openshift_master_session_name=ssn
openshift_master_session_max_seconds=3600

If provided, openshift_master_session_auth_secrets and openshift_master_session_encryption_secrets must be lists of equal length.

For openshift_master_session_auth_secrets, which is used to authenticate sessions using HMAC, it is recommended to use secrets of 32 or 64 bytes:

openshift_master_session_auth_secrets=['DONT+USE+THIS+SECRET+b4NV+pmZNSO']

For openshift_master_session_encryption_secrets, which is used to encrypt sessions, secrets must be 16, 24, or 32 characters long to select AES-128, AES-192, or AES-256:

openshift_master_session_encryption_secrets=['DONT+USE+THIS+SECRET+b4NV+pmZNSO']
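
Putting these options together, the following sketch shows authentication and encryption secret lists of equal length; the values are placeholders and must be replaced with your own randomly generated secrets:

openshift_master_session_name=ssn
openshift_master_session_max_seconds=3600
openshift_master_session_auth_secrets=['<32-or-64-byte-secret-1>','<32-or-64-byte-secret-2>']
openshift_master_session_encryption_secrets=['<16-24-or-32-byte-secret-1>','<16-24-or-32-byte-secret-2>']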

4.17. Configuring Custom Certificates

Custom serving certificates for the public host names of the OpenShift Container Platform API and web console can be deployed during cluster installation and are configurable in the inventory file.

Note

Configure custom certificates for the host name associated with the publicMasterURL, which you set as the openshift_master_cluster_public_hostname parameter value. Using a custom serving certificate for the host name associated with the masterURL (openshift_master_cluster_hostname) results in TLS errors because infrastructure components attempt to contact the master API using the internal masterURL host.

Certificate and key file paths can be configured using the openshift_master_named_certificates cluster variable:

openshift_master_named_certificates=[{"certfile": "/path/to/custom1.crt", "keyfile": "/path/to/custom1.key", "cafile": "/path/to/custom-ca1.crt"}]

File paths must be local to the system where Ansible will be run. Certificates are copied to master hosts and are deployed in the /etc/origin/master/named_certificates/ directory.

Ansible detects a certificate’s Common Name and Subject Alternative Names. Detected names can be overridden by providing the "names" key when setting openshift_master_named_certificates:

openshift_master_named_certificates=[{"certfile": "/path/to/custom1.crt", "keyfile": "/path/to/custom1.key", "names": ["public-master-host.com"], "cafile": "/path/to/custom-ca1.crt"}]

Certificates configured using openshift_master_named_certificates are cached on masters, meaning that each additional Ansible run with a different set of certificates results in all previously deployed certificates remaining in place on master hosts and in the master configuration file.

If you want to overwrite openshift_master_named_certificates with the provided value (or no value), specify the openshift_master_overwrite_named_certificates cluster variable:

openshift_master_overwrite_named_certificates=true

For a more complete example, consider the following cluster variables in an inventory file:

openshift_master_cluster_method=native
openshift_master_cluster_hostname=lb-internal.openshift.com
openshift_master_cluster_public_hostname=custom.openshift.com

To overwrite the certificates on a subsequent Ansible run, set the following parameter values:

openshift_master_named_certificates=[{"certfile": "/root/STAR.openshift.com.crt", "keyfile": "/root/STAR.openshift.com.key", "names": ["custom.openshift.com"], "cafile": "/root/ca-file.crt"}]
openshift_master_overwrite_named_certificates=true
Important

The cafile certificate is imported to the ca-bundle.crt file on the masters during installation or during redeployment of certificates. The ca-bundle.crt file is mounted to every pod that runs in OpenShift Container Platform. Several OpenShift Container Platform components automatically trust the named certificates by default when they access the masterPublicURL endpoint. If you omit the cafile option from the certificates parameter, the functionality of Web Console and several other components is reduced.

4.18. Configuring Certificate Validity

By default, the certificates used by etcd, the master, and the kubelet expire after two to five years. The validity (length in days until they expire) of the auto-generated registry, CA, node, and master certificates can be configured during installation using the following variables (default values shown):

[OSEv3:vars]

openshift_hosted_registry_cert_expire_days=730
openshift_ca_cert_expire_days=1825
openshift_master_cert_expire_days=730
etcd_ca_default_days=1825

These values are also used when redeploying certificates via Ansible post-installation.

4.19. Configuring Cluster Monitoring

Prometheus Cluster Monitoring is set to automatically deploy. To prevent its automatic deployment, set the following:

[OSEv3:vars]

openshift_cluster_monitoring_operator_install=false

For more information on Prometheus Cluster Monitoring and its configuration, see Prometheus Cluster Monitoring documentation.

4.20. Configuring Cluster Metrics

Cluster metrics are not set to automatically deploy. Set the following to enable cluster metrics during cluster installation:

[OSEv3:vars]

openshift_metrics_install_metrics=true

The metrics public URL can be set during cluster installation using the openshift_metrics_hawkular_hostname Ansible variable, which defaults to:

https://hawkular-metrics.{{openshift_master_default_subdomain}}/hawkular/metrics

If you alter this variable, ensure the host name is accessible via your router.

openshift_metrics_hawkular_hostname=hawkular-metrics.{{openshift_master_default_subdomain}}

Important

In accordance with upstream Kubernetes rules, metrics can be collected only on the default interface, eth0.

Note

You must set an openshift_master_default_subdomain value to deploy metrics.

4.20.1. Configuring Metrics Storage

The openshift_metrics_cassandra_storage_type variable must be set in order to use persistent storage for metrics. If openshift_metrics_cassandra_storage_type is not set, then cluster metrics data is stored in an emptyDir volume, which will be deleted when the Cassandra pod terminates.

Important

Testing shows issues with using the RHEL NFS server as a storage backend for the container image registry. The same issues apply to Cassandra for metrics storage. Therefore, using the RHEL NFS server to back PVs used by core services is not recommended.

Cassandra is designed to provide redundancy via multiple independent instances. For this reason, using NFS or a SAN for data directories is an antipattern and is not recommended.

However, NFS/SAN implementations in the marketplace might not have issues backing or providing storage to this component. Contact the individual NFS/SAN implementation vendor for more information on any testing they might have performed against these OpenShift core components.

There are three options for enabling cluster metrics storage during cluster installation:

Option A: Dynamic

If your OpenShift Container Platform environment supports dynamic volume provisioning for your cloud provider, use the following variable:

[OSEv3:vars]

openshift_metrics_cassandra_storage_type=dynamic

If there are multiple default dynamically provisioned volume types, such as gluster-storage and glusterfs-storage-block, you can specify which volume type to provision by variable. Use the following variables:

[OSEv3:vars]

openshift_metrics_cassandra_storage_type=pv
openshift_metrics_cassandra_pvc_storage_class_name=glusterfs-storage-block

Check Volume Configuration for more information on using DynamicProvisioningEnabled to enable or disable dynamic provisioning.

Option B: NFS Host Group

When the following variables are set, an NFS volume is created during cluster installation with path <nfs_directory>/<volume_name> on the host in the [nfs] host group. For example, the volume path using these options is /exports/metrics:

[OSEv3:vars]

# nfs_directory must conform to DNS-1123 subdomain: it must consist of lower case
# alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character

openshift_metrics_storage_kind=nfs
openshift_metrics_storage_access_modes=['ReadWriteOnce']
openshift_metrics_storage_nfs_directory=/exports
openshift_metrics_storage_nfs_options='*(rw,root_squash)'
openshift_metrics_storage_volume_name=metrics
openshift_metrics_storage_volume_size=10Gi
Option C: External NFS Host

To use an external NFS volume, one must already exist with a path of <nfs_directory>/<volume_name> on the storage host.

[OSEv3:vars]

# nfs_directory must conform to DNS-1123 subdomain: it must consist of lower case
# alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character

openshift_metrics_storage_kind=nfs
openshift_metrics_storage_access_modes=['ReadWriteOnce']
openshift_metrics_storage_host=nfs.example.com
openshift_metrics_storage_nfs_directory=/exports
openshift_metrics_storage_volume_name=metrics
openshift_metrics_storage_volume_size=10Gi

The remote volume path using these options is nfs.example.com:/exports/metrics.

Upgrading or Installing OpenShift Container Platform with NFS

The use of NFS for the core OpenShift Container Platform components is not recommended, as NFS (and the NFS Protocol) does not provide the proper consistency needed for the applications that make up the OpenShift Container Platform infrastructure.

As a result, the installer and update playbooks require an option to enable the use of NFS with core infrastructure components.

# Enable unsupported configurations, things that will yield a partially
# functioning cluster but would not be supported for production use
#openshift_enable_unsupported_configurations=false

If you see the following message when upgrading or installing your cluster, an additional step is required.

TASK [Run variable sanity checks] **********************************************
fatal: [host.example.com]: FAILED! => {"failed": true, "msg": "last_checked_host: host.example.com, last_checked_var: openshift_hosted_registry_storage_kind;nfs is an unsupported type for openshift_hosted_registry_storage_kind. openshift_enable_unsupported_configurations=True must be specified to continue with this configuration."}

In your Ansible inventory file, specify the following parameter:

[OSEv3:vars]
openshift_enable_unsupported_configurations=True

4.21. Configuring Cluster Logging

Cluster logging is not set to automatically deploy by default. Set the following to enable cluster logging during cluster installation:

[OSEv3:vars]

openshift_logging_install_logging=true
Note

When installing cluster logging, you must also specify a node selector, such as openshift_logging_es_nodeselector={"node-role.kubernetes.io/infra": "true"} in the Ansible inventory file.

For more information on the available cluster logging variables, see Specifying Logging Ansible Variables.
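
For example, a minimal sketch that enables cluster logging and pins the Elasticsearch pods to infrastructure nodes; the node selector matches the example shown in the note above:

[OSEv3:vars]

openshift_logging_install_logging=true
openshift_logging_es_nodeselector={"node-role.kubernetes.io/infra": "true"}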

4.21.1. Configuring Logging Storage

The openshift_logging_es_pvc_dynamic variable must be set in order to use persistent storage for logging. If openshift_logging_es_pvc_dynamic is not set, then cluster logging data is stored in an emptyDir volume, which will be deleted when the Elasticsearch pod terminates.

Important

Testing shows issues with using the RHEL NFS server as a storage backend for the container image registry. The same issues apply to Elasticsearch for logging storage. Therefore, using the RHEL NFS server to back PVs used by core services is not recommended.

Because Elasticsearch does not implement a custom deletionPolicy, the use of NFS storage as a volume or a persistent volume is not supported for Elasticsearch storage. Lucene relies on file system behavior that NFS does not supply, and data corruption and other problems can occur.

NFS implementations in the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing they might have performed against these OpenShift core components.

There are three options for enabling cluster logging storage during cluster installation:

Option A: Dynamic

If your OpenShift Container Platform environment has dynamic volume provisioning, it could be configured either via the cloud provider or by an independent storage provider. For instance, the cloud provider could have a StorageClass with provisioner kubernetes.io/gce-pd on GCE, and an independent storage provider such as GlusterFS could have a StorageClass with provisioner kubernetes.io/glusterfs. In either case, use the following variable:

[OSEv3:vars]

openshift_logging_es_pvc_dynamic=true

For additional information on dynamic provisioning, see Dynamic provisioning and creating storage classes.

If there are multiple default dynamically provisioned volume types, such as gluster-storage and glusterfs-storage-block, you can specify which volume type to provision by variable. Use the following variables:

[OSEv3:vars]

openshift_logging_elasticsearch_storage_type=pvc
openshift_logging_es_pvc_storage_class_name=glusterfs-storage-block

Check Volume Configuration for more information on using DynamicProvisioningEnabled to enable or disable dynamic provisioning.

Option B: NFS Host Group

When the following variables are set, an NFS volume is created during cluster installation with path <nfs_directory>/<volume_name> on the host in the [nfs] host group. For example, the volume path using these options is /exports/logging:

[OSEv3:vars]

# nfs_directory must conform to DNS-1123 subdomain: it must consist of lower case
# alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character

openshift_logging_storage_kind=nfs
openshift_logging_storage_access_modes=['ReadWriteOnce']
openshift_logging_storage_nfs_directory=/exports 1
openshift_logging_storage_nfs_options='*(rw,root_squash)' 2
openshift_logging_storage_volume_name=logging 3
openshift_logging_storage_volume_size=10Gi
openshift_enable_unsupported_configurations=true
openshift_logging_elasticsearch_storage_type=pvc
openshift_logging_es_pvc_size=10Gi
openshift_logging_es_pvc_storage_class_name=''
openshift_logging_es_pvc_dynamic=true
openshift_logging_es_pvc_prefix=logging
1 2
These parameters work only with the /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml installation playbook. The parameters will not work with the /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml playbook.
3
The NFS volume name must be logging.
Option C: External NFS Host

To use an external NFS volume, one must already exist with a path of <nfs_directory>/<volume_name> on the storage host.

[OSEv3:vars]

# nfs_directory must conform to DNS-1123 subdomain: it must consist of lower case
# alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character

openshift_logging_storage_kind=nfs
openshift_logging_storage_access_modes=['ReadWriteOnce']
openshift_logging_storage_host=nfs.example.com 1
openshift_logging_storage_nfs_directory=/exports 2
openshift_logging_storage_volume_name=logging 3
openshift_logging_storage_volume_size=10Gi
openshift_enable_unsupported_configurations=true
openshift_logging_elasticsearch_storage_type=pvc
openshift_logging_es_pvc_size=10Gi
openshift_logging_es_pvc_storage_class_name=''
openshift_logging_es_pvc_dynamic=true
openshift_logging_es_pvc_prefix=logging
1 2
These parameters work only with the /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml installation playbook. The parameters will not work with the /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml playbook.
3
The NFS volume name must be logging.

The remote volume path using these options is nfs.example.com:/exports/logging.

Upgrading or Installing OpenShift Container Platform with NFS

The use of NFS for the core OpenShift Container Platform components is not recommended, as NFS (and the NFS Protocol) does not provide the proper consistency needed for the applications that make up the OpenShift Container Platform infrastructure.

As a result, the installer and update playbooks require an option to enable the use of NFS with core infrastructure components.

# Enable unsupported configurations, things that will yield a partially
# functioning cluster but would not be supported for production use
#openshift_enable_unsupported_configurations=false

If you see the following message when upgrading or installing your cluster, an additional step is required.

TASK [Run variable sanity checks] **********************************************
fatal: [host.example.com]: FAILED! => {"failed": true, "msg": "last_checked_host: host.example.com, last_checked_var: openshift_hosted_registry_storage_kind;nfs is an unsupported type for openshift_hosted_registry_storage_kind. openshift_enable_unsupported_configurations=True must be specified to continue with this configuration."}

In your Ansible inventory file, specify the following parameter:

[OSEv3:vars]
openshift_enable_unsupported_configurations=True

4.22. Customizing Service Catalog Options

The service catalog is enabled by default during installation. Enabling the service catalog allows you to register service brokers with the catalog. When the service catalog is enabled, the OpenShift Ansible broker and template service broker are both installed as well; see Configuring the OpenShift Ansible Broker and Configuring the Template Service Broker for more information. If you disable the service catalog, the OpenShift Ansible broker and template service broker are not installed.

To disable automatic deployment of the service catalog, set the following cluster variable in your inventory file:

openshift_enable_service_catalog=false

If you use your own registry, you must add:

  • openshift_service_catalog_image_prefix: When pulling the service catalog image, force the use of a specific prefix (for example, registry). You must provide the full registry name up to the image name.
  • openshift_service_catalog_image_version: When pulling the service catalog image, force the use of a specific image version.

For example:

openshift_service_catalog_image="docker-registry.default.example.com/openshift/ose-service-catalog:${version}"
openshift_service_catalog_image_prefix="docker-registry-default.example.com/openshift/ose-"
openshift_service_catalog_image_version="v3.9.30"

4.22.1. Configuring the OpenShift Ansible Broker

The OpenShift Ansible broker (OAB) is enabled by default during installation.

If you do not want to install the OAB, set the ansible_service_broker_install parameter value to false in the inventory file:

ansible_service_broker_install=false
Table 4.10. Service broker customization variables
VariablePurpose

openshift_service_catalog_image_prefix

Specify the prefix for the service catalog component image.

4.22.1.1. Configuring Persistent Storage for the OpenShift Ansible Broker

The OAB deploys its own etcd instance separate from the etcd used by the rest of the OpenShift Container Platform cluster. The OAB’s etcd instance requires separate storage using persistent volumes (PVs) to function. If no PV is available, etcd will wait until the PV can be satisfied. The OAB application will enter a CrashLoop state until its etcd instance is available.

Some Ansible playbook bundles (APBs) also require a PV for their own usage in order to deploy. For example, each of the database APBs has two plans: the Development plan uses ephemeral storage and does not require a PV, while the Production plan uses persistent storage and does require a PV.

APB PV Required?

postgresql-apb

Yes, but only for the Production plan

mysql-apb

Yes, but only for the Production plan

mariadb-apb

Yes, but only for the Production plan

mediawiki-apb

Yes

To configure persistent storage for the OAB:

Note

The following example shows usage of an NFS host to provide the required PVs, but other persistent storage providers can be used instead.

  1. In your inventory file, add nfs to the [OSEv3:children] section to enable the [nfs] group:

    [OSEv3:children]
    masters
    nodes
    nfs
  2. Add a [nfs] group section and add the host name for the system that will be the NFS host:

    [nfs]
    master1.example.com
  3. Add the following in the [OSEv3:vars] section:

    # nfs_directory must conform to DNS-1123 subdomain: it must consist of lower case
    # alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character
    
    openshift_hosted_etcd_storage_kind=nfs
    openshift_hosted_etcd_storage_nfs_options="*(rw,root_squash,sync,no_wdelay)"
    openshift_hosted_etcd_storage_nfs_directory=/opt/osev3-etcd 1
    openshift_hosted_etcd_storage_volume_name=etcd-vol2 2
    openshift_hosted_etcd_storage_access_modes=["ReadWriteOnce"]
    openshift_hosted_etcd_storage_volume_size=1G
    openshift_hosted_etcd_storage_labels={'storage': 'etcd'}
    1 2
    An NFS volume will be created with path <nfs_directory>/<volume_name> on the host in the [nfs] group. For example, the volume path using these options is /opt/osev3-etcd/etcd-vol2.

    These settings create a persistent volume that is attached to the OAB’s etcd instance during cluster installation.

4.22.1.2. Configuring the OpenShift Ansible Broker for Local APB Development

In order to do APB development with the OpenShift Container Registry in conjunction with the OAB, a whitelist of images the OAB can access must be defined. If a whitelist is not defined, the broker will ignore APBs and users will not see any APBs available.

By default, the whitelist is empty so that a user cannot add APB images to the broker without a cluster administrator configuring the broker. To whitelist all images that end in -apb:

  1. In your inventory file, add the following to the [OSEv3:vars] section:

    ansible_service_broker_local_registry_whitelist=['.*-apb$']

4.22.2. Configuring the Template Service Broker

The template service broker (TSB) is enabled by default during installation.

If you do not want to install the TSB, set the template_service_broker_install parameter value to false:

template_service_broker_install=false

To configure the TSB, one or more projects must be defined as the broker’s source namespace(s) for loading templates and image streams into the service catalog. Set the source projects by modifying the following in your inventory file’s [OSEv3:vars] section:

openshift_template_service_broker_namespaces=['openshift','myproject']
Table 4.11. Template service broker customization variables
VariablePurpose

template_service_broker_prefix

Specify the prefix for the template service broker component image.

ansible_service_broker_image_prefix

Specify the prefix for the ansible service broker component image.
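
If you mirror the broker images to your own registry, you might combine these variables as in the following sketch; the registry host name is an illustrative placeholder:

template_service_broker_prefix="docker-registry.default.example.com/openshift/ose-"
ansible_service_broker_image_prefix="docker-registry.default.example.com/openshift/ose-"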

4.23. Configuring Web Console Customization

The following Ansible variables set master configuration options for customizing the web console. See Customizing the Web Console for more details on these customization options.

Table 4.12. Web Console Customization Variables
VariablePurpose

openshift_web_console_install

Determines whether to install the web console. Can be set to true or false. Defaults to true.

openshift_web_console_prefix

Specify the prefix for the web console images.

openshift_master_logout_url

Sets clusterInfo.logoutPublicURL in the web console configuration. See Changing the Logout URL for details. Example value: https://example.com/logout

openshift_web_console_extension_script_urls

Sets extensions.scriptURLs in the web console configuration. See Loading Extension Scripts and Stylesheets for details. Example value: ['https://example.com/scripts/menu-customization.js','https://example.com/scripts/nav-customization.js']

openshift_web_console_extension_stylesheet_urls

Sets extensions.stylesheetURLs in the web console configuration. See Loading Extension Scripts and Stylesheets for details. Example value: ['https://example.com/styles/logo.css','https://example.com/styles/custom-styles.css']

openshift_master_oauth_template

Sets the OAuth template in the master configuration. See Customizing the Login Page for details. Example value: ['/path/to/login-template.html']

openshift_master_metrics_public_url

Sets metricsPublicURL in the master configuration. See Setting the Metrics Public URL for details. Example value: https://hawkular-metrics.example.com/hawkular/metrics

openshift_master_logging_public_url

Sets loggingPublicURL in the master configuration. See Kibana for details. Example value: https://kibana.example.com

openshift_web_console_inactivity_timeout_minutes

Configures the web console to log the user out automatically after a period of inactivity. Must be a whole number greater than or equal to 5, or 0 to disable the feature. Defaults to 0 (disabled).

openshift_web_console_cluster_resource_overrides_enabled

Boolean value indicating if the cluster is configured for overcommit. When true, the web console will hide fields for CPU request, CPU limit, and memory request when editing resource limits because you must set these values with the cluster resource override configuration.

openshift_web_console_enable_context_selector

Enable the context selector in the web console and admin console mastheads for quickly switching between the two consoles. Defaults to true when both consoles are installed.
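
As a sketch, several of these variables might be combined in the [OSEv3:vars] section as follows; the URLs and timeout shown are illustrative examples, not defaults:

[OSEv3:vars]

openshift_web_console_install=true
openshift_master_logout_url=https://example.com/logout
openshift_web_console_extension_script_urls=['https://example.com/scripts/menu-customization.js']
openshift_web_console_inactivity_timeout_minutes=30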

4.24. Configuring the Cluster Console

The cluster console is an additional web interface like the web console, but focused on admin tasks. The cluster console supports many of the same common OpenShift Container Platform resources as the web console, but it also allows you to view metrics about the cluster and manage cluster-scoped resources such as nodes, persistent volumes, cluster roles, and custom resource definitions. The following variables can be used to customize the cluster console.

Table 4.13. Cluster Console Customization Variables
VariablePurpose

openshift_console_install

Determines whether to install the cluster console. Can be set to true or false. Defaults to true.

openshift_console_hostname

Sets the host name of the cluster console. Defaults to console.<openshift_master_default_subdomain>. If you alter this variable, ensure the host name is accessible via your router.

openshift_console_cert

Optional certificate to use for the cluster console route. This is only needed if using a custom host name.

openshift_console_key

Optional key to use for the cluster console route. This is only needed if using a custom host name.

openshift_console_ca

Optional CA to use for the cluster console route. This is only needed if using a custom host name.

openshift_base_path

Optional base path for the cluster console. If set, it should begin and end with a slash like /console/. Defaults to / (no base path).

openshift_console_auth_ca_file

Optional CA file to use to connect to the OAuth server. Defaults to /var/run/secrets/kubernetes.io/serviceaccount/ca.crt. Typically, this value does not need to be changed. You must create a ConfigMap that contains the CA file and mount it into the console Deployment in the openshift-console namespace at the location that you specify.
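
For example, a sketch that installs the cluster console with a custom host name and serving certificate; the host name and file paths are illustrative placeholders:

[OSEv3:vars]

openshift_console_install=true
openshift_console_hostname=console.apps.test.example.com
openshift_console_cert=/path/to/console.crt
openshift_console_key=/path/to/console.key
openshift_console_ca=/path/to/console-ca.crt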

4.25. Configuring the Operator Lifecycle Manager

Important

The Operator Framework is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information on Red Hat Technology Preview features support scope, see https://access.redhat.com/support/offerings/techpreview/.

The Technology Preview Operator Framework includes the Operator Lifecycle Manager (OLM). You can optionally install the OLM during cluster installation by setting the following variables in your inventory file:

Note

Alternatively, the Technology Preview Operator Framework can be installed after cluster installation. See Installing Operator Lifecycle Manager using Ansible for separate instructions.

  1. Add the openshift_enable_olm variable in the [OSEv3:vars] section, setting it to true:

    openshift_enable_olm=true
  2. Add the openshift_additional_registry_credentials variable in the [OSEv3:vars] section, setting credentials required to pull the Operator containers:

    openshift_additional_registry_credentials=[{'host':'registry.connect.redhat.com','user':'<your_user_name>','password':'<your_password>','test_image':'mongodb/enterprise-operator:0.3.2'}]

    Set user and password to the credentials that you use to log in to the Red Hat Customer Portal at https://access.redhat.com.

    The test_image represents an image that will be used to test the credentials you provided.

After your cluster installation has completed successfully, see Launching your first Operator for further steps on using the OLM as a cluster administrator during this Technology Preview phase.
