
22.3. Provisioning OpenShift Container Platform instances with the OpenShift Ansible playbook


After the deployment host has been configured, use Ansible to prepare the environment for the OpenShift Container Platform deployment. In the following subsections, Ansible is configured and certain YAML files are modified so that OpenShift Container Platform on OpenStack deploys successfully.

22.3.1. Preparing the inventory for provisioning

With the openshift-ansible package installed in the previous steps, a sample-inventory directory is available; copy it to the cloud-user home directory of the deployment host.

On the deployment host:

$ cp -r /usr/share/ansible/openshift-ansible/playbooks/openstack/sample-inventory/ ~/inventory

Within this inventory directory, the all.yml file contains all the different parameters that must be set in order to provision the RHOCP instances successfully. The OSEv3.yml file contains some references required by the all.yml file and all the available OpenShift Container Platform cluster parameters that can be customized.
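Both files live under group_vars in the copied inventory. A quick listing, assuming the sample layout shipped with openshift-ansible, confirms that they are in place:

$ ls ~/inventory/group_vars/
all.yml  OSEv3.yml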

22.3.1.1. OpenShiftSDN all.yml file

The all.yml file has many options that can be modified to meet your specific needs. The information gathered in this file covers the provisioning portion of the instances required for a successful deployment of OpenShift Container Platform. It is important to review these options carefully. This document provides a condensed version of the all.yml file and focuses on the most critical parameters that must be set for a successful deployment.

$ cat ~/inventory/group_vars/all.yml
---
openshift_openstack_clusterid: "openshift"
openshift_openstack_public_dns_domain: *"example.com"*
openshift_openstack_dns_nameservers: *["10.19.115.228"]*
openshift_openstack_public_hostname_suffix: "-public"
openshift_openstack_nsupdate_zone: "{{ openshift_openstack_public_dns_domain }}"

openshift_openstack_keypair_name: *"openshift"*
openshift_openstack_external_network_name: *"public"*

openshift_openstack_default_image_name: *"rhel75"*

## Optional (Recommended) - This removes the need for floating IPs
## on the OpenShift Cluster nodes
openshift_openstack_node_subnet_name: *<deployment-subnet-name>*
openshift_openstack_router_name: *<deployment-router-name>*
openshift_openstack_master_floating_ip: *false*
openshift_openstack_infra_floating_ip: *false*
openshift_openstack_compute_floating_ip: *false*
## End of Optional Floating IP section

openshift_openstack_num_masters: *3*
openshift_openstack_num_infra: *3*
openshift_openstack_num_cns: *0*
openshift_openstack_num_nodes: *2*

openshift_openstack_master_flavor: *"m1.master"*
openshift_openstack_default_flavor: *"m1.node"*

openshift_openstack_use_lbaas_load_balancer: *true*

openshift_openstack_docker_volume_size: "15"

# # Roll-your-own DNS
*openshift_openstack_external_nsupdate_keys:*
  public:
    *key_secret: '/alb8h0EAFWvb4i+CMA12w=='*
    *key_name: "update-key"*
    *key_algorithm: 'hmac-md5'*
    *server: '<ip-of-DNS>'*
  private:
    *key_secret: '/alb8h0EAFWvb4i+CMA12w=='*
    *key_name: "update-key"*
    *key_algorithm: 'hmac-md5'*
    *server: '<ip-of-DNS>'*

ansible_user: openshift

## cloud config
openshift_openstack_disable_root: true
openshift_openstack_user: openshift
Note

Because an external DNS server is used, the private and public sections use the public IP address of the DNS server, since the DNS server does not reside within the OpenStack environment.

The values enclosed in asterisks (*) above must be modified to suit your OpenStack environment and DNS server.

To modify the DNS section of the all.yml file correctly, log in to the DNS server and run the following commands to capture the key name, key algorithm, and key secret:

$ ssh <ip-of-DNS>
$ sudo -i
# cat /etc/named/<key-name.key>
key "update-key" {
	algorithm hmac-md5;
	secret "/alb8h0EAFWvb4i+CMA02w==";
};
Note

The key name may differ; the one shown above is only an example.
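Before running the playbooks, it can also be worth confirming that the captured key actually permits dynamic updates against the zone. Below is a minimal interactive check with nsupdate, assuming the key file path and example zone used in this document; the test record name and address are arbitrary:

$ nsupdate -k /etc/named/update-key.key
> server <ip-of-DNS>
> zone example.com
> update add nsupdate-test.example.com 300 A 10.19.115.5
> send
> update delete nsupdate-test.example.com A
> send
> quit

If the send commands return without errors, the key name, algorithm, and secret entered in all.yml are usable as-is.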

22.3.1.2. KuryrSDN all.yml file

The following all.yml file enables Kuryr SDN instead of the default OpenShiftSDN. Note that the example below is a condensed version, and it is important to review the default template carefully.

$ cat ~/inventory/group_vars/all.yml
---
openshift_openstack_clusterid: "openshift"
openshift_openstack_public_dns_domain: *"example.com"*
openshift_openstack_dns_nameservers: *["10.19.115.228"]*
openshift_openstack_public_hostname_suffix: "-public"
openshift_openstack_nsupdate_zone: "{{ openshift_openstack_public_dns_domain }}"

openshift_openstack_keypair_name: *"openshift"*
openshift_openstack_external_network_name: *"public"*

openshift_openstack_default_image_name: *"rhel75"*

## Optional (Recommended) - This removes the need for floating IPs
## on the OpenShift Cluster nodes
openshift_openstack_node_subnet_name: *<deployment-subnet-name>*
openshift_openstack_router_name: *<deployment-router-name>*
openshift_openstack_master_floating_ip: *false*
openshift_openstack_infra_floating_ip: *false*
openshift_openstack_compute_floating_ip: *false*
## End of Optional Floating IP section

openshift_openstack_num_masters: *3*
openshift_openstack_num_infra: *3*
openshift_openstack_num_cns: *0*
openshift_openstack_num_nodes: *2*

openshift_openstack_master_flavor: *"m1.master"*
openshift_openstack_default_flavor: *"m1.node"*

## Kuryr configuration
openshift_use_kuryr: True
openshift_use_openshift_sdn: False
use_trunk_ports: True
os_sdn_network_plugin_name: cni
openshift_node_proxy_mode: userspace
kuryr_openstack_pool_driver: nested
openshift_kuryr_precreate_subports: 5

kuryr_openstack_public_net_id: *<public_ID>*

# To disable namespace isolation, comment out the next 2 lines
openshift_kuryr_subnet_driver: namespace
openshift_kuryr_sg_driver: namespace
# If you enable namespace isolation, `default` and `openshift-monitoring` become the
# global namespaces. Global namespaces can access all namespaces. All
# namespaces can access global namespaces.
# To make other namespaces global, include them here:
kuryr_openstack_global_namespaces: default,openshift-monitoring

# If OpenStack cloud endpoints are accessible over HTTPS, provide the CA certificate
kuryr_openstack_ca: *<path-to-ca-certificate>*

openshift_master_open_ports:
- service: dns tcp
  port: 53/tcp
- service: dns udp
  port: 53/udp
openshift_node_open_ports:
- service: dns tcp
  port: 53/tcp
- service: dns udp
  port: 53/udp

# To set the pod network CIDR range, uncomment the following property and set its value:
#
# openshift_openstack_kuryr_pod_subnet_prefixlen: 24
#
# The subnet prefix length value must be smaller than the CIDR value that is
# set in the inventory file as openshift_openstack_kuryr_pod_subnet_cidr.
# By default, this value is /24.

# openshift_portal_net is the range that OpenShift services and their associated Octavia
# load balancer VIPs use. Amphora VMs use Neutron ports in the range that is defined by
# openshift_openstack_kuryr_service_pool_start and openshift_openstack_kuryr_service_pool_end.
#
# The value of openshift_portal_net in the OSEv3.yml file must be within the range that is
# defined by openshift_openstack_kuryr_service_subnet_cidr. This range must be half
# of openshift_openstack_kuryr_service_subnet_cidr's range. This practice ensures that
# openshift_portal_net does not overlap with the range that load balancers' VMs use, which is
# defined by openshift_openstack_kuryr_service_pool_start and openshift_openstack_kuryr_service_pool_end.
#
# For reference only, copy the value in the next line from OSEv3.yml:
# openshift_portal_net: *"172.30.0.0/16"*

openshift_openstack_kuryr_service_subnet_cidr: *"172.30.0.0/15"*
openshift_openstack_kuryr_service_pool_start: *"172.31.0.1"*
openshift_openstack_kuryr_service_pool_end: *"172.31.255.253"*

# End of Kuryr configuration

openshift_openstack_use_lbaas_load_balancer: *true*

openshift_openstack_docker_volume_size: "15"

# # Roll-your-own DNS
*openshift_openstack_external_nsupdate_keys:*
  public:
    *key_secret: '/alb8h0EAFWvb4i+CMA12w=='*
    *key_name: "update-key"*
    *key_algorithm: 'hmac-md5'*
    *server: '<ip-of-DNS>'*
  private:
    *key_secret: '/alb8h0EAFWvb4i+CMA12w=='*
    *key_name: "update-key"*
    *key_algorithm: 'hmac-md5'*
    *server: '<ip-of-DNS>'*

ansible_user: openshift

## cloud config
openshift_openstack_disable_root: true
openshift_openstack_user: openshift
Note

If namespace isolation is used, the kuryr-controller creates a new Neutron network and subnet for each namespace.
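Once the cluster is running, one way to spot these per-namespace resources, for example, is to list the kuryrnets custom resources (used later in this section) alongside the Neutron subnets they map to:

$ oc get kuryrnets
$ openstack subnet list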

Note

Network policy and nodePort services are not supported when Kuryr SDN is enabled.

Note

If Kuryr is enabled, OpenShift Container Platform services are implemented through OpenStack Octavia Amphora virtual machines.

Octavia does not support UDP load balancing. Services that expose UDP ports are not supported.

22.3.1.2.1. Configuring global namespace access

The kuryr_openstack_global_namespaces parameter contains the list that defines the global namespaces. By default, only the default and openshift-monitoring namespaces are included in this list.

If you are upgrading from a previous z-release of OpenShift Container Platform 3.11, note that access from the global namespaces to other namespaces is controlled by the *-allow_from_default security group.

Although the remote_group_id rule can control access from the global namespaces to other namespaces, using it can cause scaling and connectivity problems. To avoid these problems, switch from using remote_group_id in *_allow_from_default to remote_ip_prefix, as shown in the steps below:

  1. From the command line, retrieve your network's subnetCIDR value:

    $ oc get kuryrnets ns-default -o yaml | grep subnetCIDR
      subnetCIDR: 10.11.13.0/24
  2. Create TCP and UDP rules for this range:

    $ openstack security group rule create --remote-ip 10.11.13.0/24 --protocol tcp openshift-ansible-openshift.example.com-allow_from_default
    $ openstack security group rule create --remote-ip 10.11.13.0/24 --protocol udp openshift-ansible-openshift.example.com-allow_from_default
  3. Delete the security group rules that use remote_group_id:

    $ openstack security group show *-allow_from_default | grep remote_group_id
    $ openstack security group rule delete REMOTE_GROUP_ID
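To confirm that the replacement rules from steps 2 and 3 are in place, list the rules of the security group, assuming the example group name used above, and check that the TCP and UDP entries now carry a remote IP prefix rather than a remote security group:

$ openstack security group rule list openshift-ansible-openshift.example.com-allow_from_default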
Table 22.4. Description of variables in the all.yml file

openshift_openstack_clusterid
    Cluster identification name

openshift_openstack_public_dns_domain
    Public DNS domain name

openshift_openstack_dns_nameservers
    IP addresses of the DNS nameservers

openshift_openstack_public_hostname_suffix
    Adds a suffix to the node hostnames in the DNS records, both public and private

openshift_openstack_nsupdate_zone
    Zone to be updated with the OCP instance IPs

openshift_openstack_keypair_name
    Name of the keypair used to log in to the OCP instances

openshift_openstack_external_network_name
    Name of the OpenStack public network

openshift_openstack_default_image_name
    OpenStack image used for the OCP instances

openshift_openstack_num_masters
    Number of master nodes to deploy

openshift_openstack_num_infra
    Number of infrastructure nodes to deploy

openshift_openstack_num_cns
    Number of container native storage nodes to deploy

openshift_openstack_num_nodes
    Number of application nodes to deploy

openshift_openstack_master_flavor
    Name of the OpenStack flavor used for the master instances

openshift_openstack_default_flavor
    Name of the OpenStack flavor used for all instances for which a more specific flavor is not specified

openshift_openstack_use_lbaas_load_balancer
    Boolean value that enables the Octavia load balancer (Octavia must be installed)

openshift_openstack_docker_volume_size
    Minimum size of the Docker volume (required variable)

openshift_openstack_external_nsupdate_keys
    Updates the DNS with the instance IP addresses

ansible_user
    Ansible user used to deploy OpenShift Container Platform. "openshift" is the required name and must not be changed.

openshift_openstack_disable_root
    Boolean value that disables root access

openshift_openstack_user
    OCP instances are created with this user

openshift_openstack_node_subnet_name
    Name of the existing OpenStack subnet to use for the deployment. It should be the same subnet name used for your deployment host.

openshift_openstack_router_name
    Name of the existing OpenStack router to use for the deployment. It should be the same router name used for your deployment host.

openshift_openstack_master_floating_ip
    Defaults to true. Must be set to false if you do not want floating IPs assigned to the master nodes.

openshift_openstack_infra_floating_ip
    Defaults to true. Must be set to false if you do not want floating IPs assigned to the infrastructure nodes.

openshift_openstack_compute_floating_ip
    Defaults to true. Must be set to false if you do not want floating IPs assigned to the compute nodes.

openshift_use_openshift_sdn
    Must be set to false to disable openshift-sdn

openshift_use_kuryr
    Must be set to true to enable Kuryr SDN

use_trunk_ports
    Must be set to true to create the OpenStack VMs with trunk ports (required by Kuryr)

os_sdn_network_plugin_name
    Selects the SDN behavior. Must be set to cni for Kuryr

openshift_node_proxy_mode
    Must be set to userspace for Kuryr

openshift_master_open_ports
    Ports to be opened on the VMs when using Kuryr

kuryr_openstack_public_net_id
    Required by Kuryr. ID of the public OpenStack network from which floating IPs are obtained

openshift_kuryr_subnet_driver
    Kuryr subnet driver. Must be namespace to create a subnet per namespace

openshift_kuryr_sg_driver
    Kuryr security group driver. Must be namespace for namespace isolation

kuryr_openstack_global_namespaces
    Global namespaces used for namespace isolation. The default value is default,openshift-monitoring

kuryr_openstack_ca
    Path to the CA certificate of the cloud. Required if the OpenStack cloud endpoints are accessible over HTTPS

22.3.1.3. OSEv3 YAML file

The OSEv3.yml file specifies all the different parameters and customizations related to the OpenShift installation.

The following is a condensed version of the file with all the variables required for a successful deployment. Additional variables may be required depending on the customization needed for a specific OpenShift Container Platform deployment.

$ cat ~/inventory/group_vars/OSEv3.yml
---

openshift_deployment_type: openshift-enterprise
openshift_release: v3.11
oreg_url: registry.access.redhat.com/openshift3/ose-${component}:${version}
openshift_examples_modify_imagestreams: true
oreg_auth_user: <oreg_auth_user>
oreg_auth_password: <oreg_auth_pw>
# The following is required if you want to deploy the Operator Lifecycle Manager (OLM)
openshift_additional_registry_credentials: [{'host':'registry.connect.redhat.com','user':'REGISTRYCONNECTUSER','password':'REGISTRYCONNECTPASSWORD','test_image':'mongodb/enterprise-operator:0.3.2'}]

openshift_master_default_subdomain: "apps.{{ (openshift_openstack_clusterid|trim == '') | ternary(openshift_openstack_public_dns_domain, openshift_openstack_clusterid + '.' + openshift_openstack_public_dns_domain) }}"

openshift_master_cluster_public_hostname: "console.{{ (openshift_openstack_clusterid|trim == '') | ternary(openshift_openstack_public_dns_domain, openshift_openstack_clusterid + '.' + openshift_openstack_public_dns_domain) }}"

#OpenStack Credentials:
openshift_cloudprovider_kind: openstack
openshift_cloudprovider_openstack_auth_url: "{{ lookup('env','OS_AUTH_URL') }}"
openshift_cloudprovider_openstack_username: "{{ lookup('env','OS_USERNAME') }}"
openshift_cloudprovider_openstack_password: "{{ lookup('env','OS_PASSWORD') }}"
openshift_cloudprovider_openstack_tenant_name: "{{ lookup('env','OS_PROJECT_NAME') }}"
openshift_cloudprovider_openstack_blockstorage_version: v2
openshift_cloudprovider_openstack_domain_name: "{{ lookup('env','OS_USER_DOMAIN_NAME') }}"
openshift_cloudprovider_openstack_conf_file: <path_to_local_openstack_configuration_file>

#Use Cinder volume for Openshift registry:
openshift_hosted_registry_storage_kind: openstack
openshift_hosted_registry_storage_access_modes: ['ReadWriteOnce']
openshift_hosted_registry_storage_openstack_filesystem: xfs
openshift_hosted_registry_storage_volume_size: 30Gi


openshift_hosted_registry_storage_openstack_volumeID: d65209f0-9061-4cd8-8827-ae6e2253a18d
openshift_hostname_check: false
ansible_become: true

#Setting SDN (defaults to ovs-networkpolicy) not part of OSEv3.yml
#For more info, on which to choose, visit:
#https://docs.openshift.com/container-platform/3.11/architecture/networking/sdn.html#overview
networkPluginName: redhat/ovs-networkpolicy
#networkPluginName: redhat/ovs-multitenant

#Configuring identity providers with Ansible
#For initial cluster installations, the Deny All identity provider is configured
#by default. It is recommended to be configured with either htpasswd
#authentication, LDAP authentication, or Allowing all authentication (not recommended)
#For more info, visit:
#https://docs.openshift.com/container-platform/3.10/install_config/configuring_authentication.html#identity-providers-ansible
#Example of Allowing All
#openshift_master_identity_providers: [{'name': 'allow_all', 'login': 'true', 'challenge': 'true', 'kind': 'AllowAllPasswordIdentityProvider'}]


#Optional Metrics (uncomment below lines for installation)

#openshift_metrics_install_metrics: true
#openshift_metrics_cassandra_storage_type: dynamic
#openshift_metrics_storage_volume_size: 25Gi
#openshift_metrics_cassandra_nodeselector: {"node-role.kubernetes.io/infra":"true"}
#openshift_metrics_hawkular_nodeselector: {"node-role.kubernetes.io/infra":"true"}
#openshift_metrics_heapster_nodeselector: {"node-role.kubernetes.io/infra":"true"}

#Optional Aggregated Logging (uncomment below lines for installation)

#openshift_logging_install_logging: true
#openshift_logging_es_pvc_dynamic: true
#openshift_logging_es_pvc_size: 30Gi
#openshift_logging_es_cluster_size: 3
#openshift_logging_es_number_of_replicas: 1
#openshift_logging_es_nodeselector: {"node-role.kubernetes.io/infra":"true"}
#openshift_logging_kibana_nodeselector: {"node-role.kubernetes.io/infra":"true"}
#openshift_logging_curator_nodeselector: {"node-role.kubernetes.io/infra":"true"}
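For reference, with the all.yml values shown earlier in this section (openshift_openstack_clusterid: "openshift" and openshift_openstack_public_dns_domain: "example.com"), the two Jinja templates above evaluate to the following hostnames. This is only an illustration of the template logic, not values to set literally:

# Evaluated results of the templates above, given clusterid "openshift" and domain "example.com"
openshift_master_default_subdomain: "apps.openshift.example.com"
openshift_master_cluster_public_hostname: "console.openshift.example.com"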

For more details on any of the listed variables, see the example OpenShift-Ansible host inventory.
