Appendix B. Understanding the example configuration files
B.1. Understanding the luks_tang_inventory.yml file
B.1.1. Configuration parameters for disk encryption
- hc_nodes (required)
A list of hyperconverged hosts, identified by the back-end FQDN of each host, and the configuration details of those hosts. Configuration that is specific to a host is defined under that host’s back-end FQDN. Configuration that is common to all hosts is defined in the vars: section.

hc_nodes:
  hosts:
    host1backend.example.com:
      [configuration specific to this host]
    host2backend.example.com:
    host3backend.example.com:
    host4backend.example.com:
    host5backend.example.com:
    host6backend.example.com:
  vars:
    [configuration common to all hosts]
- blacklist_mpath_devices (optional)
By default, Red Hat Virtualization Host enables multipath configuration, which provides unique multipath names and worldwide identifiers for all disks, even when disks do not have underlying multipath configuration. Include this section if you do not have multipath configuration so that the multipath device names are not used for listed devices. Disks that are not listed here are assumed to have multipath configuration available, and require the path format /dev/mapper/<WWID> instead of /dev/sdx when defined in subsequent sections of the inventory file.

On a server with four devices (sda, sdb, sdc, and sdd), the following configuration blacklists only two devices. The path format /dev/mapper/<WWID> is expected for devices not in this list.
hc_nodes:
  hosts:
    host1backend.example.com:
      blacklist_mpath_devices:
        - sdb
        - sdc
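To find the /dev/mapper/<WWID> names required for devices that are not blacklisted, you can inspect the host directly. A quick check using standard multipath tooling (the device names sdb and sdc follow the example above):

# List multipath devices and their WWIDs; disks not in
# blacklist_mpath_devices are referenced as /dev/mapper/<WWID>.
multipath -ll

# Blacklisted devices keep their plain /dev/sdX names.
lsblk /dev/sdb /dev/sdc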
- gluster_infra_luks_devices (required)
A list of devices to encrypt and the encryption passphrase to use for each device.
hc_nodes:
  hosts:
    host1backend.example.com:
      gluster_infra_luks_devices:
        - devicename: /dev/sdb
          passphrase: Str0ngPa55#
- devicename
  The name of the device in the format /dev/sdx.
- passphrase
  The password to use for this device when configuring encryption. After disk encryption with Network-Bound Disk Encryption (NBDE) is configured, a new random key is generated, providing greater security.
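After deployment, you can confirm that the NBDE binding replaced the temporary passphrase-based setup. A minimal check, assuming /dev/sdb was encrypted as in the example above and the standard clevis and cryptsetup tools are installed on the host:

# Show the Clevis pins bound to the device; a successful NBDE setup
# lists a tang pin with your key server URL.
clevis luks list -d /dev/sdb

# Inspect the LUKS header and the key slots in use.
cryptsetup luksDump /dev/sdb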
- rootpassphrase (required)
The password that you used when you selected Encrypt my data during operating system installation on this host.
hc_nodes:
  hosts:
    host1backend.example.com:
      rootpassphrase: h1-Str0ngPa55#
- rootdevice (required)
The root device that was encrypted when you selected Encrypt my data during operating system installation on this host.
hc_nodes:
  hosts:
    host1backend.example.com:
      rootdevice: /dev/sda2
- networkinterface (required)
The network interface this host uses to reach the NBDE key server.
hc_nodes:
  hosts:
    host1backend.example.com:
      networkinterface: ens3s0f0
- ip_version (required)
Whether to use IPv4 or IPv6 networking. Valid values are IPv4 and IPv6. There is no default value. Mixed networks are not supported.

hc_nodes:
  vars:
    ip_version: IPv4
- ip_config_method (required)
Whether to use DHCP or static networking. Valid values are dhcp and static. There is no default value.

hc_nodes:
  vars:
    ip_config_method: dhcp

The other valid value for this option is static, which requires the following additional parameters, defined individually for each host:

hc_nodes:
  hosts:
    host1backend.example.com:
      ip_config_method: static
      host_ip_addr: 192.168.1.101
      host_ip_prefix: 24
      host_net_gateway: 192.168.1.100
    host2backend.example.com:
      ip_config_method: static
      host_ip_addr: 192.168.1.102
      host_ip_prefix: 24
      host_net_gateway: 192.168.1.100
    host3backend.example.com:
      ip_config_method: static
      host_ip_addr: 192.168.1.103
      host_ip_prefix: 24
      host_net_gateway: 192.168.1.100
- gluster_infra_tangservers
The address of your NBDE key server or servers, including http://. If your servers use a port other than the default (80), specify the port by appending :port to the end of the URL.

hc_nodes:
  vars:
    gluster_infra_tangservers:
      - url: http://key-server1.example.com
      - url: http://key-server2.example.com:80
B.1.2. Example luks_tang_inventory.yml
Dynamically allocated IP addresses
hc_nodes:
  hosts:
    host1-backend.example.com:
      blacklist_mpath_devices:
        - sda
        - sdb
        - sdc
      gluster_infra_luks_devices:
        - devicename: /dev/sdb
          passphrase: dev-sdb-encrypt-passphrase
        - devicename: /dev/sdc
          passphrase: dev-sdc-encrypt-passphrase
      rootpassphrase: host1-root-passphrase
      rootdevice: /dev/sda2
      networkinterface: eth0
    host2-backend.example.com:
      blacklist_mpath_devices:
        - sda
        - sdb
        - sdc
      gluster_infra_luks_devices:
        - devicename: /dev/sdb
          passphrase: dev-sdb-encrypt-passphrase
        - devicename: /dev/sdc
          passphrase: dev-sdc-encrypt-passphrase
      rootpassphrase: host2-root-passphrase
      rootdevice: /dev/sda2
      networkinterface: eth0
    host3-backend.example.com:
      blacklist_mpath_devices:
        - sda
        - sdb
        - sdc
      gluster_infra_luks_devices:
        - devicename: /dev/sdb
          passphrase: dev-sdb-encrypt-passphrase
        - devicename: /dev/sdc
          passphrase: dev-sdc-encrypt-passphrase
      rootpassphrase: host3-root-passphrase
      rootdevice: /dev/sda2
      networkinterface: eth0
  vars:
    ip_version: IPv4
    ip_config_method: dhcp
    gluster_infra_tangservers:
      - url: http://key-server1.example.com:80
      - url: http://key-server2.example.com:80
Static IP addresses
hc_nodes:
  hosts:
    host1-backend.example.com:
      blacklist_mpath_devices:
        - sda
        - sdb
        - sdc
      gluster_infra_luks_devices:
        - devicename: /dev/sdb
          passphrase: dev-sdb-encrypt-passphrase
        - devicename: /dev/sdc
          passphrase: dev-sdc-encrypt-passphrase
      rootpassphrase: host1-root-passphrase
      rootdevice: /dev/sda2
      networkinterface: eth0
      host_ip_addr: host1-static-ip
      host_ip_prefix: network-prefix
      host_net_gateway: default-network-gateway
    host2-backend.example.com:
      blacklist_mpath_devices:
        - sda
        - sdb
        - sdc
      gluster_infra_luks_devices:
        - devicename: /dev/sdb
          passphrase: dev-sdb-encrypt-passphrase
        - devicename: /dev/sdc
          passphrase: dev-sdc-encrypt-passphrase
      rootpassphrase: host2-root-passphrase
      rootdevice: /dev/sda2
      networkinterface: eth0
      host_ip_addr: host2-static-ip
      host_ip_prefix: network-prefix
      host_net_gateway: default-network-gateway
    host3-backend.example.com:
      blacklist_mpath_devices:
        - sda
        - sdb
        - sdc
      gluster_infra_luks_devices:
        - devicename: /dev/sdb
          passphrase: dev-sdb-encrypt-passphrase
        - devicename: /dev/sdc
          passphrase: dev-sdc-encrypt-passphrase
      rootpassphrase: host3-root-passphrase
      rootdevice: /dev/sda2
      networkinterface: eth0
      host_ip_addr: host3-static-ip
      host_ip_prefix: network-prefix
      host_net_gateway: default-network-gateway
  vars:
    ip_version: IPv4
    ip_config_method: static
    gluster_infra_tangservers:
      - url: http://key-server1.example.com:80
      - url: http://key-server2.example.com:80
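Because this inventory contains disk encryption passphrases, it is intended to be encrypted with Ansible Vault before use, with the vault password supplied when the playbook runs. A sketch of that workflow; the playbook path tasks/luks_tang_setup.yml is illustrative, so use the playbook named in your deployment guide:

# Encrypt the inventory so passphrases are not stored in plain text.
ansible-vault encrypt luks_tang_inventory.yml

# Run the disk encryption playbook with the vaulted inventory.
ansible-playbook -i luks_tang_inventory.yml tasks/luks_tang_setup.yml --ask-vault-pass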
B.2. Understanding the gluster_inventory.yml file
The gluster_inventory.yml file is an example Ansible inventory file that you can use to automate the deployment of Red Hat Hyperconverged Infrastructure for Virtualization using Ansible.
The single_node_gluster_inventory.yml file is the same as the gluster_inventory.yml file, except that the hosts section contains only one host, for a single node deployment.
You can find this file at /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/gluster_inventory.yml on any hyperconverged host.
B.2.1. Default host groups
The gluster_inventory.yml example file defines two host groups and their configuration in YAML format. You can use these host groups directly if you want all nodes to host all storage domains.
- hc_nodes
A list of hyperconverged hosts, identified by the back-end FQDN of each host, and the configuration details of those hosts. Configuration that is specific to a host is defined under that host’s back-end FQDN. Configuration that is common to all hosts is defined in the vars: section.

hc_nodes:
  hosts:
    host1backend.example.com:
      [configuration specific to this host]
    host2backend.example.com:
    host3backend.example.com:
    host4backend.example.com:
    host5backend.example.com:
    host6backend.example.com:
  vars:
    [configuration common to all hosts]
- gluster
A list of hosts, identified by the front-end FQDN of each host. These hosts serve as additional storage domain access points, so this list of nodes does not include the first host.

If you want all nodes to host all storage domains, place storage_domains: and all storage domain definitions under the vars: section.

gluster:
  hosts:
    host2frontend.example.com:
    host3frontend.example.com:
    host4frontend.example.com:
    host5frontend.example.com:
    host6frontend.example.com:
  vars:
    storage_domains:
      [storage domain definitions common to all hosts]
B.2.2. Configuration parameters for hyperconverged nodes
B.2.2.1. Multipath devices
blacklist_mpath_devices (optional)
By default, Red Hat Virtualization Host enables multipath configuration, which provides unique multipath names and worldwide identifiers for all disks, even when disks do not have underlying multipath configuration. Include this section if you do not have multipath configuration so that the multipath device names are not used for listed devices. Disks that are not listed here are assumed to have multipath configuration available, and require the path format /dev/mapper/<WWID> instead of /dev/sdx when defined in subsequent sections of the inventory file.

On a server with four devices (sda, sdb, sdc, and sdd), the following configuration blacklists only two devices. The path format /dev/mapper/<WWID> is expected for devices not in this list.

hc_nodes:
  hosts:
    host1backend.example.com:
      blacklist_mpath_devices:
        - sdb
        - sdc
Important: Do not list encrypted devices (luks_* devices) in blacklist_mpath_devices, as they require multipath configuration to work.
B.2.2.2. Deduplication and compression
gluster_infra_vdo (optional)
Include this section to define a list of devices to use deduplication and compression. These devices require the /dev/mapper/<name> path format when you define them as volume groups in gluster_infra_volume_groups. Each device listed must have the following information:

- name
  A short name for the VDO device, for example vdo_sdc.
- device
  The device to use, for example, /dev/sdc.
- logicalsize
  The logical size of the VDO volume. Set this to ten times the size of the physical disk. For example, if you have a 500 GB disk, set logicalsize: '5000G'.
- emulate512
  If you use devices with a 4 KB block size, set this to on.
- slabsize
  If the logical size of the volume is 1000 GB or larger, set this to 32G. If the logical size is smaller than 1000 GB, set this to 2G.
- blockmapcachesize
  Set this to 128M.
- writepolicy
  Set this to auto.
For example:
hc_nodes:
  hosts:
    host1backend.example.com:
      gluster_infra_vdo:
        - { name: 'vdo_sdc', device: '/dev/sdc', logicalsize: '5000G', emulate512: 'off', slabsize: '32G',
            blockmapcachesize: '128M', writepolicy: 'auto' }
        - { name: 'vdo_sdd', device: '/dev/sdd', logicalsize: '500G', emulate512: 'off', slabsize: '2G',
            blockmapcachesize: '128M', writepolicy: 'auto' }
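After deployment you can verify the deduplication and compression savings on each VDO volume. A quick check, assuming the vdo management tools are installed on the host:

# Report logical versus physical usage and space savings for all VDO volumes.
vdostats --human-readable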
B.2.2.3. Cluster definition
cluster_nodes (required)
Defines the list of nodes that are part of the cluster, using the back-end FQDN of each node, and creates the cluster.

hc_nodes:
  vars:
    cluster_nodes:
      - host1backend.example.com
      - host2backend.example.com
      - host3backend.example.com
gluster_features_hci_cluster (required)
Identifies cluster_nodes as part of a hyperconverged cluster.

hc_nodes:
  vars:
    gluster_features_hci_cluster: "{{ cluster_nodes }}"
gluster_features_hci_volumes (required)
Defines the layout of the Gluster volumes across the hyperconverged nodes.

- volname
  The name of the Gluster volume to create.
- brick
  The location at which to create the brick.
- arbiter
  Set to 1 for arbitrated volumes and 0 for a fully replicated volume.
- servers
  The list of back-end FQDN addresses for the hosts on which to create bricks for this volume.

There are two format options for this parameter. Only one of these formats is supported per deployment.
Format 1: Creates bricks for the specified volumes across all hosts
hc_nodes:
  vars:
    gluster_features_hci_volumes:
      - volname: engine
        brick: /gluster_bricks/engine/engine
        arbiter: 0
      - volname: data
        brick: /gluster_bricks/data1/data1,/gluster_bricks/data2/data2
        arbiter: 0
      - volname: vmstore
        brick: /gluster_bricks/vmstore/vmstore
        arbiter: 0
Format 2: Creates bricks for the specified volumes on specified hosts
hc_nodes:
  vars:
    gluster_features_hci_volumes:
      - volname: data
        brick: /gluster_bricks/data/data
        arbiter: 0
        servers:
          - host4backend.example.com
          - host5backend.example.com
          - host6backend.example.com
          - host7backend.example.com
          - host8backend.example.com
          - host9backend.example.com
      - volname: vmstore
        brick: /gluster_bricks/vmstore/vmstore
        arbiter: 0
        servers:
          - host1backend.example.com
          - host2backend.example.com
          - host3backend.example.com
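After deployment, you can confirm the resulting volume layout from any hyperconverged host. An illustrative check using standard Gluster commands, with a volume name from the examples above:

# Show the replica count, brick list, and options of a deployed volume.
gluster volume info engine

# Confirm that all bricks are online.
gluster volume status engine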
B.2.2.4. Storage infrastructure
gluster_infra_volume_groups (required)
This section creates the volume groups that contain the logical volumes.

hc_nodes:
  hosts:
    host1backend.example.com:
      gluster_infra_volume_groups:
        - vgname: gluster_vg_sdb
          pvname: /dev/sdb
        - vgname: gluster_vg_sdc
          pvname: /dev/mapper/vdo_sdc
gluster_infra_mount_devices (required)
This section creates the logical volumes that form Gluster bricks.

hc_nodes:
  hosts:
    host1backend.example.com:
      gluster_infra_mount_devices:
        - path: /gluster_bricks/engine
          lvname: gluster_lv_engine
          vgname: gluster_vg_sdb
        - path: /gluster_bricks/data
          lvname: gluster_lv_data
          vgname: gluster_vg_sdc
        - path: /gluster_bricks/vmstore
          lvname: gluster_lv_vmstore
          vgname: gluster_vg_sdd
gluster_infra_thinpools (optional)
This section defines logical thin pools for use by thinly provisioned volumes. Thin pools are not suitable for the engine volume, but can be used for the vmstore and data volume bricks.

- vgname
  The name of the volume group that contains this thin pool.
- thinpoolname
  A name for the thin pool, for example, gluster_thinpool_sdc.
- thinpoolsize
  The sum of the sizes of all logical volumes to be created in this volume group.
- poolmetadatasize
  Set to 16G; this is the recommended size for supported deployments.

hc_nodes:
  hosts:
    host1backend.example.com:
      gluster_infra_thinpools:
        - {vgname: 'gluster_vg_sdc', thinpoolname: 'gluster_thinpool_sdc', thinpoolsize: '500G', poolmetadatasize: '16G'}
        - {vgname: 'gluster_vg_sdd', thinpoolname: 'gluster_thinpool_sdd', thinpoolsize: '500G', poolmetadatasize: '16G'}
gluster_infra_cache_vars (optional)
This section defines cache logical volumes to improve performance for slow devices. A fast cache device is attached to a thin pool, and requires gluster_infra_thinpools to be defined.

- vgname
  The name of a volume group with a slow device that requires a fast external cache.
- cachedisk
  The paths of the slow and fast devices, separated by a comma. For example, to use a cache device sde with the slow device sdb, specify /dev/sdb,/dev/sde.
- cachelvname
  A name for this cache logical volume.
- cachethinpoolname
  The thin pool to which the fast cache volume is attached.
- cachelvsize
  The size of the cache logical volume. Around 0.01% of this size is used for cache metadata.
- cachemode
  The cache mode. Valid values are writethrough and writeback.

hc_nodes:
  hosts:
    host1backend.example.com:
      gluster_infra_cache_vars:
        - vgname: gluster_vg_sdb
          cachedisk: /dev/sdb,/dev/sde
          cachelvname: cachelv_thinpool_sdb
          cachethinpoolname: gluster_thinpool_sdb
          cachelvsize: '250G'
          cachemode: writethrough
gluster_infra_thick_lvs (required)
The thickly provisioned logical volumes that are used to create bricks. Bricks for the engine volume must be thickly provisioned.

- vgname
  The name of the volume group that contains the logical volume.
- lvname
  The name of the logical volume.
- size
  The size of the logical volume. The engine logical volume requires 100G.

hc_nodes:
  hosts:
    host1backend.example.com:
      gluster_infra_thick_lvs:
        - vgname: gluster_vg_sdb
          lvname: gluster_lv_engine
          size: 100G
gluster_infra_lv_logicalvols (required)
The thinly provisioned logical volumes that are used to create bricks.

- vgname
  The name of the volume group that contains the logical volume.
- thinpool
  The thin pool that contains the logical volume, if this volume is thinly provisioned.
- lvname
  The name of the logical volume.
- lvsize
  The size of the logical volume, for example, 200G.

hc_nodes:
  hosts:
    host1backend.example.com:
      gluster_infra_lv_logicalvols:
        - vgname: gluster_vg_sdc
          thinpool: gluster_thinpool_sdc
          lvname: gluster_lv_data
          lvsize: 200G
        - vgname: gluster_vg_sdd
          thinpool: gluster_thinpool_sdd
          lvname: gluster_lv_vmstore
          lvsize: 200G
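Once the playbook has run, the volume groups, thin pools, and logical volumes defined above can be verified with standard LVM tooling. An illustrative check using the volume group names from these examples:

# List logical volumes with their volume group, backing thin pool, and size.
lvs -o lv_name,vg_name,pool_lv,lv_size gluster_vg_sdb gluster_vg_sdc gluster_vg_sdd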
gluster_infra_disktype (required)
Specifies the underlying hardware configuration of the disks. Set this to the value that matches your hardware: RAID6, RAID5, or JBOD.

hc_nodes:
  vars:
    gluster_infra_disktype: RAID6
gluster_infra_diskcount (required)
Specifies the number of data disks in the RAID set. For a JBOD disk type, set this to 1.

hc_nodes:
  vars:
    gluster_infra_diskcount: 10
gluster_infra_stripe_unit_size (required)
The stripe size of the RAID set, in megabytes.

hc_nodes:
  vars:
    gluster_infra_stripe_unit_size: 256
gluster_features_force_varlogsizecheck (required)
Set this to true if you want to verify that your /var/log partition has sufficient free space during the deployment process. It is important to have sufficient space for logs, but it is not required to verify space requirements at deployment time if you plan to monitor space requirements carefully.

hc_nodes:
  vars:
    gluster_features_force_varlogsizecheck: false
gluster_set_selinux_labels (required)
Ensures that volumes can be accessed when SELinux is enabled. Set this to true if SELinux is enabled on this host.

hc_nodes:
  vars:
    gluster_set_selinux_labels: true
Recommendation for LV size
The logical volume for the engine brick must be a thick LV of size 100GB. Other bricks are created as thin LVs, with 16GB reserved for thin pool metadata and 16GB reserved for spare metadata.

Example:

If the host has a disk of size 1TB, then:
  engine brick size             = 100GB (thick LV)
  Pool metadata size            = 16GB
  Spare metadata size           = 16GB
  Available space for thin pool = 1TB - (100GB + 16GB + 16GB) = 868GB

Other bricks for volumes can be created from the available thin pool storage space of 868GB; for example, a vmstore brick of 200GB and a data brick of 668GB.
B.2.2.5. Firewall and network infrastructure
gluster_infra_fw_ports (required)
A list of ports to open between all nodes, in the format <port>/<protocol>.

hc_nodes:
  vars:
    gluster_infra_fw_ports:
      - 2049/tcp
      - 54321/tcp
      - 5900-6923/tcp
      - 16514/tcp
      - 5666/tcp
      - 16514/tcp
gluster_infra_fw_permanent (required)
Ensures the ports listed in gluster_infra_fw_ports are open after nodes are rebooted. Set this to true for production use cases.

hc_nodes:
  vars:
    gluster_infra_fw_permanent: true
gluster_infra_fw_state (required)
Enables the firewall. Set this to enabled for production use cases.

hc_nodes:
  vars:
    gluster_infra_fw_state: enabled
gluster_infra_fw_zone (required)
Specifies the firewall zone to which these gluster_infra_fw_* parameters are applied.

hc_nodes:
  vars:
    gluster_infra_fw_zone: public
gluster_infra_fw_services (required)
A list of services to allow through the firewall. Ensure glusterfs is defined here.

hc_nodes:
  vars:
    gluster_infra_fw_services:
      - glusterfs
B.2.2.6. Storage domains
storage_domains (required)
Creates the specified storage domains.

- name
  The name of the storage domain to create.
- host
  The front-end FQDN of the first host. Do not use the IP address.
- address
  The back-end FQDN address of the first host. Do not use the IP address.
- path
  The path of the Gluster volume that provides the storage domain.
- function
  Set this to data; this is the only supported type of storage domain.
- mount_options
  Specifies additional mount options. The backup-volfile-servers option is required to specify the other hosts that provide the volume. The xlator-option='transport.address-family=inet6' option is required for IPv6 configurations.
IPv4 configuration
gluster:
  vars:
    storage_domains:
      - {"name":"data","host":"host1-frontend-network-FQDN","address":"host1-backend-network-FQDN","path":"/data","function":"data","mount_options":"backup-volfile-servers=host2-backend-network-FQDN:host3-backend-network-FQDN"}
      - {"name":"vmstore","host":"host1-frontend-network-FQDN","address":"host1-backend-network-FQDN","path":"/vmstore","function":"data","mount_options":"backup-volfile-servers=host2-backend-network-FQDN:host3-backend-network-FQDN"}
IPv6 configuration
gluster:
  vars:
    storage_domains:
      - {"name":"data","host":"host1-frontend-network-FQDN","address":"host1-backend-network-FQDN","path":"/data","function":"data","mount_options":"backup-volfile-servers=host2-backend-network-FQDN:host3-backend-network-FQDN,xlator-option='transport.address-family=inet6'"}
      - {"name":"vmstore","host":"host1-frontend-network-FQDN","address":"host1-backend-network-FQDN","path":"/vmstore","function":"data","mount_options":"backup-volfile-servers=host2-backend-network-FQDN:host3-backend-network-FQDN,xlator-option='transport.address-family=inet6'"}
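To sanity-check these values before deployment, you can mount a Gluster volume manually with the same options the storage domain will use. A hedged sketch using the placeholder FQDNs from the IPv4 example, run from any host with the glusterfs client installed:

# Mount the data volume with the documented backup-volfile-servers option.
mkdir -p /mnt/datacheck
mount -t glusterfs \
  -o backup-volfile-servers=host2-backend-network-FQDN:host3-backend-network-FQDN \
  host1-backend-network-FQDN:/data /mnt/datacheck

# Clean up after confirming the mount works.
umount /mnt/datacheck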
B.2.3. Example gluster_inventory.yml file
hc_nodes:
  hosts:
    # Host1
    <host1-backend-network-FQDN>:
      # Blacklist multipath devices which are used for gluster bricks.
      # If you omit blacklist_mpath_devices, all devices will be whitelisted.
      # If the disks are not blacklisted, it is assumed that multipath
      # configuration exists on the server, and you should provide
      # /dev/mapper/<WWID> instead of /dev/sdx.
      blacklist_mpath_devices:
        - sdb
        - sdc

      # Enable this section 'gluster_infra_vdo' if dedupe & compression is
      # required on that storage volume.
      # The variables refer to:
      # name        - VDO volume name to be used
      # device      - Disk on which the VDO volume is to be created
      # logicalsize - Logical size of the VDO volume. This value is 10 times
      #               the size of the physical disk
      # emulate512  - VDO device is made as a 4KB block sized storage volume (4KN)
      # slabsize    - VDO slab size. If the VDO logical size >= 1000G then
      #               slabsize is 32G, else slabsize is 2G
      #
      # The following VDO values are as per recommendation and treated as constants:
      # blockmapcachesize - 128M
      # writepolicy       - auto
      #
      # gluster_infra_vdo:
      #   - { name: 'vdo_sdc', device: '/dev/sdc', logicalsize: '5000G', emulate512: 'off', slabsize: '32G',
      #       blockmapcachesize: '128M', writepolicy: 'auto' }
      #   - { name: 'vdo_sdd', device: '/dev/sdd', logicalsize: '3000G', emulate512: 'off', slabsize: '32G',
      #       blockmapcachesize: '128M', writepolicy: 'auto' }

      # When dedupe and compression is enabled on a device, use the pvname
      # for that device as '/dev/mapper/<vdo_device_name>'.
      #
      # The variables refer to:
      # vgname - VG to be created on the disk
      # pvname - Physical disk (/dev/sdc) or VDO volume (/dev/mapper/vdo_sdc)
      gluster_infra_volume_groups:
        - vgname: gluster_vg_sdb
          pvname: /dev/sdb
        - vgname: gluster_vg_sdc
          pvname: /dev/mapper/vdo_sdc
        - vgname: gluster_vg_sdd
          pvname: /dev/mapper/vdo_sdd

      gluster_infra_mount_devices:
        - path: /gluster_bricks/engine
          lvname: gluster_lv_engine
          vgname: gluster_vg_sdb
        - path: /gluster_bricks/data
          lvname: gluster_lv_data
          vgname: gluster_vg_sdc
        - path: /gluster_bricks/vmstore
          lvname: gluster_lv_vmstore
          vgname: gluster_vg_sdd

      # 'thinpoolsize' is the sum of the sizes of all LVs to be created on that VG.
      # With VDO enabled, 'thinpoolsize' is 10 times the sum of the sizes of all
      # LVs to be created on that VG. The recommended value for 'poolmetadatasize'
      # is 16GB, and it should be considered exclusive of 'thinpoolsize'.
      gluster_infra_thinpools:
        - {vgname: 'gluster_vg_sdc', thinpoolname: 'gluster_thinpool_sdc', thinpoolsize: '500G', poolmetadatasize: '16G'}
        - {vgname: 'gluster_vg_sdd', thinpoolname: 'gluster_thinpool_sdd', thinpoolsize: '500G', poolmetadatasize: '16G'}

      # Enable the following section if LVM cache is to be enabled.
      # The variables are:
      # vgname            - VG with the slow HDD device that needs caching
      # cachedisk         - Comma-separated values of the slow HDD and the fast SSD.
      #                     In this example, /dev/sdb is the slow HDD and /dev/sde is the fast SSD
      # cachelvname       - LV cache name
      # cachethinpoolname - Thinpool to which the fast SSD is to be attached
      # cachelvsize       - Size of the cache data LV. This is the SSD_size - (1/1000) of SSD_size;
      #                     1/1000th of the SSD space will be used by the cache LV metadata
      # cachemode         - writethrough or writeback
      # gluster_infra_cache_vars:
      #   - vgname: gluster_vg_sdb
      #     cachedisk: /dev/sdb,/dev/sde
      #     cachelvname: cachelv_thinpool_sdb
      #     cachethinpoolname: gluster_thinpool_sdb
      #     cachelvsize: '250G'
      #     cachemode: writethrough

      # Only the engine brick needs to be thickly provisioned.
      # The engine brick requires 100GB of disk space.
      gluster_infra_thick_lvs:
        - vgname: gluster_vg_sdb
          lvname: gluster_lv_engine
          size: 100G

      gluster_infra_lv_logicalvols:
        - vgname: gluster_vg_sdc
          thinpool: gluster_thinpool_sdc
          lvname: gluster_lv_data
          lvsize: 200G
        - vgname: gluster_vg_sdd
          thinpool: gluster_thinpool_sdd
          lvname: gluster_lv_vmstore
          lvsize: 200G

    # Host2 (the comments shown for Host1 apply here as well)
    <host2-backend-network-FQDN>:
      blacklist_mpath_devices:
        - sdb
        - sdc
      # gluster_infra_vdo:
      #   - { name: 'vdo_sdc', device: '/dev/sdc', logicalsize: '5000G', emulate512: 'off', slabsize: '32G',
      #       blockmapcachesize: '128M', writepolicy: 'auto' }
      #   - { name: 'vdo_sdd', device: '/dev/sdd', logicalsize: '3000G', emulate512: 'off', slabsize: '32G',
      #       blockmapcachesize: '128M', writepolicy: 'auto' }
      gluster_infra_volume_groups:
        - vgname: gluster_vg_sdb
          pvname: /dev/sdb
        - vgname: gluster_vg_sdc
          pvname: /dev/mapper/vdo_sdc
        - vgname: gluster_vg_sdd
          pvname: /dev/mapper/vdo_sdd
      gluster_infra_mount_devices:
        - path: /gluster_bricks/engine
          lvname: gluster_lv_engine
          vgname: gluster_vg_sdb
        - path: /gluster_bricks/data
          lvname: gluster_lv_data
          vgname: gluster_vg_sdc
        - path: /gluster_bricks/vmstore
          lvname: gluster_lv_vmstore
          vgname: gluster_vg_sdd
      gluster_infra_thinpools:
        - {vgname: 'gluster_vg_sdc', thinpoolname: 'gluster_thinpool_sdc', thinpoolsize: '500G', poolmetadatasize: '16G'}
        - {vgname: 'gluster_vg_sdd', thinpoolname: 'gluster_thinpool_sdd', thinpoolsize: '500G', poolmetadatasize: '16G'}
      # gluster_infra_cache_vars:
      #   - vgname: gluster_vg_sdb
      #     cachedisk: /dev/sdb,/dev/sde
      #     cachelvname: cachelv_thinpool_sdb
      #     cachethinpoolname: gluster_thinpool_sdb
      #     cachelvsize: '250G'
      #     cachemode: writethrough
      gluster_infra_thick_lvs:
        - vgname: gluster_vg_sdb
          lvname: gluster_lv_engine
          size: 100G
      gluster_infra_lv_logicalvols:
        - vgname: gluster_vg_sdc
          thinpool: gluster_thinpool_sdc
          lvname: gluster_lv_data
          lvsize: 200G
        - vgname: gluster_vg_sdd
          thinpool: gluster_thinpool_sdd
          lvname: gluster_lv_vmstore
          lvsize: 200G

    # Host3 (the comments shown for Host1 apply here as well)
    <host3-backend-network-FQDN>:
      blacklist_mpath_devices:
        - sdb
        - sdd
      # gluster_infra_vdo:
      #   - { name: 'vdo_sdc', device: '/dev/sdc', logicalsize: '5000G', emulate512: 'off', slabsize: '32G',
      #       blockmapcachesize: '128M', writepolicy: 'auto' }
      #   - { name: 'vdo_sdd', device: '/dev/sdd', logicalsize: '3000G', emulate512: 'off', slabsize: '32G',
      #       blockmapcachesize: '128M', writepolicy: 'auto' }
      gluster_infra_volume_groups:
        - vgname: gluster_vg_sdb
          pvname: /dev/sdb
        - vgname: gluster_vg_sdc
          pvname: /dev/mapper/vdo_sdc
        - vgname: gluster_vg_sdd
          pvname: /dev/mapper/vdo_sdd
      gluster_infra_mount_devices:
        - path: /gluster_bricks/engine
          lvname: gluster_lv_engine
          vgname: gluster_vg_sdb
        - path: /gluster_bricks/data
          lvname: gluster_lv_data
          vgname: gluster_vg_sdc
        - path: /gluster_bricks/vmstore
          lvname: gluster_lv_vmstore
          vgname: gluster_vg_sdd
      gluster_infra_thinpools:
        - {vgname: 'gluster_vg_sdc', thinpoolname: 'gluster_thinpool_sdc', thinpoolsize: '500G', poolmetadatasize: '16G'}
        - {vgname: 'gluster_vg_sdd', thinpoolname: 'gluster_thinpool_sdd', thinpoolsize: '500G', poolmetadatasize: '16G'}
      # gluster_infra_cache_vars:
      #   - vgname: gluster_vg_sdb
      #     cachedisk: /dev/sdb,/dev/sde
      #     cachelvname: cachelv_thinpool_sdb
      #     cachethinpoolname: gluster_thinpool_sdb
      #     cachelvsize: '250G'
      #     cachemode: writethrough
      gluster_infra_thick_lvs:
        - vgname: gluster_vg_sdb
          lvname: gluster_lv_engine
          size: 100G
      gluster_infra_lv_logicalvols:
        - vgname: gluster_vg_sdc
          thinpool: gluster_thinpool_sdc
          lvname: gluster_lv_data
          lvsize: 200G
        - vgname: gluster_vg_sdd
          thinpool: gluster_thinpool_sdd
          lvname: gluster_lv_vmstore
          lvsize: 200G

  # Common configurations
  vars:
    # For an IPv6 based deployment, "gluster_features_enable_ipv6" needs to be
    # enabled; uncomment the line below:
    # gluster_features_enable_ipv6: true

    # Add the required hosts to the cluster. It can be a 3, 6, 9, or 12 host cluster.
    cluster_nodes:
      - <host1-backend-network-FQDN>
      - <host2-backend-network-FQDN>
      - <host3-backend-network-FQDN>
    gluster_features_hci_cluster: "{{ cluster_nodes }}"

    # Create Gluster volumes for the hyperconverged setup in 2 formats.
    # format-1: Create bricks for gluster 1x3 replica volumes by default
    #           on the first 3 hosts
    # format-2: Create bricks on the specified hosts; it can create
    #           nx3 distributed-replicated or distributed arbitrated
    #           replicate volumes
    # Note: format-1 and format-2 are mutually exclusive, i.e. either
    #       format-1 or format-2 is to be used. Don't mix the formats for
    #       different volumes.

    # Format-1 - Creates gluster 1x3 replicate or arbitrated replicate volumes
    #            engine, vmstore, data, with bricks on the first 3 hosts
    gluster_features_hci_volumes:
      - volname: engine
        brick: /gluster_bricks/engine/engine
        arbiter: 0
      - volname: data
        brick: /gluster_bricks/data/data
        arbiter: 0
      - volname: vmstore
        brick: /gluster_bricks/vmstore/vmstore
        arbiter: 0

    # Format-2 - Allows creating nx3 volumes, with bricks on the specified hosts
    # gluster_features_hci_volumes:
    #   - volname: engine
    #     brick: /gluster_bricks/engine/engine
    #     arbiter: 0
    #     servers:
    #       - host1
    #       - host2
    #       - host3
    #
    #   # The following creates a 2x3 'data' gluster volume with bricks on
    #   # host4, host5, host6, host7, host8, host9
    #   - volname: data
    #     brick: /gluster_bricks/data/data
    #     arbiter: 0
    #     servers:
    #       - host4
    #       - host5
    #       - host6
    #       - host7
    #       - host8
    #       - host9
    #
    #   # The following creates a 2x3 'vmstore' gluster volume with 2 bricks
    #   # for each host
    #   - volname: vmstore
    #     brick: /gluster_bricks/vmstore1/vmstore1,/gluster_bricks/vmstore2/vmstore2
    #     arbiter: 0
    #     servers:
    #       - host1
    #       - host2
    #       - host3

    # Firewall setup
    gluster_infra_fw_ports:
      - 2049/tcp
      - 54321/tcp
      - 5900-6923/tcp
      - 16514/tcp
      - 5666/tcp
      - 16514/tcp
    gluster_infra_fw_permanent: true
    gluster_infra_fw_state: enabled
    gluster_infra_fw_zone: public
    gluster_infra_fw_services:
      - glusterfs

    # Allowed values for 'gluster_infra_disktype' - RAID6, RAID5, JBOD
    gluster_infra_disktype: RAID6

    # 'gluster_infra_diskcount' is the number of data disks in the RAID set.
    # Note: for JBOD it is 1.
    gluster_infra_diskcount: 10

    gluster_infra_stripe_unit_size: 256
    gluster_features_force_varlogsizecheck: false
    gluster_set_selinux_labels: true

## Auto add hosts vars
gluster:
  hosts:
    <host2-frontend-network-FQDN>:
    <host3-frontend-network-FQDN>:
  vars:
    storage_domains:
      - {"name":"data","host":"host1-frontend-network-FQDN","address":"host1-backend-network-FQDN","path":"/data","function":"data","mount_options":"backup-volfile-servers=host2-backend-network-FQDN:host3-backend-network-FQDN"}
      - {"name":"vmstore","host":"host1-frontend-network-FQDN","address":"host1-backend-network-FQDN","path":"/vmstore","function":"data","mount_options":"backup-volfile-servers=host2-backend-network-FQDN:host3-backend-network-FQDN"}
    # For an IPv6 based deployment, an additional mount option is required,
    # i.e. xlator-option='transport.address-family=inet6'; replace the
    # storage_domains above with the following. Ex:
    # storage_domains:
    #   - {"name":"data","host":"host1-frontend-network-FQDN","address":"host1-backend-network-FQDN","path":"/data","function":"data","mount_options":"backup-volfile-servers=host2-backend-network-FQDN:host3-backend-network-FQDN,xlator-option='transport.address-family=inet6'"}
    #   - {"name":"vmstore","host":"host1-frontend-network-FQDN","address":"host1-backend-network-FQDN","path":"/vmstore","function":"data","mount_options":"backup-volfile-servers=host2-backend-network-FQDN:host3-backend-network-FQDN,xlator-option='transport.address-family=inet6'"}
B.3. Understanding the he_gluster_vars.json file
The he_gluster_vars.json file is an example Ansible variable file. The variables in this file need to be defined in order to deploy Red Hat Hyperconverged Infrastructure for Virtualization.
You can find an example file at /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/he_gluster_vars.json on any hyperconverged host.
Example he_gluster_vars.json file
{ "he_appliance_password": "encrypt-password-using-ansible-vault", "he_admin_password": "UI-password-for-login", "he_domain_type": "glusterfs", "he_fqdn": "FQDN-for-Hosted-Engine", "he_vm_mac_addr": "Valid MAC address", "he_default_gateway": "Valid Gateway", "he_mgmt_network": "ovirtmgmt", "he_storage_domain_name": "HostedEngine", "he_storage_domain_path": "/engine", "he_storage_domain_addr": "host1-backend-network-FQDN", "he_mount_options": "backup-volfile-servers=host2-backend-network-FQDN:host3-backend-network-FQDN", "he_bridge_if": "interface name for bridge creation", "he_enable_hc_gluster_service": true, "he_mem_size_MB": "16384", "he_cluster": "Default", "he_vcpus": "4" }
Red Hat recommends encrypting this file. See Working with files encrypted using Ansible Vault for more information.
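A minimal sketch of that workflow: encrypt the variable file in place, then provide the vault password when the deployment playbook runs. The playbook invocation below is illustrative; use the one named in your deployment guide:

# Encrypt the variable file; you are prompted for a new vault password.
ansible-vault encrypt he_gluster_vars.json

# Supply the vault password again at deployment time.
ansible-playbook -i gluster_inventory.yml hc_deployment.yml \
  --extra-vars='@he_gluster_vars.json' --ask-vault-pass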
B.3.1. Required variables
he_appliance_password
  The password for the hosted engine. For a production cluster, use an encrypted value created with Ansible Vault.
he_admin_password
  The password for the admin account of the hosted engine. For a production cluster, use an encrypted value created with Ansible Vault.
he_domain_type
  The type of storage domain. Set to glusterfs.
he_fqdn
  The FQDN for the hosted engine virtual machine.
he_vm_mac_addr
  The MAC address for the appropriate network device of the hosted engine virtual machine. You can skip this option for hosted deployment with static IP configuration, as in such cases the MAC address for the hosted engine is automatically generated.
he_default_gateway
  The FQDN of the gateway to be used.
he_mgmt_network
  The name of the management network. Set to ovirtmgmt.
he_storage_domain_name
  The name of the storage domain to create for the hosted engine. Set to HostedEngine.
he_storage_domain_path
  The path of the Gluster volume that provides the storage domain. Set to /engine.
he_storage_domain_addr
  The back-end FQDN of the first host providing the engine domain.
he_mount_options
  Specifies additional mount options.
  For a three node deployment with IPv4 configurations, set:

    "he_mount_options":"backup-volfile-servers=host2-backend-network-FQDN:host3-backend-network-FQDN"

  The he_mount_options variable is not required for an IPv4-based single node deployment of Red Hat Hyperconverged Infrastructure for Virtualization.

  For a three node deployment with IPv6 configurations, set:

    "he_mount_options":"backup-volfile-servers=host2-backend-network-FQDN:host3-backend-network-FQDN,xlator-option='transport.address-family=inet6'"

  For a single node deployment with IPv6 configurations, set:

    "he_mount_options":"xlator-option='transport.address-family=inet6'"
he_bridge_if
  The name of the interface to use for bridge creation.
he_enable_hc_gluster_service
  Enables Gluster services. Set to true.
he_mem_size_MB
  The amount of memory allocated to the hosted engine virtual machine, in megabytes.
he_cluster
  The name of the cluster in which the hyperconverged hosts are placed.
he_vcpus
  The number of vCPUs used on the engine VM. By default, 4 vCPUs are allocated to the hosted engine virtual machine.
B.3.2. Required variables for static network configurations
DHCP configuration is used on the Hosted Engine VM by default. However, if you want to use static IP or FQDN, define the following variables:
he_vm_ip_addr
  Static IP address for the Hosted Engine VM (IPv4 or IPv6).
he_vm_ip_prefix
  IP prefix for the Hosted Engine VM (IPv4 or IPv6).
he_dns_addr
  DNS server for the Hosted Engine VM (IPv4 or IPv6).
he_default_gateway
  Default gateway for the Hosted Engine VM (IPv4 or IPv6).
he_vm_etc_hosts
  A boolean value specifying whether to add the Hosted Engine VM IP address and FQDN to /etc/hosts on the host.
Example he_gluster_vars.json file with static Hosted Engine configuration
{ "he_appliance_password": "mybadappliancepassword", "he_admin_password": "mybadadminpassword", "he_domain_type": "glusterfs", "he_fqdn": "engine.example.com", "he_vm_mac_addr": "00:01:02:03:04:05", "he_default_gateway": "gateway.example.com", "he_mgmt_network": "ovirtmgmt", "he_storage_domain_name": "HostedEngine", "he_storage_domain_path": "/engine", "he_storage_domain_addr": "host1-backend.example.com", "he_mount_options": "backup-volfile-servers=host2-backend.example.com:host3-backend.example.com", "he_bridge_if": "interface name for bridge creation", "he_enable_hc_gluster_service": true, "he_mem_size_MB": "16384", "he_cluster": "Default", "he_vm_ip_addr": "10.70.34.43", "he_vm_ip_prefix": "24", "he_dns_addr": "10.70.34.6", "he_default_gateway": "10.70.34.255", "he_vm_etc_hosts": "false", "he_network_test": "ping" }
If DNS is not available, use ping for he_network_test instead of dns.

Example: "he_network_test": "ping"