Appendix A. Understanding the node_prep_inventory.yml file
The node_prep_inventory.yml file is an example Ansible inventory file that you can use to prepare a replacement host for your Red Hat Hyperconverged Infrastructure for Virtualization cluster. You can find this file at /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/node_prep_inventory.yml on any hyperconverged host.
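After you edit a copy of the inventory to match your environment, you run it against the node preparation playbook in the same directory. For example, assuming the accompanying playbook is named node_prep.yml (the playbook name here is illustrative; verify the actual file name in the hc-ansible-deployment directory for your version):

# cd /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment
# ansible-playbook -i node_prep_inventory.yml node_prep.yml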
A.1. Configuration parameters for preparing a replacement node
A.1.1. Hosts to configure
hc_nodes
A list of hyperconverged hosts that uses the back-end FQDN of the host, and the configuration details of those hosts. Configuration that is specific to a host is defined under that host’s back-end FQDN. Configuration that is common to all hosts is defined in the vars: section.

hc_nodes:
  hosts:
    new-host-backend-fqdn.example.com:
      [configuration specific to this host]
  vars:
    [configuration common to all hosts]
A.1.2. Multipath devices
blacklist_mpath_devices (optional)
By default, Red Hat Virtualization Host enables multipath configuration, which provides unique multipath names and worldwide identifiers for all disks, even when disks do not have underlying multipath configuration. Include this section if you do not have multipath configuration so that the multipath device names are not used for listed devices. Disks that are not listed here are assumed to have multipath configuration available, and require the path format /dev/mapper/<WWID> instead of /dev/sdx when defined in subsequent sections of the inventory file.

On a server with four devices (sda, sdb, sdc, and sdd), the following configuration blacklists only two devices. The path format /dev/mapper/<WWID> is expected for devices not in this list.

hc_nodes:
  hosts:
    new-host-backend-fqdn.example.com:
      blacklist_mpath_devices:
        - sdb
        - sdc
Important
Do not list encrypted devices (luks_* devices) in blacklist_mpath_devices, as they require multipath configuration to work.
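The path format rule applies to every later section that references a disk. As a minimal sketch, assuming sdd is left out of the blacklist and <WWID-of-sdd> stands in for its real worldwide identifier, volume groups (see Storage infrastructure below) would be defined as:

hc_nodes:
  hosts:
    new-host-backend-fqdn.example.com:
      blacklist_mpath_devices:
        - sdb
      gluster_infra_volume_groups:
        # sdb is blacklisted, so its plain device path is used
        - vgname: gluster_vg_sdb
          pvname: /dev/sdb
        # sdd is not blacklisted, so its multipath name is required
        # (<WWID-of-sdd> is a placeholder for the real WWID)
        - vgname: gluster_vg_sdd
          pvname: /dev/mapper/<WWID-of-sdd>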
A.1.3. Deduplication and compression
gluster_infra_vdo (optional)
Include this section to define a list of devices that use deduplication and compression. These devices require the /dev/mapper/<name> path format when you define them as volume groups in gluster_infra_volume_groups. Each device listed must have the following information:

name - A short name for the VDO device, for example, vdo_sdc.
device - The device to use, for example, /dev/sdc.
logicalsize - The logical size of the VDO volume. Set this to ten times the size of the physical disk; for example, if you have a 500 GB disk, set logicalsize: '5000G'.
emulate512 - If you use devices with a 4 KB block size, set this to on.
slabsize - If the logical size of the volume is 1000 GB or larger, set this to 32G. If the logical size is smaller than 1000 GB, set this to 2G.
blockmapcachesize - Set this to 128M.
writepolicy - Set this to auto.
For example:
hc_nodes:
  hosts:
    new-host-backend-fqdn.example.com:
      gluster_infra_vdo:
        - { name: 'vdo_sdc', device: '/dev/sdc', logicalsize: '5000G', emulate512: 'off', slabsize: '32G',
            blockmapcachesize: '128M', writepolicy: 'auto' }
        - { name: 'vdo_sdd', device: '/dev/sdd', logicalsize: '500G', emulate512: 'off', slabsize: '2G',
            blockmapcachesize: '128M', writepolicy: 'auto' }
A.1.4. Storage infrastructure
gluster_infra_volume_groups (required)
This section creates the volume groups that contain the logical volumes.

hc_nodes:
  hosts:
    new-host-backend-fqdn.example.com:
      gluster_infra_volume_groups:
        - vgname: gluster_vg_sdb
          pvname: /dev/sdb
        - vgname: gluster_vg_sdc
          pvname: /dev/mapper/vdo_sdc
gluster_infra_mount_devices (required)
This section creates the mount points for the logical volumes that form Gluster bricks, and mounts the logical volumes at those paths.

hc_nodes:
  hosts:
    new-host-backend-fqdn.example.com:
      gluster_infra_mount_devices:
        - path: /gluster_bricks/engine
          lvname: gluster_lv_engine
          vgname: gluster_vg_sdb
        - path: /gluster_bricks/data
          lvname: gluster_lv_data
          vgname: gluster_vg_sdc
        - path: /gluster_bricks/vmstore
          lvname: gluster_lv_vmstore
          vgname: gluster_vg_sdd
gluster_infra_thinpools (optional)
This section defines logical thin pools for use by thinly provisioned volumes. Thin pools are not suitable for the engine volume, but can be used for the vmstore and data volume bricks.

vgname - The name of the volume group that contains this thin pool.
thinpoolname - A name for the thin pool, for example, gluster_thinpool_sdc.
thinpoolsize - The sum of the sizes of all logical volumes to be created in this volume group.
poolmetadatasize - Set this to 16G; this is the recommended size for supported deployments.

hc_nodes:
  hosts:
    new-host-backend-fqdn.example.com:
      gluster_infra_thinpools:
        - {vgname: 'gluster_vg_sdc', thinpoolname: 'gluster_thinpool_sdc', thinpoolsize: '500G', poolmetadatasize: '16G'}
        - {vgname: 'gluster_vg_sdd', thinpoolname: 'gluster_thinpool_sdd', thinpoolsize: '500G', poolmetadatasize: '16G'}
gluster_infra_cache_vars (optional)
This section defines cache logical volumes to improve performance for slow devices. A fast cache device is attached to a thin pool, and requires gluster_infra_thinpools to be defined.

vgname - The name of a volume group with a slow device that requires a fast external cache.
cachedisk - The paths of the slow and fast devices, separated with a comma. For example, to use a cache device sde with the slow device sdb, specify /dev/sdb,/dev/sde.
cachelvname - A name for this cache logical volume.
cachethinpoolname - The thin pool to which the fast cache volume is attached.
cachelvsize - The size of the cache logical volume. Around 0.1% (1/1000) of this size is used for cache metadata.
cachemode - The cache mode. Valid values are writethrough and writeback.

hc_nodes:
  hosts:
    new-host-backend-fqdn.example.com:
      gluster_infra_cache_vars:
        - vgname: gluster_vg_sdb
          cachedisk: /dev/sdb,/dev/sde
          cachelvname: cachelv_thinpool_sdb
          cachethinpoolname: gluster_thinpool_sdb
          cachelvsize: '250G'
          cachemode: writethrough
gluster_infra_thick_lvs (required)
The thickly provisioned logical volumes that are used to create bricks. Bricks for the engine volume must be thickly provisioned.

vgname - The name of the volume group that contains the logical volume.
lvname - The name of the logical volume.
size - The size of the logical volume. The engine logical volume requires 100G.

hc_nodes:
  hosts:
    new-host-backend-fqdn.example.com:
      gluster_infra_thick_lvs:
        - vgname: gluster_vg_sdb
          lvname: gluster_lv_engine
          size: 100G
gluster_infra_lv_logicalvols (required)
The thinly provisioned logical volumes that are used to create bricks.

vgname - The name of the volume group that contains the logical volume.
thinpool - The thin pool that contains the logical volume.
lvname - The name of the logical volume.
lvsize - The size of the logical volume.

hc_nodes:
  hosts:
    new-host-backend-fqdn.example.com:
      gluster_infra_lv_logicalvols:
        - vgname: gluster_vg_sdc
          thinpool: gluster_thinpool_sdc
          lvname: gluster_lv_data
          lvsize: 200G
        - vgname: gluster_vg_sdd
          thinpool: gluster_thinpool_sdd
          lvname: gluster_lv_vmstore
          lvsize: 200G
gluster_infra_disktype (required)
Specifies the underlying hardware configuration of the disks. Set this to the value that matches your hardware: RAID6, RAID5, or JBOD.

hc_nodes:
  vars:
    gluster_infra_disktype: RAID6
gluster_infra_diskcount (required)
Specifies the number of data disks in the RAID set. For a JBOD disk type, set this to 1.

hc_nodes:
  vars:
    gluster_infra_diskcount: 10
gluster_infra_stripe_unit_size (required)
The stripe size of the RAID set in megabytes.

hc_nodes:
  vars:
    gluster_infra_stripe_unit_size: 256
gluster_features_force_varlogsizecheck (required)
Set this to true if you want to verify that your /var/log partition has sufficient free space during the deployment process. It is important to have sufficient space for logs, but verifying space requirements at deployment time is not necessary if you plan to monitor them carefully.

hc_nodes:
  vars:
    gluster_features_force_varlogsizecheck: false
gluster_set_selinux_labels (required)
Ensures that volumes can be accessed when SELinux is enabled. Set this to true if SELinux is enabled on this host.

hc_nodes:
  vars:
    gluster_set_selinux_labels: true
A.1.5. Firewall and network infrastructure
gluster_infra_fw_ports (required)
A list of ports to open between all nodes, in the format <port>/<protocol>.

hc_nodes:
  vars:
    gluster_infra_fw_ports:
      - 2049/tcp
      - 54321/tcp
      - 5900-6923/tcp
      - 16514/tcp
      - 5666/tcp
gluster_infra_fw_permanent (required)
Ensures the ports listed in gluster_infra_fw_ports remain open after nodes are rebooted. Set this to true for production use cases.

hc_nodes:
  vars:
    gluster_infra_fw_permanent: true
gluster_infra_fw_state (required)
Enables the firewall. Set this to enabled for production use cases.

hc_nodes:
  vars:
    gluster_infra_fw_state: enabled
gluster_infra_fw_zone (required)
Specifies the firewall zone to which these gluster_infra_fw_* parameters are applied.

hc_nodes:
  vars:
    gluster_infra_fw_zone: public
gluster_infra_fw_services (required)
A list of services to allow through the firewall. Ensure glusterfs is defined here.

hc_nodes:
  vars:
    gluster_infra_fw_services:
      - glusterfs
A.2. Example node_prep_inventory.yml
# Section for Host Preparation Phase
hc_nodes:
  hosts:
    # Host - The node which needs to be prepared for replacement
    new-host-backend-fqdn.example.com:

      # Blacklist multipath devices which are used for gluster bricks.
      # If you omit blacklist_mpath_devices, all devices will be whitelisted.
      # If the disks are not blacklisted, it is assumed that multipath
      # configuration exists on the server, and you should provide
      # /dev/mapper/<WWID> instead of /dev/sdx
      blacklist_mpath_devices:
        - sdb
        - sdc

      # Enable this section, gluster_infra_vdo, if dedupe & compression is
      # required on that storage volume.
      # The variables refer to:
      # name - VDO volume name to be used
      # device - Disk name on which the VDO volume is to be created
      # logicalsize - Logical size of the VDO volume. This value is 10 times
      #               the size of the physical disk
      # emulate512 - VDO device is made as a 4 KB block sized storage volume (4KN)
      # slabsize - VDO slab size. If VDO logical size >= 1000G then
      #            slabsize is 32G, else slabsize is 2G
      #
      # The following VDO values are as per recommendation and treated as constants:
      # blockmapcachesize - 128M
      # writepolicy - auto
      #
      # gluster_infra_vdo:
      #   - { name: vdo_sdc, device: /dev/sdc, logicalsize: 5000G, emulate512: off, slabsize: 32G,
      #       blockmapcachesize: 128M, writepolicy: auto }
      #   - { name: vdo_sdd, device: /dev/sdd, logicalsize: 3000G, emulate512: off, slabsize: 32G,
      #       blockmapcachesize: 128M, writepolicy: auto }

      # When dedupe and compression is enabled on the device,
      # use pvname for that device as /dev/mapper/<vdo_device_name>
      #
      # The variables refer to:
      # vgname - VG to be created on the disk
      # pvname - Physical disk (/dev/sdc) or VDO volume (/dev/mapper/vdo_sdc)
      gluster_infra_volume_groups:
        - vgname: gluster_vg_sdb
          pvname: /dev/sdb
        - vgname: gluster_vg_sdc
          pvname: /dev/mapper/vdo_sdc
        - vgname: gluster_vg_sdd
          pvname: /dev/mapper/vdo_sdd

      gluster_infra_mount_devices:
        - path: /gluster_bricks/engine
          lvname: gluster_lv_engine
          vgname: gluster_vg_sdb
        - path: /gluster_bricks/data
          lvname: gluster_lv_data
          vgname: gluster_vg_sdc
        - path: /gluster_bricks/vmstore
          lvname: gluster_lv_vmstore
          vgname: gluster_vg_sdd

      # thinpoolsize is the sum of sizes of all LVs to be created on that VG.
      # When VDO is enabled, thinpoolsize is 10 times the sum of sizes
      # of all LVs to be created on that VG. The recommended value for
      # poolmetadatasize is 16G, and it is exclusive of thinpoolsize.
      gluster_infra_thinpools:
        - {vgname: gluster_vg_sdc, thinpoolname: gluster_thinpool_sdc, thinpoolsize: 500G, poolmetadatasize: 16G}
        - {vgname: gluster_vg_sdd, thinpoolname: gluster_thinpool_sdd, thinpoolsize: 500G, poolmetadatasize: 16G}

      # Enable the following section if LVM cache is to be enabled.
      # The variables refer to:
      # vgname - VG with the slow HDD device that needs caching
      # cachedisk - Comma-separated value of slow HDD and fast SSD.
      #             In this example, /dev/sdb is the slow HDD and /dev/sde is the fast SSD
      # cachelvname - LV cache name
      # cachethinpoolname - Thinpool to which the fast SSD is to be attached
      # cachelvsize - Size of the cache data LV. This is SSD_size - (1/1000) of SSD_size;
      #               1/1000th of the SSD space is used by the cache LV metadata
      # cachemode - writethrough or writeback
      # gluster_infra_cache_vars:
      #   - vgname: gluster_vg_sdb
      #     cachedisk: /dev/sdb,/dev/sde
      #     cachelvname: cachelv_thinpool_sdb
      #     cachethinpoolname: gluster_thinpool_sdb
      #     cachelvsize: 250G
      #     cachemode: writethrough

      # Only the engine brick needs to be thickly provisioned
      # Engine brick requires 100GB of disk space
      gluster_infra_thick_lvs:
        - vgname: gluster_vg_sdb
          lvname: gluster_lv_engine
          size: 100G

      gluster_infra_lv_logicalvols:
        - vgname: gluster_vg_sdc
          thinpool: gluster_thinpool_sdc
          lvname: gluster_lv_data
          lvsize: 200G
        - vgname: gluster_vg_sdd
          thinpool: gluster_thinpool_sdd
          lvname: gluster_lv_vmstore
          lvsize: 200G

  # Common configurations
  vars:
    # For an IPv6-based deployment, "gluster_features_enable_ipv6" must be
    # enabled; uncomment the line below:
    # gluster_features_enable_ipv6: true

    # Firewall setup
    gluster_infra_fw_ports:
      - 2049/tcp
      - 54321/tcp
      - 5900-6923/tcp
      - 16514/tcp
      - 5666/tcp
    gluster_infra_fw_permanent: true
    gluster_infra_fw_state: enabled
    gluster_infra_fw_zone: public
    gluster_infra_fw_services:
      - glusterfs

    # Allowed values for gluster_infra_disktype - RAID6, RAID5, JBOD
    gluster_infra_disktype: RAID6

    # gluster_infra_diskcount is the number of data disks in the RAID set.
    # Note: for JBOD it is 1
    gluster_infra_diskcount: 10

    gluster_infra_stripe_unit_size: 256
    gluster_features_force_varlogsizecheck: false
    gluster_set_selinux_labels: true