Chapter 5. Setting Up Storage Volumes
Note
Run yum groupinstall "Infiniband Support" to install the Infiniband packages.
Volume Types
- Distributed
- Distributes files across bricks in the volume. Use this volume type where scaling and redundancy requirements are not important, or are provided by other hardware or software layers. See Section 5.5, “Creating Distributed Volumes” for additional information about this volume type.
- Replicated
- Replicates files across bricks in the volume. Use this volume type in environments where high-availability and high-reliability are critical. See Section 5.6, “Creating Replicated Volumes” for additional information about this volume type.
- Distributed Replicated
- Distributes files across replicated bricks in the volume. Use this volume type in environments where high-reliability and scalability are critical. This volume type offers improved read performance in most environments. See Section 5.7, “Creating Distributed Replicated Volumes” for additional information about this volume type.
- Arbitrated Replicated
- Replicates files across two bricks in a replica set, and replicates only metadata to the third brick. Use this volume type in environments where consistency is critical, but underlying storage space is at a premium. See Section 5.8, “Creating Arbitrated Replicated Volumes” for additional information about this volume type.
- Dispersed
- Disperses the file's data across the bricks in the volume. Use this volume type where you need a configurable level of reliability with minimal wasted space. See Section 5.9, “Creating Dispersed Volumes” for additional information about this volume type.
- Distributed Dispersed
- Distributes files' data across the dispersed sub-volumes. Use this volume type where you need a configurable level of reliability with minimal wasted space. See Section 5.10, “Creating Distributed Dispersed Volumes” for additional information about this volume type.
5.1. Setting up Gluster Storage Volumes using gdeploy
- Setting up the backend on several machines can be done from one's laptop/desktop. This saves time and scales up well when the number of nodes in the trusted storage pool increases.
- Flexibility in choosing the drives to configure. (sd, vd, ...).
- Flexibility in naming the logical volumes (LV) and volume groups (VG).
5.1.1. Getting Started
- Generate the passphrase-less SSH keys for the nodes which are going to be part of the trusted storage pool by running the following command:
# ssh-keygen -t rsa -N ''
- Set up key-based SSH authentication access between the gdeploy controller and servers by running the following command:
# ssh-copy-id -i root@server
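You can verify that key-based authentication is working by running a remote command without being prompted for a password. A minimal check, where server stands in for one of your node hostnames:
# ssh root@server hostname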
Note
If you are using a Red Hat Gluster Storage node as the deployment node and not an external node, then the key-based SSH authentication must be set up for the Red Hat Gluster Storage node from where the installation is performed.
- Enable the repository required to install Ansible by running the following command:
# subscription-manager repos --enable=rhel-7-server-ansible-2-rpms
- Install ansible by executing the following command:
# yum install ansible
- You must also ensure the following:
- Devices should be raw and unused
- Default system locale must be set to en_US. For information on system locale, refer to the Setting the System Locale section of the Red Hat Enterprise Linux 7 System Administrator's Guide.
- For multiple devices, use multiple volume groups, thinpool, and thinvol in the gdeploy configuration file.
- Using a node in a trusted storage pool
- Using a machine outside the trusted storage pool
The gdeploy package is bundled as part of the initial installation of Red Hat Gluster Storage.
You must ensure that Red Hat Gluster Storage is subscribed to the required channels. For more information, see Subscribing to the Red Hat Gluster Storage Server Channels in the Red Hat Gluster Storage 3.4 Installation Guide.
Execute the following command to install gdeploy:
# yum install gdeploy
For more information on installing gdeploy, see the Installing Ansible to Support Gdeploy section in the Red Hat Gluster Storage 3.4 Installation Guide.
5.1.2. Setting up a Trusted Storage Pool
A sample configuration file is available at /usr/share/doc/gdeploy/examples/gluster.conf.sample.
Note
#
# Usage:
#       gdeploy -c 2x2-volume-create.conf
#
# This does backend setup first and then create the volume using the
# setup bricks.
#
#

[hosts]
10.70.46.13
10.70.46.17

# Common backend setup for 2 of the hosts.
[backend-setup]
devices=sdb,sdc
vgs=vg1,vg2
pools=pool1,pool2
lvs=lv1,lv2
mountpoints=/rhgs/brick1,/rhgs/brick2
brick_dirs=/rhgs/brick1/b1,/rhgs/brick2/b2

# If backend-setup is different for each host
# [backend-setup:10.70.46.13]
# devices=sdb
# brick_dirs=/rhgs/brick1
#
# [backend-setup:10.70.46.17]
# devices=sda,sdb,sdc
# brick_dirs=/rhgs/brick{1,2,3}
#

[volume]
action=create
volname=sample_volname
replica=yes
replica_count=2
force=yes

[clients]
action=mount
volname=sample_volname
hosts=10.70.46.15
fstype=glusterfs
client_mount_points=/mnt/gluster
# gdeploy -c conf.txt
Note
You can create a new configuration file based on the template available at /usr/share/doc/gdeploy/examples/gluster.conf.sample. To invoke the new configuration file, run the gdeploy -c /path_to_file/config.txt command.
- To only set up the backend, see Section 5.1.3, “Setting up the Backend”.
- To only create a volume, see Section 5.1.4, “Creating Volumes”.
- To only mount clients, see Section 5.1.5, “Mounting Clients”.
5.1.3. Setting up the Backend
A sample configuration file is available at /usr/share/doc/gdeploy/examples/gluster.conf.sample. The backend can be set up in either of the following two ways:
- Using the [backend-setup] module
- Creating Physical Volume (PV), Volume Group (VG), and Logical Volume (LV) individually
Note
The xfsprogs package must be installed before setting up the backend bricks using gdeploy.
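If the xfsprogs package is not already present, it can be installed with yum. A minimal sketch:
# yum install xfsprogs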
5.1.3.1. Using the [backend-setup] Module
- Generic
- Specific
If the disk names are uniform across the machines, then the backend setup can be written as shown below. The backend is set up for all the hosts in the `hosts' section.
#
# Usage:
#       gdeploy -c backend-setup-generic.conf
#
# This configuration creates backend for GlusterFS clusters
#

[hosts]
10.70.46.130
10.70.46.32
10.70.46.110
10.70.46.77

# Backend setup for all the nodes in the `hosts' section. This will create
# PV, VG, and LV with gdeploy generated names.
[backend-setup]
devices=vdb
If the disk names vary across the machines in the cluster, then the backend setup can be written for specific machines with specific disk names. gdeploy is flexible enough to allow host-specific setup in a single configuration file.
#
# Usage:
#       gdeploy -c backend-setup-hostwise.conf
#
# This configuration creates backend for GlusterFS clusters
#

[hosts]
10.70.46.130
10.70.46.32
10.70.46.110
10.70.46.77

# Backend setup for 10.70.46.77 with default gdeploy generated names for
# Volume Groups and Logical Volumes. Volume names will be GLUSTER_vg1,
# GLUSTER_vg2...
[backend-setup:10.70.46.77]
devices=vda,vdb

# Backend setup for remaining 3 hosts in the `hosts' section with custom names
# for Volumes Groups and Logical Volumes.
[backend-setup:10.70.46.{130,32,110}]
devices=vdb,vdc,vdd
vgs=vg1,vg2,vg3
pools=pool1,pool2,pool3
lvs=lv1,lv2,lv3
mountpoints=/rhgs/brick1,/rhgs/brick2,/rhgs/brick3
brick_dirs=/rhgs/brick1/b1,/rhgs/brick2/b2,/rhgs/brick3/b3
5.1.3.2. Creating Backend by Setting up PV, VG, and LV
[hosts]
10.70.46.130
10.70.46.32

[pv]
action=create
devices=vdb

[vg1]
action=create
vgname=RHS_vg1
pvname=vdb

[lv1]
action=create
vgname=RHS_vg1
lvname=engine_lv
lvtype=thick
size=10GB
mount=/rhgs/brick1

[lv2]
action=create
vgname=RHS_vg1
poolname=lvthinpool
lvtype=thinpool
poolmetadatasize=200MB
chunksize=1024k
size=30GB

[lv3]
action=create
lvname=lv_vmaddldisks
poolname=lvthinpool
vgname=RHS_vg1
lvtype=thinlv
mount=/rhgs/brick2
virtualsize=9GB

[lv4]
action=create
lvname=lv_vmrootdisks
poolname=lvthinpool
vgname=RHS_vg1
size=19GB
lvtype=thinlv
mount=/rhgs/brick3
virtualsize=19GB
#
# Extends a given VG. pvname and vgname are mandatory; in this example the
# vg `RHS_vg1' is extended by adding the pv, vdd. If the pv is not already present, it
# is created by gdeploy.
#
[hosts]
10.70.46.130
10.70.46.32

[vg2]
action=extend
vgname=RHS_vg1
pvname=vdd
5.1.4. Creating Volumes
A sample configuration file is available at /usr/share/doc/gdeploy/examples/gluster.conf.sample.
[hosts]
10.0.0.1
10.0.0.2
10.0.0.3
10.0.0.4

[volume]
action=create
volname=glustervol
transport=tcp,rdma
replica=yes
replica_count=2
force=yes
# gdeploy -c conf.txt
Note
[hosts]
10.70.46.130
10.70.46.32

[backend-setup]
devices=vdb,vdc
mountpoints=/mnt/data1,/mnt/data2

[volume1]
action=create
volname=vol-one
transport=tcp
replica=yes
replica_count=2
brick_dirs=/mnt/data1/1

[volume2]
action=create
volname=vol-two
transport=tcp
replica=yes
replica_count=2
brick_dirs=/mnt/data2/2
[hosts]
10.70.46.130
10.70.46.32

[backend-setup]
devices=vdb,vdc
mountpoints=/mnt/data1,/mnt/data2

[volume1]
action=create
volname=vol-one
transport=tcp
replica=yes
replica_count=2
key=group,storage.owner-uid,storage.owner-gid,features.shard,features.shard-block-size,performance.low-prio-threads,cluster.data-self-heal-algorithm
value=virt,36,36,on,512MB,32,full
brick_dirs=/mnt/data1/1

[volume2]
action=create
volname=vol-two
transport=tcp
replica=yes
key=group,storage.owner-uid,storage.owner-gid,features.shard,features.shard-block-size,performance.low-prio-threads,cluster.data-self-heal-algorithm
value=virt,36,36,on,512MB,32,full
replica_count=2
brick_dirs=/mnt/data2/2
5.1.5. Mounting Clients
A sample configuration file is available at /usr/share/doc/gdeploy/examples/gluster.conf.sample.
[clients]
action=mount
hosts=10.70.46.159
fstype=glusterfs
client_mount_points=/mnt/gluster
volname=10.0.0.1:glustervol
Note
If the fstype is NFS, then mention the NFS version as nfs-version. By default, the version is 3.
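As an illustration of this note, the following is a minimal sketch of a [clients] section that mounts the volume over NFS version 3; the host addresses and mount point are placeholders:
[clients]
action=mount
hosts=10.0.0.10
volname=10.0.0.1:glustervol
fstype=nfs
nfs-version=3
client_mount_points=/mnt/nfs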
# gdeploy -c conf.txt
5.1.6. Configuring a Volume
5.1.6.1. Adding and Removing a Brick
Modify the [volume] section in the configuration file to add a brick. For example:
[volume] action=add-brick volname=10.0.0.1:glustervol bricks=10.0.0.1:/rhgs/new_brick
# gdeploy -c conf.txt
Modify the [volume] section in the configuration file to remove a brick. For example:
[volume] action=remove-brick volname=10.0.0.1:glustervol bricks=10.0.0.2:/rhgs/brick state=commit
Other acceptable values for the state option are stop, start, and force.
# gdeploy -c conf.txt
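For example, to stage the removal before committing it, the same section can be run with state=start. A minimal sketch using the same placeholder volume and brick:
[volume]
action=remove-brick
volname=10.0.0.1:glustervol
bricks=10.0.0.2:/rhgs/brick
state=start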
5.1.6.2. Rebalancing a Volume
[volume] action=rebalance volname=10.70.46.13:glustervol state=start
Other acceptable values for the state option are stop and fix-layout.
# gdeploy -c conf.txt
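Similarly, a fix-layout rebalance can be requested by changing the state value. A minimal sketch based on the example above:
[volume]
action=rebalance
volname=10.70.46.13:glustervol
state=fix-layout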
5.1.6.3. Starting, Stopping, or Deleting a Volume
Modify the [volume] section in the configuration file to start a volume. For example:
[volume] action=start volname=10.0.0.1:glustervol
# gdeploy -c conf.txt
Modify the [volume] section in the configuration file to stop a volume. For example:
[volume] action=stop volname=10.0.0.1:glustervol
# gdeploy -c conf.txt
Modify the [volume] section in the configuration file to delete a volume. For example:
[volume] action=delete volname=10.70.46.13:glustervol
# gdeploy -c conf.txt
5.1.7. Configuration File
- [hosts]
- [devices]
- [disktype]
- [diskcount]
- [stripesize]
- [vgs]
- [pools]
- [lvs]
- [mountpoints]
- [peer]
- [clients]
- [volume]
- [backend-setup]
- [pv]
- [vg]
- [lv]
- [RH-subscription]
- [yum]
- [shell]
- [update-file]
- [service]
- [script]
- [firewalld]
- hosts
This is a mandatory section which contains the IP address or hostname of the machines in the trusted storage pool. Each hostname or IP address should be listed in a separate line.
For example:[hosts] 10.0.0.1 10.0.0.2
- devices
This is a generic section and is applicable to all the hosts listed in the [hosts] section. However, if host-specific sections such as [hostname] or [IP-address] are present, then the data in generic sections like [devices] is ignored; host-specific data takes precedence. This is an optional section.
For example:[devices] /dev/sda /dev/sdb
Note
When configuring the backend setup, the devices should be listed either in this section or in the host-specific section.
- disktype
This section specifies the disk configuration that is used while setting up the backend. gdeploy supports RAID 10, RAID 6, RAID 5, and JBOD configurations. This is an optional section and if the field is left empty, JBOD is taken as the default configuration. Valid values for this field are
raid10
,raid6
,raid5
, andjbod
.For example:[disktype] raid6
- diskcount
This section specifies the number of data disks in the setup. This is a mandatory field if a RAID disk type is specified under
[disktype]
. If the [disktype] is JBOD the [diskcount] value is ignored. This parameter is host specific.For example:[diskcount] 10
- stripesize
This section specifies the stripe_unit size in KB.
Case 1: This field is not necessary if the [disktype] is JBOD, and any given value will be ignored.
Case 2: This is a mandatory field if [disktype] is specified as RAID 5 or RAID 6.
For [disktype] RAID 10, the default value is taken as 256KB. Red Hat does not recommend changing this value. If you specify any other value, the following warning is displayed: "Warning: We recommend a stripe unit size of 256KB for RAID 10"
Note
Do not add any suffixes like K, KB, M, etc. This parameter is host specific and can be added in the hosts section.For example:[stripesize] 128
- vgs
This section is deprecated in gdeploy 2.0. Please see [backend-setup] for more details for gdeploy 2.0. This section specifies the volume group names for the devices listed in [devices]. The number of volume groups in the [vgs] section should match the one in [devices]. If the volume group names are missing, the volume groups will be named as GLUSTER_vg{1, 2, 3, ...} as default.
For example:[vgs] CUSTOM_vg1 CUSTOM_vg2
- pools
This section is deprecated in gdeploy 2.0. Please see [backend-setup] for more details for gdeploy 2.0. This section specifies the pool names for the volume groups specified in the [vgs] section. The number of pools listed in the [pools] section should match the number of volume groups in the [vgs] section. If the pool names are missing, the pools will be named as GLUSTER_pool{1, 2, 3, ...}.
For example:[pools] CUSTOM_pool1 CUSTOM_pool2
- lvs
This section is deprecated in gdeploy 2.0. Please see [backend-setup] for more details for gdeploy 2.0. This section provides the logical volume names for the volume groups specified in [vgs]. The number of logical volumes listed in the [lvs] section should match the number of volume groups listed in [vgs]. If the logical volume names are missing, it is named as GLUSTER_lv{1, 2, 3, ...}.
For example:[lvs] CUSTOM_lv1 CUSTOM_lv2
- mountpoints
This section is deprecated in gdeploy 2.0. Please see [backend-setup] for more details for gdeploy 2.0. This section specifies the brick mount points for the logical volumes. The number of mount points should match the number of logical volumes specified in [lvs]. If the mount points are missing, the mount points will be named as /gluster/brick{1, 2, 3…}.
For example:[mountpoints] /rhgs/brick1 /rhgs/brick2
- peer
This section specifies the configuration for Trusted Storage Pool (TSP) management. This section makes all the hosts specified in the [hosts] section either probe each other to create the trusted storage pool, or detach all of them from the trusted storage pool. The only option in this section is 'action', whose value can be either probe or detach.
For example:[peer] action=probe
- clients
This section specifies the client hosts and client_mount_points to mount the gluster storage volume created. The 'action' option is to be specified for the framework to determine the action that has to be performed. The options are 'mount' and 'unmount'. The Client hosts field is mandatory. If the mount points are not specified, default will be taken as /mnt/gluster for all the hosts.
The option fstype specifies how the gluster volume is to be mounted. Default is glusterfs (FUSE mount). The volume can also be mounted as NFS. Each client can have a different type of volume mount, which has to be specified as comma-separated values. The following fields are included: * action * hosts * fstype * client_mount_points
For example:[clients] action=mount hosts=10.0.0.10 fstype=nfs options=vers=3 client_mount_points=/mnt/rhs
- volume
The section specifies the configuration options for the volume. The following fields are included in this section:
* action * volname * transport * replica * replica_count * disperse * disperse_count * redundancy_count * force
- action
This option specifies what action must be performed in the volume. The choices can be [create, delete, add-brick, remove-brick].
create: This choice is used to create a volume.
delete: If the delete choice is used, all the options other than 'volname' will be ignored.
add-brick or remove-brick: If add-brick or remove-brick is chosen, the extra option bricks, with a comma-separated list of brick names (in the format <hostname>:<brick path>), should be provided. In case of remove-brick, the state option should also be provided, specifying the state of the volume after brick removal.
- volname
This option specifies the volume name. Default name is glustervol
Note
- In case of a volume operation, the 'hosts' section can be omitted, provided volname is in the format <hostname>:<volname>, where hostname is the hostname / IP of one of the nodes in the cluster
- Only single volume creation/deletion/configuration is supported.
- transport
This option specifies the transport type. Default is tcp. Options are tcp or rdma or tcp,rdma.
- replica
This option will specify if the volume should be of type replica. options are yes and no. Default is no. If 'replica' is provided as yes, the 'replica_count' should be provided.
- disperse
This option specifies if the volume should be of type disperse. Options are yes and no. Default is no.
- disperse_count
This field is optional even if 'disperse' is yes. If not specified, the number of bricks specified in the command line is taken as the disperse_count value.
- redundancy_count
If this value is not specified, and if 'disperse' is yes, it's default value is computed so that it generates an optimal configuration.
- force
This is an optional field and can be used during volume creation to forcefully create the volume.
For example:
[volume]
action=create
volname=glustervol
transport=tcp,rdma
replica=yes
replica_count=3
force=yes
- backend-setup
Available in gdeploy 2.0. This section sets up the backend for using with GlusterFS volume. If more than one backend-setup has to be done, they can be done by numbering the section like [backend-setup1], [backend-setup2], ...
backend-setup section supports the following variables:- devices: This replaces the [pvs] section in gdeploy 1.x. devices variable lists the raw disks which should be used for backend setup. For example:
[backend-setup] devices=sda,sdb,sdc
This is a mandatory field.
- dalign: The Logical Volume Manager can use a portion of the physical volume for storing its metadata while the rest is used as the data portion. Align the I/O at the Logical Volume Manager (LVM) layer using the dalign option while creating the physical volume. For example:
[backend-setup] devices=sdb,sdc,sdd,sde dalign=256k
For JBOD, use an alignment value of 256K. For hardware RAID, the alignment value should be obtained by multiplying the RAID stripe unit size with the number of data disks. If 12 disks are used in a RAID 6 configuration, the number of data disks is 10; on the other hand, if 12 disks are used in a RAID 10 configuration, the number of data disks is 6.The following example is appropriate for 12 disks in a RAID 6 configuration with a stripe unit size of 128 KiB:[backend-setup] devices=sdb,sdc,sdd,sde dalign=1280k
The following example is appropriate for 12 disks in a RAID 10 configuration with a stripe unit size of 256 KiB:[backend-setup] devices=sdb,sdc,sdd,sde dalign=1536k
To view the previously configured physical volume settings for the dalign option, run the pvs -o +pe_start device command. For example:
# pvs -o +pe_start disk
  PV         VG   Fmt  Attr PSize PFree 1st PE
  /dev/sdb        lvm2 a--  9.09t 9.09t  1.25m
You can also set the dalign option in the PV section. - vgs: This is an optional variable. This variable replaces the [vgs] section in gdeploy 1.x. vgs variable lists the names to be used while creating volume groups. The number of VG names should match the number of devices or should be left blank. gdeploy will generate names for the VGs. For example:
[backend-setup] devices=sda,sdb,sdc vgs=custom_vg1,custom_vg2,custom_vg3
A pattern can be provided for the vgs like custom_vg{1..3}, this will create three vgs.[backend-setup] devices=sda,sdb,sdc vgs=custom_vg{1..3}
- pools: This is an optional variable. The variable replaces the [pools] section in gdeploy 1.x. pools lists the thin pool names for the volume.
[backend-setup] devices=sda,sdb,sdc vgs=custom_vg1,custom_vg2,custom_vg3 pools=custom_pool1,custom_pool2,custom_pool3
Similar to vg, pattern can be provided for thin pool names. For example custom_pool{1..3} - lvs: This is an optional variable. This variable replaces the [lvs] section in gdeploy 1.x. lvs lists the logical volume name for the volume.
[backend-setup] devices=sda,sdb,sdc vgs=custom_vg1,custom_vg2,custom_vg3 pools=custom_pool1,custom_pool2,custom_pool3 lvs=custom_lv1,custom_lv2,custom_lv3
Patterns for LV can be provided similar to vg. For example custom_lv{1..3}. - mountpoints: This variable deprecates the [mountpoints] section in gdeploy 1.x. Mountpoints lists the mount points where the logical volumes should be mounted. Number of mount points should be equal to the number of logical volumes. For example:
[backend-setup] devices=sda,sdb,sdc vgs=custom_vg1,custom_vg2,custom_vg3 pools=custom_pool1,custom_pool2,custom_pool3 lvs=custom_lv1,custom_lv2,custom_lv3 mountpoints=/gluster/data1,/gluster/data2,/gluster/data3
- ssd - This variable is set if caching has to be added. For example, the backed setup with ssd for caching should be:
[backend-setup] ssd=sdc vgs=RHS_vg1 datalv=lv_data cachedatalv=lv_cachedata:1G cachemetalv=lv_cachemeta:230G
Note
Specifying the name of the data LV is necessary while adding SSD. Make sure the datalv is created already. Otherwise ensure to create it in one of the earlier `backend-setup’ sections.
- PV
Available in gdeploy 2.0. If the user needs to have more control over setting up the backend, and does not want to use backend-setup section, then pv, vg, and lv modules are to be used. The pv module supports the following variables.
- action: Mandatory. Supports two values, 'create' and 'resize'Example: Creating physical volumes
[pv] action=create devices=vdb,vdc,vdd
Example: Creating physical volumes on a specific host[pv:10.0.5.2] action=create devices=vdb,vdc,vdd
- devices: Mandatory. The list of devices to use for pv creation.
- expand: Used when
action=resize
.Example: Expanding an already created pv[pv] action=resize devices=vdb expand=yes
- shrink: Used when
action=resize
.Example: Shrinking an already created pv[pv] action=resize devices=vdb shrink=100G
- dalign: The Logical Volume Manager can use a portion of the physical volume for storing its metadata while the rest is used as the data portion. Align the I/O at the Logical Volume Manager (LVM) layer using the dalign option while creating the physical volume. For example:
[pv] action=create devices=sdb,sdc,sdd,sde dalign=256k
For JBOD, use an alignment value of 256K. For hardware RAID, the alignment value should be obtained by multiplying the RAID stripe unit size with the number of data disks. If 12 disks are used in a RAID 6 configuration, the number of data disks is 10; on the other hand, if 12 disks are used in a RAID 10 configuration, the number of data disks is 6.The following example is appropriate for 12 disks in a RAID 6 configuration with a stripe unit size of 128 KiB:[pv] action=create devices=sdb,sdc,sdd,sde dalign=1280k
The following example is appropriate for 12 disks in a RAID 10 configuration with a stripe unit size of 256 KiB:[pv] action=create devices=sdb,sdc,sdd,sde dalign=1536k
To view the previously configured physical volume settings for the dalign option, run the pvs -o +pe_start device command. For example:
# pvs -o +pe_start disk
  PV         VG   Fmt  Attr PSize PFree 1st PE
  /dev/sdb        lvm2 a--  9.09t 9.09t  1.25m
You can also set the dalign option in the backend-setup section.
- VG
Available in gdeploy 2.0. This module is used to create and extend volume groups. The vg module supports the following variables.
- action - Action can be one of create or extend.
- pvname - PVs to use to create the volume. For more than one PV use comma separated values.
- vgname - The name of the vg. If no name is provided GLUSTER_vg will be used as default name.
- one-to-one - If set to yes, one-to-one mapping will be done between pv and vg.
If action is set to extend, the vg will be extended to include pv provided.Example1: Create a vg named images_vg with two PVs[vg] action=create vgname=images_vg pvname=sdb,sdc
Example2: Create two vgs named rhgs_vg1 and rhgs_vg2 with two PVs[vg] action=create vgname=rhgs_vg pvname=sdb,sdc one-to-one=yes
Example3: Extend an existing vg with the given disk.[vg] action=extend vgname=rhgs_images pvname=sdc
- LV
Available in gdeploy 2.0. This module is used to create, setup-cache, and convert logical volumes. The lv module supports the following variables:
action - The action variable allows four values: `create’, `setup-cache’, `convert’, and `change’. If the action is 'create', the following options are supported:
- lvname: The name of the logical volume, this is an optional field. Default is GLUSTER_lv
- poolname - Name of the thinpool volume name, this is an optional field. Default is GLUSTER_pool
- lvtype - Type of the logical volume to be created, allowed values are `thin’ and `thick’. This is an optional field, default is thick.
- size - Size of the logical volume. Default is to take all available space on the vg.
- extent - Extent size, default is 100%FREE
- force - Force lv create, do not ask any questions. Allowed values `yes’, `no’. This is an optional field, default is yes.
- vgname - Name of the volume group to use.
- pvname - Name of the physical volume to use.
- chunksize - The size of the chunk unit used for snapshots, cache pools, and thin pools. By default this is specified in kilobytes. For RAID 5 and 6 volumes, gdeploy calculates the default chunksize by multiplying the stripe size and the disk count. For RAID 10, the default chunksize is 256 KB. See Section 20.2, “Brick Configuration” for details.
Warning
Red Hat recommends using at least the default chunksize. If the chunksize is too small and your volume runs out of space for metadata, the volume is unable to create data. This includes the data required to increase the size of the metadata pool or to migrate data away from a volume that has run out of metadata space. Red Hat recommends monitoring your logical volumes to ensure that they are expanded or more storage created before metadata volumes become completely full. - poolmetadatasize - Sets the size of pool's metadata logical volume. Allocate the maximum chunk size (16 GiB) if possible. If you allocate less than the maximum, allocate at least 0.5% of the pool size to ensure that you do not run out of metadata space.
Warning
If your metadata pool runs out of space, you cannot create data. This includes the data required to increase the size of the metadata pool or to migrate data away from a volume that has run out of metadata space. Monitor your metadata pool using thelvs -o+metadata_percent
command and ensure that it does not run out of space. - virtualsize - Creates a thinly provisioned device or a sparse device of the given size
- mkfs - Creates a filesystem of the given type. Default is to use xfs.
- mkfs-opts - mkfs options.
- mount - Mount the logical volume.
If the action is setup-cache, the below options are supported (a sketch configuration follows Example 2 at the end of this section):
- ssd - Name of the ssd device, for example sda/vda/…, to set up the cache on.
- vgname - Name of the volume group.
- poolname - Name of the pool.
- cache_meta_lv - Due to requirements from dm-cache (the kernel driver), LVM further splits the cache pool LV into two devices - the cache data LV and cache metadata LV. Provide the cache_meta_lv name here.
- cache_meta_lvsize - Size of the cache meta lv.
- cache_lv - Name of the cache data lv.
- cache_lvsize - Size of the cache data.
- force - Force
If the action is convert, the below options are supported:- lvtype - type of the lv, available options are thin and thick
- force - Force the lvconvert, default is yes.
- vgname - Name of the volume group.
- poolmetadata - Specifies cache or thin pool metadata logical volume.
- cachemode - Allowed values writeback, writethrough. Default is writethrough.
- cachepool - This argument is necessary when converting a logical volume to a cache LV. Name of the cachepool.
- lvname - Name of the logical volume.
- chunksize - The size of the chunk unit used for snapshots, cache pools, and thin pools. By default this is specified in kilobytes. For RAID 5 and 6 volumes, gdeploy calculates the default chunksize by multiplying the stripe size and the disk count. For RAID 10, the default chunksize is 256 KB. See Section 20.2, “Brick Configuration” for details.
Warning
Red Hat recommends using at least the default chunksize. If the chunksize is too small and your volume runs out of space for metadata, the volume is unable to create data. Red Hat recommends monitoring your logical volumes to ensure that they are expanded or more storage created before metadata volumes become completely full.
- poolmetadataspare - Controls creation and maintenance of the pool metadata spare logical volume that will be used for automated pool recovery.
- thinpool - Specifies or converts logical volume into a thin pool's data volume. Volume’s name or path has to be given.
If the action is change, the below options are supported:- lvname - Name of the logical volume.
- vgname - Name of the volume group.
- zero - Set zeroing mode for thin pool.
Example 1: Create a thin LV
[lv]
action=create
vgname=RHGS_vg1
poolname=lvthinpool
lvtype=thinpool
poolmetadatasize=200MB
chunksize=1024k
size=30GB
Example 2: Create a thick LV
[lv]
action=create
vgname=RHGS_vg1
lvname=engine_lv
lvtype=thick
size=10GB
mount=/rhgs/brick1
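The setup-cache action listed earlier has no example in this guide; the following is a minimal sketch only, assuming an existing volume group RHGS_vg1, a thin pool lvthinpool, and a spare SSD sdc (all hypothetical names and sizes):
Example 3 (sketch): Set up an SSD cache for an existing thin pool
[lv]
action=setup-cache
ssd=sdc
vgname=RHGS_vg1
poolname=lvthinpool
cache_meta_lv=lv_cachemeta
cache_meta_lvsize=1G
cache_lv=lv_cachedata
cache_lvsize=220G
force=yes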
If there is more than one LV, then the LVs can be created by numbering the LV sections, like [lv1], [lv2], …
- RH-subscription
Available in gdeploy 2.0. This module is used to subscribe, unsubscribe, attach, enable repos etc. The RH-subscription module allows the following variables:
If the action is register, the following options are supported:
- username/activationkey: Username or activation key.
- password/activationkey: Password or activation key
- auto-attach: true/false
- pool: Name of the pool.
- repos: Repos to subscribe to.
- disable-repos: Repo names to disable. Leaving this option blank will disable all the repos.
- ignore_register_errors: If set to no, gdeploy will exit if system registration fails.
- If the action is attach-pool, the following options are supported (a sketch follows the examples below):
pool - Pool name to be attached.
ignore_attach_pool_errors - If set to no, gdeploy fails if attach-pool fails.
- If the action is enable-repos, the following options are supported:
repos - List of comma separated repos that are to be subscribed to.
ignore_enable_errors - If set to no, gdeploy fails if enable-repos fails.
- If the action is disable-repos, the following options are supported:
repos - List of comma separated repos that are to be disabled.
ignore_disable_errors - If set to no, gdeploy fails if disable-repos fails.
- If the action is unregister, the systems will be unregistered.
ignore_unregister_errors - If set to no, gdeploy fails if unregistering fails.
Example 1: Subscribe to Red Hat Subscription network:
[RH-subscription1]
action=register
username=qa@redhat.com
password=<passwd>
pool=<pool>
ignore_register_errors=no
Example 2: Disable all the repos:
[RH-subscription2]
action=disable-repos
repos=
Example 3: Enable a few repos
[RH-subscription3]
action=enable-repos
repos=rhel-7-server-rpms,rh-gluster-3-for-rhel-7-server-rpms,rhel-7-server-rhev-mgmt-agent-rpms
ignore_enable_errors=no
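No example is shown above for the attach-pool action; a minimal sketch, with the pool ID left as a placeholder, might look like this:
[RH-subscription4]
action=attach-pool
pool=<pool-id>
ignore_attach_pool_errors=no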
- yum
Available in gdeploy 2.0. This module is used to install or remove rpm packages; with the yum module, repos can also be added at install time.
The action variable allows two values, `install’ and `remove’. If the action is install, the following options are supported:
- packages - Comma separated list of packages that are to be installed.
- repos - The repositories to be added.
- gpgcheck - yes/no values have to be provided.
- update - Whether yum update has to be initiated.
If the action is remove then only one option has to be provided:- remove - The comma separated list of packages to be removed.
For example:
[yum1]
action=install
gpgcheck=no
# Repos should be an url; eg: http://repo-pointing-glusterfs-builds
repos=<glusterfs.repo>,<vdsm.repo>
packages=vdsm,vdsm-gluster,ovirt-hosted-engine-setup,screen,gluster-nagios-addons,xauth
update=yes
Install a package on a particular host.
[yum2:host1]
action=install
gpgcheck=no
packages=rhevm-appliance
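No example is shown above for the remove action; a minimal sketch, with hypothetical package names, might look like this:
[yum3]
action=remove
remove=gluster-nagios-addons,xauth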
- shell
Available in gdeploy 2.0. This module allows the user to run shell commands on the remote nodes.
Currently, shell provides a single action variable with the value execute, and a command variable that takes any valid shell command as its value.
The below command will execute vdsm-tool on all the nodes:
[shell]
action=execute
command=vdsm-tool configure --force
- update-file
Available in gdeploy 2.0. update-file module allows users to copy a file, edit a line in a file, or add new lines to a file. action variable can be any of copy, edit, or add.
When the action variable is set to copy, the following variables are supported.- src - The source path of the file to be copied from.
- dest - The destination path on the remote machine to where the file is to be copied to.
When the action variable is set to edit, the following variables are supported.- dest - The destination file name which has to be edited.
- replace - A regular expression, which will match a line that will be replaced.
- line - Text that will replace the line matched by replace.
When the action variable is set to add, the following variables are supported.- dest - File on the remote machine to which a line has to be added.
- line - Line which has to be added to the file. Line will be added towards the end of the file.
Example 1: Copy a file to a remote machine.
[update-file]
action=copy
src=/tmp/foo.cfg
dest=/etc/nagios/nrpe.cfg

Example 2: Edit a line in the remote machine. In the below example, lines that have allowed_hosts will be replaced with allowed_hosts=host.redhat.com
[update-file]
action=edit
dest=/etc/nagios/nrpe.cfg
replace=allowed_hosts
line=allowed_hosts=host.redhat.com

Example 3: Add a line to the end of a file
[update-file]
action=add
dest=/etc/ntp.conf
line=server clock.redhat.com iburst
- service
Available in gdeploy 2.0. The service module allows user to start, stop, restart, reload, enable, or disable a service. The action variable specifies these values.
When the action variable is set to any of start, stop, restart, reload, enable, or disable, the service variable specifies which service to act on.
- service - Name of the service to start, stop, etc.
Example: enable and start ntp daemon.
[service1]
action=enable
service=ntpd

[service2]
action=restart
service=ntpd
- script
Available in gdeploy 2.0. The script module enables the user to execute a script or binary on the remote machines. The action variable is set to execute. It allows the user to specify two variables, file and args.
- file - An executable on the local machine.
- args - Arguments to the above program.
Example: Execute the script disable-multipath.sh on all the remote nodes listed in the `hosts’ section.
[script]
action=execute
file=/usr/share/ansible/gdeploy/scripts/disable-multipath.sh
- firewalld
Available in gdeploy 2.0. firewalld module allows the user to manipulate firewall rules. action variable supports two values `add’ and `delete’. Both add and delete support the following variables:
- ports/services - The ports or services to add to firewall.
- permanent - Whether to make the entry permanent. Allowed values are true/false
- zone - Default zone is public
For example:
[firewalld]
action=add
ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp
services=glusterfs
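The delete action takes the same variables. A minimal sketch that removes a previously added port, with a hypothetical port value:
[firewalld]
action=delete
ports=5666/tcp
permanent=true
zone=public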
5.1.8. Deploying NFS Ganesha using gdeploy
5.1.8.1. Prerequisites
You must subscribe to subscription manager and obtain the NFS Ganesha packages before continuing further.
[RH-subscription1]
action=register
username=<user>@redhat.com
password=<password>
pool=<pool-id>
# gdeploy -c <config_file_name>
To enable the required repos, add the following details in the configuration file:
[RH-subscription2] action=enable-repos repos=rhel-7-server-rpms,rh-gluster-3-for-rhel-7-server-rpms,rh-gluster-3-nfs-for-rhel-7-server-rpms,rhel-ha-for-rhel-7-server-rpms,rhel-7-server-ansible-2-rpms
# gdeploy -c <config_file_name>
To enable the firewall ports, add the following details in the configuration file:
[firewalld] action=add ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp services=glusterfs,nlm,nfs,rpc-bind,high-availability,mountd,rquota
Note
# gdeploy -c <config_file_name>
To install the required package, add the following details in the configuration file
[yum]
action=install
repolist=
gpgcheck=no
update=no
packages=glusterfs-ganesha
# gdeploy -c <config_file_name>
5.1.8.2. Supported Actions
- Creating a Cluster
- Destroying a Cluster
- Adding a Node
- Deleting a Node
- Exporting a Volume
- Unexporting a Volume
- Refreshing NFS Ganesha Configuration
This action creates a fresh NFS-Ganesha setup on a given volume. For this action, the nfs-ganesha section in the configuration file supports the following variables:
- ha-name: This is an optional variable. By default it is ganesha-ha-360.
- cluster-nodes: This is a required argument. This variable expects comma separated values of cluster node names, which is used to form the cluster.
- vip: This is a required argument. This variable expects comma separated list of ip addresses. These will be the virtual ip addresses.
- volname: This is an optional variable if the configuration contains the [volume] section
[hosts]
host-1.example.com
host-2.example.com

[backend-setup]
devices=/dev/vdb
vgs=vg1
pools=pool1
lvs=lv1
mountpoints=/mnt/brick

[firewalld]
action=add
ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp,662/tcp,662/udp
services=glusterfs,nlm,nfs,rpc-bind,high-availability,mountd,rquota

[volume]
action=create
volname=ganesha
transport=tcp
replica_count=2
force=yes

#Creating a high availability cluster and exporting the volume
[nfs-ganesha]
action=create-cluster
ha-name=ganesha-ha-360
cluster-nodes=host-1.example.com,host-2.example.com
vip=10.70.44.121,10.70.44.122
volname=ganesha
ignore_ganesha_errors=no
# gdeploy -c <config_file_name>
The destroy-cluster action disables NFS Ganesha. It allows one variable, cluster-nodes.
[hosts]
host-1.example.com
host-2.example.com

# To destroy the high availability cluster
[nfs-ganesha]
action=destroy-cluster
cluster-nodes=host-1.example.com,host-2.example.com
# gdeploy -c <config_file_name>
The add-node action allows three variables:
- nodes: Accepts a comma-separated list of hostnames that have to be added to the cluster.
- vip: Accepts a comma-separated list of IP addresses.
- cluster_nodes: Accepts a comma-separated list of the nodes of the NFS Ganesha cluster.
[hosts]
host-1.example.com
host-2.example.com
host-3.example.com

[peer]
action=probe

[clients]
action=mount
volname=host-3.example.com:gluster_shared_storage
hosts=host-3.example.com
fstype=glusterfs
client_mount_points=/var/run/gluster/shared_storage/

[nfs-ganesha]
action=add-node
nodes=host-3.example.com
cluster_nodes=host-1.example.com,host-2.example.com
vip=10.0.0.33
# gdeploy -c <config_file_name>
The delete-node action takes one variable, nodes, which specifies the node or nodes to delete from the NFS Ganesha cluster in a comma-delimited list.
[hosts]
host-1.example.com
host-2.example.com
host-3.example.com
host-4.example.com

[nfs-ganesha]
action=delete-node
nodes=host-2.example.com
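As with the other actions, the configuration is applied by invoking the file (the file name is a placeholder):
# gdeploy -c <config_file_name>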
This action exports a volume. The export-volume action supports one variable, volname.
[hosts]
host-1.example.com
host-2.example.com

[nfs-ganesha]
action=export-volume
volname=ganesha
# gdeploy -c <config_file_name>
This action unexports a volume. The unexport-volume action supports one variable, volname.
[hosts]
host-1.example.com
host-2.example.com

[nfs-ganesha]
action=unexport-volume
volname=ganesha
# gdeploy -c <config_file_name>
This action adds or deletes a config block in the configuration file and runs refresh-config on the cluster.
The refresh-config action supports the following variables:
- del-config-lines
- block-name
- volname
- ha-conf-dir
- update_config_lines
Note
refresh-config with the client block has a few limitations:
- Works for only one client
- User cannot delete a line from a config block
[hosts]
host1-example.com
host2-example.com

[nfs-ganesha]
action=refresh-config
# Default block name is `client'
block-name=client
config-block=clients = 10.0.0.1;|allow_root_access = true;|access_type = "RO";|Protocols = "2", "3";|anonymous_uid = 1440;|anonymous_gid = 72;
volname=ganesha
# gdeploy -c <config_file_name>
[hosts]
host1-example.com
host2-example.com

[nfs-ganesha]
action=refresh-config
del-config-lines=client
volname=ganesha
# gdeploy -c <config_file_name>
[hosts]
host1-example.com
host2-example.com

[nfs-ganesha]
action=refresh-config
volname=ganesha
# gdeploy -c <config_file_name>
[hosts]
host1-example.com
host2-example.com

[nfs-ganesha]
action=refresh-config
update_config_lines=Access_type = "RO";
#update_config_lines=Protocols = "4";
#update_config_lines=clients = 10.0.0.1;
volname=ganesha
# gdeploy -c <config_file_name>
5.1.9. Deploying Samba / CTDB using gdeploy
5.1.9.1. Prerequisites
You must subscribe to subscription manager and obtain the Samba packages before continuing further.
[RH-subscription1]
action=register
username=<user>@redhat.com
password=<password>
pool=<pool-id>
# gdeploy -c <config_file_name>
To enable the required repos, add the following details in the configuration file:
[RH-subscription2] action=enable-repos repos=rhel-7-server-rpms,rh-gluster-3-for-rhel-7-server-rpms,rh-gluster-3-samba-for-rhel-7-server-rpms,rhel-7-server-ansible-2-rpms
# gdeploy -c <config_file_name>
To enable the firewall ports, add the following details in the configuration file:
[firewalld] action=add ports=54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,4379/tcp services=glusterfs,samba,high-availability
# gdeploy -c <config_file_name>
To install the required package, add the following details in the configuration file
[yum]
action=install
repolist=
gpgcheck=no
update=no
packages=samba,samba-client,glusterfs-server,ctdb
# gdeploy -c <config_file_name>
5.1.9.2. Setting up Samba
- Enabling Samba on an existing volume
- Enabling Samba while creating a volume
If a Red Hat Gluster Storage volume is already present, then the user has to mention the action as smb-setup in the volume section. It is necessary to mention all the hosts that are in the cluster, as gdeploy updates the glusterd configuration files on each of the hosts.
[hosts]
10.70.37.192
10.70.37.88

[volume]
action=smb-setup
volname=samba1
force=yes
smb_username=smbuser
smb_mountpoint=/mnt/smb
Note
# gdeploy -c <config_file_name>
If Samba has to be set up while creating a volume, then the variable smb has to be set to yes in the configuration file.
[hosts]
10.70.37.192
10.70.37.88

[backend-setup]
devices=/dev/vdb
vgs=vg1
pools=pool1
lvs=lv1
mountpoints=/mnt/brick

[volume]
action=create
volname=samba1
smb=yes
force=yes
smb_username=smbuser
smb_mountpoint=/mnt/smb
# gdeploy -c <config_file_name>
Note
smb_username and smb_mountpoint are necessary if Samba has to be set up with the ACLs set correctly.
5.1.9.3. Setting up CTDB
[hosts]
10.70.37.192
10.70.37.88

[volume]
action=create
volname=ctdb
transport=tcp
replica_count=2
force=yes

[ctdb]
action=setup
public_address=10.70.37.6/24 eth0,10.70.37.8/24 eth0
volname=ctdb
You can also specify the CTDB nodes explicitly using the ctdb_nodes parameter, as shown in the following example.
[hosts]
10.70.37.192
10.70.37.88

[volume]
action=create
volname=ctdb
transport=tcp
replica_count=2
force=yes

[ctdb]
action=setup
public_address=10.70.37.6/24 eth0,10.70.37.8/24 eth0
ctdb_nodes=192.168.1.1,192.168.2.5
volname=ctdb
# gdeploy -c <config_file_name>
5.1.10. Enabling SSL on a Volume
5.1.10.1. Creating a Volume and Enabling SSL
[hosts]
10.70.37.147
10.70.37.47

[backend-setup]
devices=/dev/vdb
vgs=vg1
pools=pool1
lvs=lv1
mountpoints=/mnt/brick

[volume]
action=create
volname=vol1
transport=tcp
replica_count=2
force=yes
enable_ssl=yes
ssl_clients=10.70.37.107,10.70.37.173
brick_dirs=/data/1

[clients]
action=mount
hosts=10.70.37.173,10.70.37.107
volname=vol1
fstype=glusterfs
client_mount_points=/mnt/data
# gdeploy -c <config_file_name>
5.1.10.2. Enabling SSL on an Existing Volume:
[hosts]
10.70.37.147
10.70.37.47

# It is important for the clients to be unmounted before setting up SSL
[clients1]
action=unmount
hosts=10.70.37.173,10.70.37.107
client_mount_points=/mnt/data

[volume]
action=enable-ssl
volname=vol2
ssl_clients=10.70.37.107,10.70.37.173

[clients2]
action=mount
hosts=10.70.37.173,10.70.37.107
volname=vol2
fstype=glusterfs
client_mount_points=/mnt/data
# gdeploy -c <config_file_name>
5.1.11. Limiting Gluster Resources
To limit gluster resources, set slice_setup=yes when you start the glusterd service. This applies a set of resource limitations to the glusterd service and all of its child processes.
[hosts]
192.168.100.101
192.168.100.102
192.168.100.103

[service]
action=start
service=glusterd
slice_setup=yes
systemctl
.
5.1.12. Gdeploy log files
gdeploy log files are written to /home/username/.gdeploy/logs/gdeploy.log instead of the /var/log directory.
You can change the log location by setting a different path as the value of the GDEPLOY_LOGFILE environment variable. For example, to set the gdeploy log location to /var/log/gdeploy/gdeploy.log for this session, run the following command:
$ export GDEPLOY_LOGFILE=/var/log/gdeploy/gdeploy.log
To persistently set this as the log location for a user, add the same command to the /home/username/.bash_profile file for that user.
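For example, the persistent setting could be appended to that file as follows; the home directory path is a placeholder for the actual user's home directory:
$ echo 'export GDEPLOY_LOGFILE=/var/log/gdeploy/gdeploy.log' >> /home/username/.bash_profile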