Chapter 5. Setting Up Storage Volumes
Note
Run the following command to install the Infiniband packages:
# yum groupinstall "Infiniband Support"
Volume Types
- Distributed
- Distributes files across bricks in the volume. Use this volume type where scaling and redundancy requirements are not important, or are provided by other hardware or software layers. See Section 5.5, “Creating Distributed Volumes” for additional information about this volume type.
- Replicated
- Replicates files across bricks in the volume. Use this volume type in environments where high-availability and high-reliability are critical. See Section 5.6, “Creating Replicated Volumes” for additional information about this volume type.
- Distributed Replicated
- Distributes files across replicated bricks in the volume. Use this volume type in environments where high-reliability and scalability are critical. This volume type offers improved read performance in most environments. See Section 5.7, “Creating Distributed Replicated Volumes” for additional information about this volume type.
- Arbitrated Replicated
- Replicates files across bricks in the volume, except for every third brick, which stores only metadata. Use this volume type in environments where consistency is critical, but underlying storage space is at a premium. See Section 5.8, “Creating Arbitrated Replicated Volumes” for additional information about this volume type.
- Dispersed
- Disperses the file's data across the bricks in the volume. Use this volume type where you need a configurable level of reliability with minimum space waste. See Section 5.9, “Creating Dispersed Volumes” for additional information about this volume type.
- Distributed Dispersed
- Distributes files' data across dispersed sub-volumes. Use this volume type where you need a configurable level of reliability with minimum space waste. See Section 5.10, “Creating Distributed Dispersed Volumes” for additional information about this volume type.
5.1. Setting up Gluster Storage Volumes using gdeploy
- Setting up the backend on several machines can be done from one's laptop/desktop. This saves time and scales up well as the number of nodes in the trusted storage pool increases.
- Flexibility in choosing the drives to configure (sd, vd, ...).
- Flexibility in naming the logical volumes (LV) and volume groups (VG).
5.1.1. Getting Started
- Generate the passphrase-less SSH keys for the nodes which are going to be part of the trusted storage pool by running the following command:
# ssh-keygen -f id_rsa -t rsa -N ''
- Set up password-less SSH access between the gdeploy controller and servers by running the following command:
# ssh-copy-id -i root@server
Note
If you are using a Red Hat Gluster Storage node as the deployment node and not an external node, then password-less SSH must be set up for the Red Hat Gluster Storage node from where the installation is performed, using the following command:
# ssh-copy-id -i root@localhost
- Install ansible by executing the following command:
  - For Red Hat Gluster Storage 3.2.0 on Red Hat Enterprise Linux 7.2, execute the following command:
# yum install ansible
- You must also ensure the following:
- Devices should be raw and unused
- For multiple devices, use multiple volume groups, thinpool, and thinvol in the gdeploy configuration file
gdeploy can be installed in one of the following ways:
- Using a node in a trusted storage pool
- Using a machine outside the trusted storage pool
Using a node in a trusted storage pool: The gdeploy package is bundled as part of the initial installation of Red Hat Gluster Storage.
Using a machine outside the trusted storage pool: You must ensure that the machine is subscribed to the required channels. For more information see, Subscribing to the Red Hat Gluster Storage Server Channels in the Red Hat Gluster Storage 3.2 Installation Guide. Then install gdeploy by executing the following command:
# yum install gdeploy
For more information on installing ansible to support gdeploy see, Installing Ansible to Support Gdeploy section in the Red Hat Gluster Storage 3.2 Installation Guide.
5.1.2. Setting up a Trusted Storage Pool
The sample configuration file is located at /usr/share/doc/ansible/gdeploy/examples/gluster.conf.sample. After modifying the configuration file, invoke it by running the following command:
# gdeploy -c conf.txt
Note
You can create a new configuration file by referencing the template file available at /usr/share/doc/ansible/gdeploy/examples/gluster.conf.sample. To invoke the new configuration file, run the gdeploy -c /path_to_file/config.txt command.
- If you want to only set up the backend, see Section 5.1.3, “Setting up the Backend”
- If you want to only create a volume, see Section 5.1.4, “Creating Volumes”
- If you want to only mount clients, see Section 5.1.5, “Mounting Clients”
5.1.3. Setting up the Backend
The sample configuration file is located at /usr/share/doc/ansible/gdeploy/examples/gluster.conf.sample. The backend can be set up in one of the following two ways:
- Using the [backend-setup] module
- Creating Physical Volume (PV), Volume Group (VG), and Logical Volume (LV) individually
Note
The xfsprogs package must be installed before setting up the backend bricks using gdeploy.
5.1.3.1. Using the [backend-setup] Module
- Generic
- Specific
If the disk names are uniform across the machines, then the backend setup can be written as in the sketch below. The backend is set up for all the hosts in the [hosts] section.
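A minimal generic sketch, assuming uniform disk names on every host (device names, VG/pool/LV names, and mount points are illustrative):
[hosts]
10.0.0.1
10.0.0.2

[backend-setup]
# same devices on all hosts listed above
devices=sdb,sdc
vgs=custom_vg1,custom_vg2
pools=custom_pool1,custom_pool2
lvs=custom_lv1,custom_lv2
mountpoints=/rhgs/brick1,/rhgs/brick2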
If the disk names vary across the machines in the cluster, then the backend setup can be written for specific machines with specific disk names, as in the sketch below. gdeploy is flexible enough to allow host-specific setup in a single configuration file.
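A hedged sketch of host-specific setup; the [backend-setup:<host>] form mirrors the host-addressed section syntax shown for other modules in this chapter (for example, [pv:10.0.5.2]), and the addresses and device names are illustrative:
[hosts]
10.0.0.1
10.0.0.2

# host 10.0.0.1 has one disk named sdb
[backend-setup:10.0.0.1]
devices=sdb

# host 10.0.0.2 has two disks named vdb and vdc
[backend-setup:10.0.0.2]
devices=vdb,vdc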
5.1.3.2. Creating Backend by Setting up PV, VG, and LV
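When finer control is needed than [backend-setup] provides, the [pv], [vg], and [lv] modules described in Section 5.1.7, “Configuration File” can be combined to build the backend step by step. A minimal sketch (the device, VG and LV names, and the mount point are illustrative):
[hosts]
10.0.0.1

[pv]
action=create
devices=vdb

[vg]
action=create
vgname=rhgs_vg
pvname=vdb

[lv]
action=create
vgname=rhgs_vg
lvname=rhgs_lv
lvtype=thick
mount=/rhgs/brick1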
5.1.4. Creating Volumes
The sample configuration file for creating volumes is located at /usr/share/doc/ansible/gdeploy/examples/gluster.conf.sample.
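A hedged sketch of a configuration that creates a two-way replicated volume, using the [volume] and [brick_dirs] sections described in Section 5.1.7, “Configuration File” (the host addresses and brick directory are illustrative):
[hosts]
10.0.0.1
10.0.0.2

[volume]
action=create
volname=glustervol
transport=tcp
replica=yes
replica_count=2
force=yes

# brick directory inside the brick mount point, used on each host
[brick_dirs]
/rhgs/brick1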
After modifying the configuration file, invoke it by running:
# gdeploy -c conf.txt
5.1.5. Mounting Clients
The sample configuration file for mounting clients is located at /usr/share/doc/ansible/gdeploy/examples/gluster.conf.sample.
Note
If the fstype is NFS, then mention the NFS version as nfs-version. The default version is 3.
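A hedged sketch of a [clients] section using the client variables described in Section 5.1.7, “Configuration File” (the client address and mount point are illustrative):
[clients]
action=mount
hosts=10.0.0.10
fstype=glusterfs
client_mount_points=/mnt/gluster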
After modifying the configuration file, invoke it by running:
# gdeploy -c conf.txt
5.1.6. Configuring a Volume
5.1.6.1. Adding and Removing a Brick
Modify the [volume] section in the configuration file to add a brick. For example:
[volume]
action=add-brick
volname=10.0.0.1:glustervol
bricks=10.0.0.1:/rhgs/new_brick
# gdeploy -c conf.txt
Modify the [volume] section in the configuration file to remove a brick. For example:
[volume]
action=remove-brick
volname=10.0.0.1:glustervol
bricks=10.0.0.2:/rhgs/brick
state=commit
The other values supported for state are stop, start, and force.
# gdeploy -c conf.txt
5.1.6.2. Rebalancing a Volume
Modify the [volume] section in the configuration file to rebalance a volume. For example:
[volume]
action=rebalance
volname=10.70.46.13:glustervol
state=start
The other values supported for state are stop and fix-layout.
# gdeploy -c conf.txt
5.1.6.3. Starting, Stopping, or Deleting a Volume
Modify the [volume] section in the configuration file to start a volume. For example:
[volume]
action=start
volname=10.0.0.1:glustervol
# gdeploy -c conf.txt
Modify the [volume] section in the configuration file to stop a volume. For example:
[volume]
action=stop
volname=10.0.0.1:glustervol
# gdeploy -c conf.txt
Modify the [volume] section in the configuration file to delete a volume. For example:
[volume]
action=delete
volname=10.70.46.13:glustervol
# gdeploy -c conf.txt
5.1.7. Configuration File
- [hosts]
- [devices]
- [disktype]
- [diskcount]
- [stripesize]
- [vgs]
- [pools]
- [lvs]
- [mountpoints]
- {host-specific-data-for-above}
- [clients]
- [volume]
- [backend-setup]
- [pv]
- [vg]
- [lv]
- [RH-subscription]
- [yum]
- [shell]
- [update-file]
- [service]
- [script]
- [firewalld]
- hosts
This is a mandatory section which contains the IP address or hostname of the machines in the trusted storage pool. Each hostname or IP address should be listed in a separate line.
For example:
[hosts]
10.0.0.1
10.0.0.2
- devices
This is a generic section and is applicable to all the hosts listed in the [hosts] section. However, if host-specific sections such as [hostname] or [IP-address] are present, then the data in generic sections like [devices] is ignored. Host-specific data takes precedence. This is an optional section.
For example:
[devices]
/dev/sda
/dev/sdb
Note
When configuring the backend setup, the devices should be either listed in this section or in the host-specific section.
- disktype
This section specifies the disk configuration that is used while setting up the backend. gdeploy supports RAID 10, RAID 6, and JBOD configurations. This is an optional section and if the field is left empty, JBOD is taken as the default configuration.
For example:
[disktype]
raid6
- diskcount
This section specifies the number of data disks in the setup. This is a mandatory field if the [disktype] specified is either RAID 10 or RAID 6. If the [disktype] is JBOD, the [diskcount] value is ignored. This is host-specific data.
For example:
[diskcount]
10
- stripesize
This section specifies the stripe_unit size in KB.
Case 1: This field is not necessary if the [disktype] is JBOD, and any given value will be ignored.
Case 2: This is a mandatory field if [disktype] is specified as RAID 6.
For [disktype] RAID 10, the default value is taken as 256KB. If you specify any other value, the following warning is displayed:
"Warning: We recommend a stripe unit size of 256KB for RAID 10"
Note
Do not add any suffixes like K, KB, M, etc. This is host-specific data and can be added in the hosts section.
For example:
[stripesize]
128
- vgs
This section is deprecated in gdeploy 2.0. Please see [backend-setup] for more details for gdeploy 2.0. This section specifies the volume group names for the devices listed in [devices]. The number of volume groups in the [vgs] section should match the one in [devices]. If the volume group names are missing, the volume groups will be named as GLUSTER_vg{1, 2, 3, ...} as default.
For example:
[vgs]
CUSTOM_vg1
CUSTOM_vg2
- pools
This section is deprecated in gdeploy 2.0. Please see [backend-setup] for more details for gdeploy 2.0. This section specifies the pool names for the volume groups specified in the [vgs] section. The number of pools listed in the [pools] section should match the number of volume groups in the [vgs] section. If the pool names are missing, the pools will be named as GLUSTER_pool{1, 2, 3, ...}.
For example:
[pools]
CUSTOM_pool1
CUSTOM_pool2
- lvs
This section is deprecated in gdeploy 2.0. Please see [backend-setup] for more details for gdeploy 2.0. This section provides the logical volume names for the volume groups specified in [vgs]. The number of logical volumes listed in the [lvs] section should match the number of volume groups listed in [vgs]. If the logical volume names are missing, they are named GLUSTER_lv{1, 2, 3, ...}.
For example:
[lvs]
CUSTOM_lv1
CUSTOM_lv2
- mountpoints
This section is deprecated in gdeploy 2.0. Please see [backend-setup] for more details for gdeploy 2.0. This section specifies the brick mount points for the logical volumes. The number of mount points should match the number of logical volumes specified in [lvs]. If the mount points are missing, they will be named /gluster/brick{1, 2, 3...}.
For example:
[mountpoints]
/rhgs/brick1
/rhgs/brick2
- brick_dirs
This section is deprecated in gdeploy 2.0. Please see [backend-setup] for more details for gdeploy 2.0. This is the directory which will be used as a brick while creating the volume. A mount point cannot be used as a brick directory, hence brick_dir should be a directory inside the mount point.
This field can be left empty, in which case a directory will be created inside the mount point with a default name. If the backend is not set up, then this field will be ignored. In case mount points have to be used as brick directories, use the force option in the volume section.
Important
If you only want to create a volume and not set up the back-end, then provide the absolute path of brick directories for each host specified in the [hosts] section, under this section along with the volume section.
For example:
[brick_dirs]
/rhgs/brick1
/rhgs/brick2
- host-specific-data
This section is deprecated in gdeploy 2.0. Please see [backend-setup] for more details for gdeploy 2.0. For the hosts (IP/hostname) listed under [hosts] section, each host can have its own specific data. The following are the variables that are supported for hosts.
The variables supported for hosts are devices, vgs, pools, lvs, mountpoints, and brick_dirs.
- peer
This section specifies the configuration for Trusted Storage Pool (TSP) management. This section helps in making all the hosts specified in the [hosts] section either probe each other to create the trusted storage pool, or detach all of them from the trusted storage pool. The only option in this section is 'action', whose value can be either probe or detach.
For example:
[peer]
action=probe
- clients
This section specifies the client hosts and client_mount_points where the created gluster storage volume will be mounted. The 'action' option is to be specified for the framework to determine the action that has to be performed. The options are 'mount' and 'unmount'. The client hosts field is mandatory. If the mount points are not specified, the default /mnt/gluster will be used for all the hosts.
The option fstype specifies how the gluster volume is to be mounted. The default is glusterfs (FUSE mount). The volume can also be mounted as NFS. Each client can have a different type of volume mount, specified as comma-separated values. The following fields are included:
- action
- hosts
- fstype
- client_mount_points
- volume
This section specifies the configuration options for the volume. The following fields are included in this section:
- action
This option specifies what action must be performed in the volume. The choices can be [create, delete, add-brick, remove-brick].
create: This choice is used to create a volume.
delete: If the delete choice is used, all the options other than 'volname' will be ignored.
add-brick or remove-brick: If add-brick or remove-brick is chosen, the extra option bricks with a comma-separated list of brick names (in the format <hostname>:<brick path>) should be provided. In case of remove-brick, the state option should also be provided, specifying the state of the volume after brick removal.
- volname
- This option specifies the volume name. The default name is glustervol.
Note
- In case of a volume operation, the 'hosts' section can be omitted, provided volname is in the format <hostname>:<volname>, where hostname is the hostname / IP of one of the nodes in the cluster
- Only single volume creation/deletion/configuration is supported.
- transport
This option specifies the transport type. Default is tcp. Options are tcp or rdma or tcp,rdma.
- replica
- This option specifies if the volume should be of type replica. Options are yes and no. Default is no. If 'replica' is provided as yes, the 'replica_count' should be provided.
- disperse
This option specifies if the volume should be of type disperse. Options are yes and no. Default is no.
- disperse_count
This field is optional even if 'disperse' is yes. If not specified, the number of bricks specified in the command line is taken as the disperse_count value.
- redundancy_count
- If this value is not specified, and if 'disperse' is yes, its default value is computed so that it generates an optimal configuration.
- force
This is an optional field and can be used during volume creation to forcefully create the volume.
- backend-setup
Available in gdeploy 2.0. This section sets up the backend for use with a GlusterFS volume. If more than one backend setup has to be done, it can be done by numbering the sections, like [backend-setup1], [backend-setup2], ...
The backend-setup section supports the following variables:
- devices: This replaces the [pvs] section in gdeploy 1.x. The devices variable lists the raw disks which should be used for backend setup. For example:
[backend-setup]
devices=sda,sdb,sdc
This is a mandatory field.
- vgs: This is an optional variable. This variable replaces the [vgs] section in gdeploy 1.x. The vgs variable lists the names to be used while creating volume groups. The number of VG names should match the number of devices, or it can be left blank; gdeploy will then generate names for the VGs. For example:
[backend-setup]
devices=sda,sdb,sdc
vgs=custom_vg1,custom_vg2,custom_vg3
A pattern can be provided for the vgs, like custom_vg{1..3}; this will create three VGs:
[backend-setup]
devices=sda,sdb,sdc
vgs=custom_vg{1..3}
- pools: This is an optional variable. The variable replaces the [pools] section in gdeploy 1.x. pools lists the thin pool names for the volume. For example:
[backend-setup]
devices=sda,sdb,sdc
vgs=custom_vg1,custom_vg2,custom_vg3
pools=custom_pool1,custom_pool2,custom_pool3
Similar to vgs, a pattern can be provided for thin pool names, for example custom_pool{1..3}.
- lvs: This is an optional variable. This variable replaces the [lvs] section in gdeploy 1.x. lvs lists the logical volume names for the volume. For example:
[backend-setup]
devices=sda,sdb,sdc
vgs=custom_vg1,custom_vg2,custom_vg3
pools=custom_pool1,custom_pool2,custom_pool3
lvs=custom_lv1,custom_lv2,custom_lv3
Patterns for LVs can be provided similar to vgs, for example custom_lv{1..3}.
- mountpoints: This variable deprecates the [mountpoints] section in gdeploy 1.x. mountpoints lists the mount points where the logical volumes should be mounted. The number of mount points should be equal to the number of logical volumes. For example:
[backend-setup]
devices=sda,sdb,sdc
vgs=custom_vg1,custom_vg2,custom_vg3
pools=custom_pool1,custom_pool2,custom_pool3
lvs=custom_lv1,custom_lv2,custom_lv3
mountpoints=/rhgs/brick1,/rhgs/brick2,/rhgs/brick3
- ssd - This variable is set if caching has to be added. For example, the backend setup with ssd for caching should be:
[backend-setup]
ssd=sdc
vgs=custom_vg1
datalv=custom_lv1
Note
Specifying the name of the data LV is necessary while adding an SSD. Make sure the datalv is created already. Otherwise, ensure it is created in one of the earlier 'backend-setup' sections.
- PV
Available in gdeploy 2.0. If the user needs more control over setting up the backend and does not want to use the backend-setup section, then the pv, vg, and lv modules are to be used. The pv module supports the following variables.
- action: Supports two values, 'create' and 'resize'.
- devices: The list of devices to use for pv creation.
'action' and 'devices' variables are mandatory. When the 'resize' value is used for action, two more variables, 'expand' and 'shrink', can be set. Please see below for examples.
Example 1: Creating a few physical volumes
[pv]
action=create
devices=vdb,vdc,vdd
Example 2: Creating a few physical volumes on a host
[pv:10.0.5.2]
action=create
devices=vdb,vdc,vdd
Example 3: Expanding an already created pv
[pv]
action=resize
devices=vdb
expand=yes
Example 4: Shrinking an already created pv
[pv]
action=resize
devices=vdb
shrink=100G
- VG
Available in gdeploy 2.0. This module is used to create and extend volume groups. The vg module supports the following variables.
- action - Action can be one of create or extend.
- pvname - PVs to use to create the volume. For more than one PV use comma separated values.
- vgname - The name of the vg. If no name is provided GLUSTER_vg will be used as default name.
- one-to-one - If set to yes, one-to-one mapping will be done between pv and vg.
If action is set to extend, the vg will be extended to include the pv provided.
Example 1: Create a vg named images_vg with two PVs
[vg]
action=create
vgname=images_vg
pvname=sdb,sdc
Example 2: Create two vgs named rhgs_vg1 and rhgs_vg2 with two PVs
[vg]
action=create
vgname=rhgs_vg
pvname=sdb,sdc
one-to-one=yes
Example 3: Extend an existing vg with the given disk
[vg]
action=extend
vgname=rhgs_images
pvname=sdc
- LV
Available in gdeploy 2.0. This module is used to create, setup-cache, convert, and change logical volumes. The lv module supports the following variables:
action - The action variable allows four values: 'create', 'setup-cache', 'convert', and 'change'. If the action is 'create', the following options are supported:
- lvname: The name of the logical volume. This is an optional field; default is GLUSTER_lv.
- poolname - Name of the thin pool volume. This is an optional field; default is GLUSTER_pool.
- lvtype - Type of the logical volume to be created; allowed values are 'thin' and 'thick'. This is an optional field; default is thick.
- size - Size of the logical volume. Default is to take all available space on the vg.
- extent - Extent size, default is 100%FREE
- force - Force lv create, do not ask any questions. Allowed values are 'yes' and 'no'. This is an optional field; default is yes.
- vgname - Name of the volume group to use.
- pvname - Name of the physical volume to use.
- chunksize - Size of chunk for snapshot.
- poolmetadatasize - Sets the size of the pool's metadata logical volume.
- virtualsize - Creates a thinly provisioned device or a sparse device of the given size.
- mkfs - Creates a filesystem of the given type. Default is to use xfs.
- mkfs-opts - mkfs options.
- mount - Mount the logical volume.
If the action is setup-cache, the below options are supported:
- ssd - Name of the ssd device, for example sda/vda/..., to set up cache.
- vgname - Name of the volume group.
- poolname - Name of the pool.
- cache_meta_lv - Due to requirements from dm-cache (the kernel driver), LVM further splits the cache pool LV into two devices - the cache data LV and cache metadata LV. Provide the cache_meta_lv name here.
- cache_meta_lvsize - Size of the cache meta lv.
- cache_lv - Name of the cache data lv.
- cache_lvsize - Size of the cache data.
- force - Force the cache setup.
If the action is convert, the below options are supported:
- lvtype - Type of the lv; available options are thin and thick.
- force - Force the lvconvert, default is yes.
- vgname - Name of the volume group.
- poolmetadata - Specifies cache or thin pool metadata logical volume.
- cachemode - Allowed values writeback, writethrough. Default is writethrough.
- cachepool - This argument is necessary when converting a logical volume to a cache LV. Name of the cachepool.
- lvname - Name of the logical volume.
- chunksize - Gives the size of chunk for snapshot, cache pool and thin pool logical volumes. Default unit is in kilobytes.
- poolmetadataspare - Controls creation and maintenance of the pool metadata spare logical volume that will be used for automated pool recovery.
- thinpool - Specifies or converts a logical volume into a thin pool's data volume. The volume's name or path has to be given.
If the action is change, the below options are supported:
- lvname - Name of the logical volume.
- vgname - Name of the volume group.
- zero - Set zeroing mode for thin pool.
Example 1: Create a thin LV (a sketch; the VG, pool, and LV names and sizes are illustrative)
[lv]
action=create
vgname=custom_vg1
poolname=custom_pool1
lvtype=thin
lvname=custom_lv1
virtualsize=20GB
mount=/rhgs/brick1
Example 2: Create a thick LV (a sketch; the names and size are illustrative)
[lv]
action=create
vgname=custom_vg1
lvname=custom_lv2
lvtype=thick
size=10GB
mount=/rhgs/brick2
If there is more than one LV, the LVs can be created by numbering the LV sections, like [lv1], [lv2], ...
- RH-subscription
Available in gdeploy 2.0. This module is used to subscribe, unsubscribe, attach pools, and enable repos. The RH-subscription module allows the following variables:
If the action is register, the following options are supported:
- username/activationkey: Username or activation key.
- password/activationkey: Password or activation key
- auto-attach: true/false
- pool: Name of the pool.
- repos: Repos to subscribe to.
- disable-repos: Repo names to disable. Leaving this option blank will disable all the repos.
- ignore_register_errors: If set to no, gdeploy will exit if system registration fails.
- If the action is attach-pool, the following options are supported:
  - pool - Pool name to be attached.
  - ignore_attach_pool_errors - If set to no, gdeploy fails if attach-pool fails.
- If the action is enable-repos, the following options are supported:
  - repos - List of comma separated repos that are to be subscribed to.
  - ignore_enable_errors - If set to no, gdeploy fails if enable-repos fails.
- If the action is disable-repos, the following options are supported:
  - repos - List of comma separated repos that are to be disabled.
  - ignore_disable_errors - If set to no, gdeploy fails if disable-repos fails.
- If the action is unregister, the systems will be unregistered.
  - ignore_unregister_errors - If set to no, gdeploy fails if unregistering fails.
Example 1: Subscribe to Red Hat Subscription network:
[RH-subscription1]
action=register
username=<user>@redhat.com
password=<password>
pool=<pool-id>
Example 2: Disable all the repos:
[RH-subscription2]
action=disable-repos
repos=
Example 3: Enable a few repos
[RH-subscription3]
action=enable-repos
repos=rhel-7-server-rpms,rh-gluster-3-for-rhel-7-server-rpms,rhel-7-server-rhev-mgmt-agent-rpms
ignore_enable_errors=no
- yum
Available in gdeploy 2.0. This module is used to install or remove rpm packages; with the yum module, repos can be added as well at install time.
The action variable allows two values, 'install' and 'remove'. If the action is install, the following options are supported:
- packages - Comma-separated list of packages that are to be installed.
- repos - The repositories to be added.
- gpgcheck - yes/no values have to be provided.
- update - Whether yum update has to be initiated.
If the action is remove, then only one option has to be provided:
- remove - The comma-separated list of packages to be removed.
For example, to install a package on a particular host:
[yum2:host1]
action=install
gpgcheck=no
packages=rhevm-appliance
- shell
Available in gdeploy 2.0. This module allows the user to run shell commands on the remote nodes.
Currently shell provides a single action variable with value execute, and a command variable with any valid shell command as value.
The below command will execute vdsm-tool on all the nodes:
[shell]
action=execute
command=vdsm-tool configure --force
- update-file
Available in gdeploy 2.0. The update-file module allows users to copy a file, edit a line in a file, or add new lines to a file. The action variable can be any of copy, edit, or add.
When the action variable is set to copy, the following variables are supported.
- src - The source path of the file to be copied from.
- dest - The destination path on the remote machine to where the file is to be copied to.
When the action variable is set to edit, the following variables are supported.
- dest - The destination file name which has to be edited.
- replace - A regular expression, which will match a line that will be replaced.
- line - Text that replaces the matched line.
When the action variable is set to add, the following variables are supported.
- dest - File on the remote machine to which a line has to be added.
- line - Line which has to be added to the file. Line will be added towards the end of the file.
Example 1: Copy a file to a remote machine.
[update-file]
action=copy
src=/tmp/foo.cfg
dest=/etc/nagios/nrpe.cfg
Example 2: Edit a line on the remote machine. In the below example, lines that have allowed_hosts will be replaced with allowed_hosts=host.redhat.com.
[update-file]
action=edit
dest=/etc/nagios/nrpe.cfg
replace=allowed_hosts
line=allowed_hosts=host.redhat.com
Example 3: Add a line to the end of a file.
[update-file]
action=add
dest=/etc/ntp.conf
line=server clock.redhat.com iburst
- service
Available in gdeploy 2.0. The service module allows the user to start, stop, restart, reload, enable, or disable a service. The action variable specifies these values.
When the action variable is set to any of start, stop, restart, reload, enable, or disable, the variable service specifies which service to act on.
- service - Name of the service to start, stop, etc.
Example: enable and start the ntp daemon.
[service1]
action=enable
service=ntpd
[service2]
action=restart
service=ntpd
- script
Available in gdeploy 2.0. The script module enables the user to execute a script or binary on the remote machine. The action variable is set to execute. Two variables, file and args, can be specified.
- file - An executable on the local machine.
- args - Arguments to the above program.
Example: Execute the script disable-multipath.sh on all the remote nodes listed in the [hosts] section.
[script]
action=execute
file=/usr/share/ansible/gdeploy/scripts/disable-multipath.sh
- firewalld
Available in gdeploy 2.0. The firewalld module allows the user to manipulate firewall rules. The action variable supports two values, 'add' and 'delete'. Both add and delete support the following variables:
- ports/services - The ports or services to add to firewall.
- permanent - Whether to make the entry permanent. Allowed values are true/false
- zone - Default zone is public
For example:
[firewalld]
action=add
ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp
services=glusterfs
5.1.8. Deploying NFS Ganesha using gdeploy
5.1.8.1. Prerequisites
You must register with the subscription manager and obtain the NFS Ganesha packages before continuing. To register, add the following details to the configuration file:
[RH-subscription1]
action=register
username=<user>@redhat.com
password=<password>
pool=<pool-id>
# gdeploy -c <config_file_name>
To enable the required repos, add the following details in the configuration file:
[RH-subscription2]
action=enable-repos
repos=rhel-7-server-rpms,rh-gluster-3-for-rhel-7-server-rpms,rh-gluster-3-nfs-for-rhel-7-server-rpms,rhel-ha-for-rhel-7-server-rpms
# gdeploy -c <config_file_name>
To enable the firewall ports, add the following details in the configuration file:
[firewalld]
action=add
ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp
services=glusterfs,nlm,nfs,rpc-bind,high-availability,mountd,rquota
# gdeploy -c <config_file_name>
To install the required packages, add the following details in the configuration file:
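For instance, a hedged sketch using the [yum] module described in Section 5.1.7, “Configuration File”; the package name glusterfs-ganesha is an assumption, so substitute the package names your deployment requires:
[yum]
action=install
gpgcheck=no
# package name is an assumption
packages=glusterfs-ganesha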
# gdeploy -c <config_file_name>
5.1.8.2. Supported Actions
- Creating a Cluster
- Destroying a Cluster
- Adding a Node
- Exporting a Volume
- Unexporting a Volume
- Refreshing NFS Ganesha Configuration
This action creates a fresh NFS-Ganesha setup on a given volume. For this action, the nfs-ganesha section in the configuration file supports the following variables:
- ha-name: This is an optional variable. By default it is ganesha-ha-360.
- cluster-nodes: This is a required argument. This variable expects comma separated values of cluster node names, which is used to form the cluster.
- vip: This is a required argument. This variable expects comma separated list of ip addresses. These will be the virtual ip addresses.
- volname: This is an optional variable if the configuration contains the [volume] section.
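A hedged sketch of a cluster-creation section built from the variables above; the action name create-cluster is inferred from this subsection's purpose, and the node names and virtual IPs are illustrative:
[nfs-ganesha]
# action name inferred; node names and VIPs are illustrative
action=create-cluster
ha-name=ganesha-ha-360
cluster-nodes=server1,server2
vip=10.0.0.31,10.0.0.32
volname=ganesha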
Enable the gluster_use_execmem boolean by executing the following command:
# setsebool -P gluster_use_execmem on
# gdeploy -c <config_file_name>
The destroy-cluster action disables NFS Ganesha. It allows one variable, cluster-nodes.
# gdeploy -c <config_file_name>
The add-node action allows three variables:
- nodes: Accepts a comma-separated list of hostnames that have to be added to the cluster.
- vip: Accepts a comma-separated list of ip addresses.
- cluster_nodes: Accepts a comma-separated list of nodes of the NFS Ganesha cluster.
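A hedged sketch of an add-node section; the [nfs-ganesha] section name follows the cluster-creation sketch above, and the hostnames and addresses are illustrative:
[nfs-ganesha]
action=add-node
# hostnames and address are illustrative
nodes=server3
cluster_nodes=server1,server2
vip=10.0.0.33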
# gdeploy -c <config_file_name>
This action exports a volume. The export-volume action supports one variable, volname.
# gdeploy -c <config_file_name>
This action unexports a volume. The unexport-volume action supports one variable, volname.
# gdeploy -c <config_file_name>
This action adds or deletes a config block in the configuration file and runs refresh-config on the cluster. The refresh-config action supports the following variables:
- del-config-lines
- block-name
- volname
- ha-conf-dir
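A hedged sketch; the section and action names follow the pattern of the other NFS Ganesha actions above, and only the volname variable from the list is used (the volume name is illustrative):
[nfs-ganesha]
# section and action names assumed from the surrounding actions
action=refresh-config
volname=ganesha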
Note
refresh-config with the client block has a few limitations:
- Works for only one client
- If a client block already exists, then user has to manually delete it before doing any other modifications.
- User cannot delete a line from a config block
# gdeploy -c <config_file_name>
5.1.9. Deploying Samba / CTDB using gdeploy
5.1.9.1. Prerequisites
You must register with the subscription manager and obtain the Samba packages before continuing. To register, add the following details to the configuration file:
[RH-subscription1]
action=register
username=<user>@redhat.com
password=<password>
pool=<pool-id>
# gdeploy -c <config_file_name>
To enable the required repos, add the following details in the configuration file:
[RH-subscription2]
action=enable-repos
repos=rhel-7-server-rpms,rh-gluster-3-for-rhel-7-server-rpms,rh-gluster-3-samba-for-rhel-7-server-rpms
# gdeploy -c <config_file_name>
To enable the firewall ports, add the following details in the configuration file:
[firewalld]
action=add
ports=54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,4379/tcp
services=glusterfs,samba,high-availability
# gdeploy -c <config_file_name>
To install the required packages, add the following details in the configuration file:
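For instance, a hedged sketch using the [yum] module described in Section 5.1.7, “Configuration File”; the package names samba, samba-client, and ctdb are assumptions, so substitute the packages your deployment requires:
[yum]
action=install
gpgcheck=no
# package names are assumptions
packages=samba,samba-client,ctdb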
# gdeploy -c <config_file_name>
5.1.9.2. Setting up Samba
- Enabling Samba on an existing volume
- Enabling Samba while creating a volume
If a Red Hat Gluster Storage volume is already present, then the user has to mention the action as smb-setup in the volume section, as in the sketch below. It is necessary to mention all the hosts that are in the cluster, as gdeploy updates the glusterd configuration files on each of the hosts.
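A hedged sketch of enabling Samba on an existing volume; the host addresses and volume name are illustrative, and smb_username and smb_mountpoint are the variables described in the note at the end of this section:
[hosts]
10.0.0.1
10.0.0.2

[volume]
action=smb-setup
volname=samba1
force=yes
# user and mount point for ACL setup; values are illustrative
smb_username=smbuser
smb_mountpoint=/mnt/smb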
# gdeploy -c <config_file_name>
If Samba has to be set up while creating a volume, then the variable smb has to be set to yes in the configuration file, as in the sketch below.
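A hedged sketch of creating a volume with Samba enabled; the values are illustrative and the variables are those documented for the [volume] section:
[hosts]
10.0.0.1
10.0.0.2

[volume]
action=create
volname=samba1
smb=yes
force=yes
# user and mount point for ACL setup; values are illustrative
smb_username=smbuser
smb_mountpoint=/mnt/smb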
# gdeploy -c <config_file_name>
Note
smb_username and smb_mountpoint are necessary if Samba has to be set up with the ACLs set correctly.
5.1.9.3. Setting up CTDB
- Set up CTDB on an existing volume
- Create a volume and set up CTDB
- Set up CTDB using separate IP addresses for the CTDB cluster
To set up CTDB on an existing volume, the volume name has to be provided along with the action set to setup, as in the sketch below.
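A hedged sketch, assuming a [ctdb] section that takes the action and volname variables named above; the section name is an assumption and the volume name is illustrative:
[ctdb]
# section name is an assumption
action=setup
volname=ctdb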
# gdeploy -c <config_file_name>
For example, to set up CTDB while creating a volume, add the following details to the configuration file:
# gdeploy -c <config_file_name>
For example, to set up CTDB using separate ip addresses for CTDB cluster, add the following details to the configuration file:
# gdeploy -c <config_file_name>
5.1.10. Enabling SSL on a Volume
5.1.10.1. Creating a Volume and Enabling SSL
# gdeploy -c <config_file_name>
5.1.10.2. Enabling SSL on an Existing Volume
# gdeploy -c <config_file_name>