Chapter 8. Configuring an active/active Samba server in a Red Hat High Availability cluster
The Red Hat High Availability Add-On provides support for configuring Samba in an active/active cluster configuration. The following example configures an active/active Samba server on a two-node RHEL cluster.
For information about support policies for Samba, see Support Policies for RHEL High Availability - ctdb General Policies and Support Policies for RHEL Resilient Storage - Exporting gfs2 contents via other protocols on the Red Hat Customer Portal.
To configure Samba in an active/active cluster:
- Configure a GFS2 file system and its associated cluster resources.
- Configure Samba on the cluster nodes.
- Configure the Samba cluster resources.
- Test the Samba server you have configured.
8.1. Configuring a GFS2 file system for a Samba service in a high availability cluster
Before configuring an active/active Samba service in a Pacemaker cluster, configure a GFS2 file system for the cluster.
Prerequisites
- A two-node Red Hat High Availability cluster with fencing configured for each node
- Shared storage available for each cluster node
- A subscription to the AppStream channel and the Resilient Storage channel for each cluster node
For information about creating a Pacemaker cluster and configuring fencing for the cluster, see Creating a Red Hat High-Availability cluster with Pacemaker.
Procedure
On both nodes in the cluster, perform the following initial setup steps.
Enable the repository for Resilient Storage that corresponds to your system architecture. For example, to enable the Resilient Storage repository for an x86_64 system, enter the following subscription-manager command:

# subscription-manager repos --enable=rhel-8-for-x86_64-resilientstorage-rpms

The Resilient Storage repository is a superset of the High Availability repository. If you enable the Resilient Storage repository, you do not need to also enable the High Availability repository.

Install the lvm2-lockd, gfs2-utils, and dlm packages.

# yum install lvm2-lockd gfs2-utils dlm

Set the use_lvmlockd configuration option in the /etc/lvm/lvm.conf file to use_lvmlockd=1.

...
use_lvmlockd = 1
...
On one node in the cluster, set the global Pacemaker parameter no-quorum-policy to freeze.

Note: By default, the value of no-quorum-policy is set to stop, indicating that once quorum is lost, all the resources on the remaining partition are immediately stopped. Typically this default is the safest and most optimal option but, unlike most resources, GFS2 requires quorum to function. When quorum is lost, neither the applications using the GFS2 mounts nor the GFS2 mount itself can be stopped correctly. Any attempt to stop these resources without quorum will fail, which ultimately results in the entire cluster being fenced every time quorum is lost.

To address this situation, set no-quorum-policy to freeze when GFS2 is in use. This means that when quorum is lost, the remaining partition does nothing until quorum is regained.

[root@z1 ~]# pcs property set no-quorum-policy=freeze

Set up a dlm resource. This is a required dependency for configuring a GFS2 file system in a cluster. This example creates the dlm resource as part of a resource group named locking. If you have not previously configured fencing for the cluster, this step fails and the pcs status command displays a resource failure message.

[root@z1 ~]# pcs resource create dlm --group locking ocf:pacemaker:controld op monitor interval=30s on-fail=fence

Clone the locking resource group so that the resource group can be active on both nodes of the cluster.

[root@z1 ~]# pcs resource clone locking interleave=true

Set up an lvmlockd resource as part of the locking resource group.

[root@z1 ~]# pcs resource create lvmlockd --group locking ocf:heartbeat:lvmlockd op monitor interval=30s on-fail=fence

Create a physical volume and a shared volume group on the shared device /dev/vdb. This example creates the shared volume group csmb_vg.

[root@z1 ~]# pvcreate /dev/vdb
[root@z1 ~]# vgcreate -Ay --shared csmb_vg /dev/vdb
  Volume group "csmb_vg" successfully created
  VG csmb_vg starting dlm lockspace
  Starting locking.  Waiting until locks are ready

On the second node in the cluster:
(RHEL 8.5 and later) If you have enabled the use of a devices file by setting use_devicesfile = 1 in the lvm.conf file, add the shared device to the devices file on the second node in the cluster. By default, the use of a devices file is not enabled.

[root@z2 ~]# lvmdevices --adddev /dev/vdb

Start the lock manager for the shared volume group.

[root@z2 ~]# vgchange --lockstart csmb_vg
  VG csmb_vg starting dlm lockspace
  Starting locking.  Waiting until locks are ready...
On one node in the cluster, create a logical volume and format the volume with a GFS2 file system that will be used exclusively by CTDB for internal locking. Only one such file system is required in a cluster even if your deployment exports multiple shares.
When specifying the lock table name with the -t option of the mkfs.gfs2 command, ensure that the first component of the clustername:filesystemname you specify matches the name of your cluster. In this example, the cluster name is my_cluster.

[root@z1 ~]# lvcreate -L1G -n ctdb_lv csmb_vg
[root@z1 ~]# mkfs.gfs2 -j3 -p lock_dlm -t my_cluster:ctdb /dev/csmb_vg/ctdb_lv

Create a logical volume for each GFS2 file system that will be shared over Samba and format the volume with the GFS2 file system. This example creates a single GFS2 file system and Samba share, but you can create multiple file systems and shares.

[root@z1 ~]# lvcreate -L50G -n csmb_lv1 csmb_vg
[root@z1 ~]# mkfs.gfs2 -j3 -p lock_dlm -t my_cluster:csmb1 /dev/csmb_vg/csmb_lv1

Set up LVM-activate resources to ensure that the required shared volumes are activated. This example creates the LVM-activate resources as part of a resource group shared_vg, and then clones that resource group so that it runs on all nodes in the cluster.

Create the resources as disabled so they do not start automatically before you have configured the necessary ordering constraints.

[root@z1 ~]# pcs resource create --disabled --group shared_vg ctdb_lv ocf:heartbeat:LVM-activate lvname=ctdb_lv vgname=csmb_vg activation_mode=shared vg_access_mode=lvmlockd
[root@z1 ~]# pcs resource create --disabled --group shared_vg csmb_lv1 ocf:heartbeat:LVM-activate lvname=csmb_lv1 vgname=csmb_vg activation_mode=shared vg_access_mode=lvmlockd
[root@z1 ~]# pcs resource clone shared_vg interleave=true

Configure an ordering constraint to start all members of the locking resource group before the members of the shared_vg resource group.

[root@z1 ~]# pcs constraint order start locking-clone then shared_vg-clone
Adding locking-clone shared_vg-clone (kind: Mandatory) (Options: first-action=start then-action=start)

Enable the LVM-activate resources.

[root@z1 ~]# pcs resource enable ctdb_lv csmb_lv1

On one node in the cluster, perform the following steps to create the Filesystem resources you require.

Create the Filesystem resources as cloned resources, using the GFS2 file systems you previously configured on your LVM volumes. This configures Pacemaker to mount and manage the file systems.

Note: Do not add the file system to the /etc/fstab file, because it will be managed as a Pacemaker cluster resource. You can specify mount options as part of the resource configuration with options=options. Run the pcs resource describe Filesystem command to display the full configuration options.

[root@z1 ~]# pcs resource create ctdb_fs Filesystem device="/dev/csmb_vg/ctdb_lv" directory="/mnt/ctdb" fstype="gfs2" op monitor interval=10s on-fail=fence clone interleave=true
[root@z1 ~]# pcs resource create csmb_fs1 Filesystem device="/dev/csmb_vg/csmb_lv1" directory="/srv/samba/share1" fstype="gfs2" op monitor interval=10s on-fail=fence clone interleave=true

Configure ordering constraints to ensure that Pacemaker mounts the file systems after the shared volume group shared_vg has started.

[root@z1 ~]# pcs constraint order start shared_vg-clone then ctdb_fs-clone
Adding shared_vg-clone ctdb_fs-clone (kind: Mandatory) (Options: first-action=start then-action=start)
[root@z1 ~]# pcs constraint order start shared_vg-clone then csmb_fs1-clone
Adding shared_vg-clone csmb_fs1-clone (kind: Mandatory) (Options: first-action=start then-action=start)
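At this point the GFS2 file systems should be mounted on both nodes. A quick sanity check, assuming the resource names used in this example (locking, shared_vg, ctdb_fs, csmb_fs1), is the following sketch:

```
Confirm that every clone set reports Started on both nodes:
[root@z1 ~]# pcs resource status

Confirm that the GFS2 file systems are mounted locally on each node:
[root@z1 ~]# mount | grep gfs2
```

If a clone set is not started on a node, check the ordering constraints and the output of pcs status for resource failure messages before continuing.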
8.2. Configuring Samba in a high availability cluster
To configure a Samba service in a Pacemaker cluster, configure the service on all nodes in the cluster.
Prerequisites
- A two-node Red Hat High Availability cluster configured with a GFS2 file system, as described in Configuring a GFS2 file system for a Samba service in a high availability cluster.
- A public directory created on your GFS2 file system to use for the Samba share. In this example, the directory is /srv/samba/share1.
- Public virtual IP addresses that can be used to access the Samba share exported by this cluster.
Procedure
On both nodes in the cluster, configure the Samba service and set up a share definition:
Install the Samba and CTDB packages.
# dnf -y install samba ctdb cifs-utils samba-winbind

Ensure that the ctdb, smb, nmb, and winbind services are not running and do not start at boot.

# systemctl disable --now ctdb smb nmb winbind

In the /etc/samba/smb.conf file, configure the Samba service and set up the share definition, as in the following example for a standalone server with one share.

[global]
    netbios name = linuxserver
    workgroup = WORKGROUP
    security = user
    clustering = yes

[share1]
    path = /srv/samba/share1
    read only = no

Verify the /etc/samba/smb.conf file.

# testparm
On both nodes in the cluster, configure CTDB:
Create the /etc/ctdb/nodes file and add the IP addresses of the cluster nodes, as in this example nodes file.

192.0.2.11
192.0.2.12

Create the /etc/ctdb/public_addresses file and add the IP addresses and network device names of the cluster's public interfaces to the file. When assigning IP addresses in the public_addresses file, ensure that these addresses are not in use and that they are routable from the intended clients. The second field in each entry of the /etc/ctdb/public_addresses file is the interface to use on the cluster machines for the corresponding public address. In this example public_addresses file, the interface enp1s0 is used for all the public addresses.

192.0.2.201/24 enp1s0
192.0.2.202/24 enp1s0

The public interfaces of the cluster are the ones that clients use to access Samba from their network. For load-balancing purposes, add an A record for each public IP address of the cluster to your DNS zone. Each of these records must resolve to the same hostname. Clients use the hostname to access Samba, and DNS distributes the clients across the different nodes of the cluster.
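As a sketch of the round-robin DNS setup described above, the records for the two public addresses in this example might look like the following BIND zone-file fragment. The hostname linuxserver and the zone it lives in are illustrative assumptions; substitute your own names.

```
; Hypothetical zone fragment: both public addresses resolve to one hostname,
; so DNS round-robin spreads clients across the cluster nodes.
linuxserver    IN    A    192.0.2.201
linuxserver    IN    A    192.0.2.202
```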
If you are running the firewalld service, enable the ports that are required by the ctdb and samba services.

# firewall-cmd --add-service=ctdb --add-service=samba --permanent
# firewall-cmd --reload
On one node in the cluster, update the SELinux contexts:
Update the SELinux contexts on the GFS2 share.
[root@z1 ~]# semanage fcontext -at ctdbd_var_run_t -s system_u "/mnt/ctdb(/.*)?"
[root@z1 ~]# restorecon -Rv /mnt/ctdb

Update the SELinux context on the directory shared in Samba.

[root@z1 ~]# semanage fcontext -at samba_share_t -s system_u "/srv/samba/share1(/.*)?"
[root@z1 ~]# restorecon -Rv /srv/samba/share1
Additional resources
- For further information about configuring Samba as a standalone server, as in this example, see the Using Samba as a server chapter of Deploying different types of servers.
- Setting up a forward zone on a BIND primary server.
8.3. Configuring Samba cluster resources
After configuring a Samba service on both nodes of a two-node high availability cluster, configure the Samba cluster resources for the cluster.
Prerequisites
- A two-node Red Hat High Availability cluster configured with a GFS2 file system, as described in Configuring a GFS2 file system for a Samba service in a high availability cluster.
- Samba service configured on both cluster nodes, as described in Configuring Samba in a high availability cluster.
Procedure
On one node in the cluster, configure the Samba cluster resources:
Create the CTDB resource in the group samba-group. The CTDB resource agent uses the ctdb_* options specified with the pcs command to create the CTDB configuration file. Create the resource as disabled so that it does not start automatically before you have configured the necessary ordering constraints.

[root@z1 ~]# pcs resource create --disabled ctdb --group samba-group ocf:heartbeat:CTDB ctdb_recovery_lock=/mnt/ctdb/ctdb.lock ctdb_dbdir=/var/lib/ctdb ctdb_logfile=/var/log/ctdb.log op monitor interval=10 timeout=30 op start timeout=90 op stop timeout=100

Clone the samba-group resource group.

[root@z1 ~]# pcs resource clone samba-group

Create ordering constraints to ensure that all Filesystem resources are running before the resources in samba-group.

[root@z1 ~]# pcs constraint order start ctdb_fs-clone then samba-group-clone
[root@z1 ~]# pcs constraint order start csmb_fs1-clone then samba-group-clone

Create the samba resource in the resource group samba-group. This creates an implicit ordering constraint between CTDB and Samba, based on the order in which they are added.

[root@z1 ~]# pcs resource create samba --group samba-group systemd:smb

Enable the ctdb and samba resources.

[root@z1 ~]# pcs resource enable ctdb samba

Check that all the services have started successfully.

Note: It can take a couple of minutes for CTDB to start Samba, export the shares, and stabilize. If you check the cluster status before this process has completed, you may see that the samba services are not yet running.

[root@z1 ~]# pcs status
...
Full List of Resources:
  * fence-z1    (stonith:fence_xvm):    Started z1.example.com
  * fence-z2    (stonith:fence_xvm):    Started z2.example.com
  * Clone Set: locking-clone [locking]:
    * Started: [ z1.example.com z2.example.com ]
  * Clone Set: shared_vg-clone [shared_vg]:
    * Started: [ z1.example.com z2.example.com ]
  * Clone Set: ctdb_fs-clone [ctdb_fs]:
    * Started: [ z1.example.com z2.example.com ]
  * Clone Set: csmb_fs1-clone [csmb_fs1]:
    * Started: [ z1.example.com z2.example.com ]
  * Clone Set: samba-group-clone [samba-group]:
    * Started: [ z1.example.com z2.example.com ]
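In addition to pcs status, you can ask CTDB itself whether all nodes are healthy. A quick check, using the ctdb utility shipped in the ctdb package:

```
Show the health of every CTDB node; each node should report OK:
[root@z1 ~]# ctdb status

Show which node currently hosts each public address:
[root@z1 ~]# ctdb ip
```

If a node reports UNHEALTHY or BANNED, review /var/log/ctdb.log on that node before proceeding.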
On both nodes in the cluster, add a local user for the test share directory.
Add the user.
# useradd -M -s /sbin/nologin example_user

Set a password for the user.

# passwd example_user

Set an SMB password for the user.

# smbpasswd -a example_user
New SMB password:
Retype new SMB password:
Added user example_user

Activate the user in the Samba database.

# smbpasswd -e example_user

Update the file ownership and permissions on the GFS2 share for the Samba user.

# chown example_user:users /srv/samba/share1/
# chmod 755 /srv/samba/share1/
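Before moving on to a client-side mount, you can sanity-check the share locally with smbclient. A sketch, assuming the example_user account and the share1 share created above:

```
List the shares exported by the local Samba server:
# smbclient -L //localhost -U example_user

Connect to the share and list its contents:
# smbclient //localhost/share1 -U example_user -c 'ls'
```

Both commands prompt for the SMB password you set with smbpasswd; a successful listing confirms that Samba is exporting the share on this node.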
8.4. Verifying clustered Samba configuration
If your clustered Samba configuration was successful, you can mount the Samba share. After mounting the share, you can test whether Samba recovers if the cluster node that is exporting the share becomes unavailable.
Procedure
On a system that has access to one or more of the public IP addresses configured in the /etc/ctdb/public_addresses file on the cluster nodes, mount the Samba share using one of these public IP addresses.

[root@testmount ~]# mkdir /mnt/sambashare
[root@testmount ~]# mount -t cifs -o user=example_user //192.0.2.201/share1 /mnt/sambashare
Password for example_user@//192.0.2.201/share1: XXXXXXX

Verify that the file system is mounted.

[root@testmount ~]# mount | grep /mnt/sambashare
//192.0.2.201/share1 on /mnt/sambashare type cifs (rw,relatime,vers=1.0,cache=strict,username=example_user,domain=LINUXSERVER,uid=0,noforceuid,gid=0,noforcegid,addr=192.0.2.201,unix,posixpaths,serverino,mapposix,acl,rsize=1048576,wsize=65536,echo_interval=60,actimeo=1,user=example_user)

Verify that you can create a file on the mounted file system.

[root@testmount ~]# touch /mnt/sambashare/testfile1
[root@testmount ~]# ls /mnt/sambashare
testfile1

Determine which cluster node is exporting the Samba share:
On each cluster node, display the IP addresses assigned to the interface specified in the public_addresses file. The following commands display the IPv4 addresses assigned to the enp1s0 interface on each node.

[root@z1 ~]# ip -4 addr show enp1s0 | grep inet
    inet 192.0.2.11/24 brd 192.0.2.255 scope global dynamic noprefixroute enp1s0
    inet 192.0.2.201/24 brd 192.0.2.255 scope global secondary enp1s0

[root@z2 ~]# ip -4 addr show enp1s0 | grep inet
    inet 192.0.2.12/24 brd 192.0.2.255 scope global dynamic noprefixroute enp1s0
    inet 192.0.2.202/24 brd 192.0.2.255 scope global secondary enp1s0

In the output of the ip command, find the node with the IP address you specified with the mount command when you mounted the share. In this example, the IP address specified in the mount command is 192.0.2.201, and the output of the ip command shows that this address is assigned to z1.example.com.
Put the node exporting the Samba share into standby mode, which makes the node unable to host any cluster resources.

[root@z1 ~]# pcs node standby z1.example.com

From the system on which you mounted the file system, verify that you can still create a file on the file system.

[root@testmount ~]# touch /mnt/sambashare/testfile2
[root@testmount ~]# ls /mnt/sambashare
testfile1  testfile2

Delete the files you created to verify that the file system mounted successfully. If you no longer require the file system to be mounted, unmount it at this point.

[root@testmount ~]# rm /mnt/sambashare/testfile1 /mnt/sambashare/testfile2
rm: remove regular empty file '/mnt/sambashare/testfile1'? y
rm: remove regular empty file '/mnt/sambashare/testfile2'? y
[root@testmount ~]# umount /mnt/sambashare

From one of the cluster nodes, restore cluster services to the node that you previously put into standby mode. This does not necessarily move the service back to that node.

[root@z1 ~]# pcs node unstandby z1.example.com