Chapter 8. Configuring an active/active Samba server in a Red Hat High Availability cluster
The Red Hat High Availability Add-On provides support for configuring Samba in an active/active cluster configuration. In the following example, you are configuring an active/active Samba server on a two-node RHEL cluster.
For information about support policies for Samba, see Support Policies for RHEL High Availability - ctdb General Policies and Support Policies for RHEL Resilient Storage - Exporting gfs2 contents via other protocols on the Red Hat Customer Portal.
To configure Samba in an active/active cluster:
- Configure a GFS2 file system and its associated cluster resources.
- Configure Samba on the cluster nodes.
- Configure the Samba cluster resources.
- Test the Samba server you have configured.
8.1. Configuring a GFS2 file system for a Samba service in a high availability cluster
Before configuring an active/active Samba service in a Pacemaker cluster, configure a GFS2 file system for the cluster.
Prerequisites
- A two-node Red Hat High Availability cluster with fencing configured for each node
- Shared storage available for each cluster node
- A subscription to the AppStream channel and the Resilient Storage channel for each cluster node
For information about creating a Pacemaker cluster and configuring fencing for the cluster, see Creating a Red Hat High-Availability cluster with Pacemaker.
Procedure
On both nodes in the cluster, perform the following initial setup steps.
Enable the repository for Resilient Storage that corresponds to your system architecture. For example, to enable the Resilient Storage repository for an x86_64 system, enter the following subscription-manager command:
# subscription-manager repos --enable=rhel-9-for-x86_64-resilientstorage-rpms
The Resilient Storage repository is a superset of the High Availability repository. If you enable the Resilient Storage repository, you do not need to also enable the High Availability repository.
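Optionally, you can confirm that the repository is active before continuing by listing the enabled repositories; the exact repository ID depends on your system architecture.
# subscription-manager repos --list-enabled | grep resilientstorage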
Install the lvm2-lockd, gfs2-utils, and dlm packages.
# yum install lvm2-lockd gfs2-utils dlm
Set the use_lvmlockd configuration option in the /etc/lvm/lvm.conf file to use_lvmlockd=1.
...
use_lvmlockd = 1
...
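As an optional check, you can confirm the setting on each node before continuing, for example with grep; the uncommented line should read use_lvmlockd = 1.
# grep use_lvmlockd /etc/lvm/lvm.conf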
On one node in the cluster, set the global Pacemaker parameter no-quorum-policy to freeze.
Note
By default, the value of no-quorum-policy is set to stop, indicating that once quorum is lost, all the resources on the remaining partition will immediately be stopped. Typically this default is the safest and most optimal option, but unlike most resources, GFS2 requires quorum to function. When quorum is lost, both the applications using the GFS2 mounts and the GFS2 mount itself cannot be correctly stopped. Any attempts to stop these resources without quorum will fail, which will ultimately result in the entire cluster being fenced every time quorum is lost.
To address this situation, set no-quorum-policy to freeze when GFS2 is in use. This means that when quorum is lost, the remaining partition will do nothing until quorum is regained.
[root@z1 ~]# pcs property set no-quorum-policy=freeze
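To confirm the property, you can display it with pcs; depending on your pcs version, the listing subcommand is pcs property config (RHEL 9) or pcs property show.
[root@z1 ~]# pcs property config no-quorum-policy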
Set up a dlm resource. This is a required dependency for configuring a GFS2 file system in a cluster. This example creates the dlm resource as part of a resource group named locking. If you have not previously configured fencing for the cluster, this step fails and the pcs status command displays a resource failure message.
[root@z1 ~]# pcs resource create dlm --group locking ocf:pacemaker:controld op monitor interval=30s on-fail=fence
Clone the locking resource group so that the resource group can be active on both nodes of the cluster.
[root@z1 ~]# pcs resource clone locking interleave=true
Set up an lvmlockd resource as part of the locking resource group.
[root@z1 ~]# pcs resource create lvmlockd --group locking ocf:heartbeat:lvmlockd op monitor interval=30s on-fail=fence
Create a physical volume and a shared volume group on the shared device /dev/vdb. This example creates the shared volume group csmb_vg.
[root@z1 ~]# pvcreate /dev/vdb
[root@z1 ~]# vgcreate -Ay --shared csmb_vg /dev/vdb
  Volume group "csmb_vg" successfully created
  VG csmb_vg starting dlm lockspace
  Starting locking.  Waiting until locks are ready
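Optionally, verify that csmb_vg was created as a shared volume group; in vgs output, a shared volume group shows an s in the last position of the VG Attr field.
[root@z1 ~]# vgs csmb_vg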
- On the second node in the cluster:
If the use of a devices file is enabled with the use_devicesfile = 1 parameter in the lvm.conf file, add the shared device to the devices file on the second node in the cluster. This feature is enabled by default.
[root@z2 ~]# lvmdevices --adddev /dev/vdb
Start the lock manager for the shared volume group.
[root@z2 ~]# vgchange --lockstart csmb_vg
  VG csmb_vg starting dlm lockspace
  Starting locking.  Waiting until locks are ready...
On one node in the cluster, create a logical volume and format the volume with a GFS2 file system that will be used exclusively by CTDB for internal locking. Only one such file system is required in a cluster even if your deployment exports multiple shares.
When specifying the lock table name with the -t option of the mkfs.gfs2 command, ensure that the first component of the clustername:filesystemname you specify matches the name of your cluster. In this example, the cluster name is my_cluster.
[root@z1 ~]# lvcreate -L1G -n ctdb_lv csmb_vg
[root@z1 ~]# mkfs.gfs2 -j3 -p lock_dlm -t my_cluster:ctdb /dev/csmb_vg/ctdb_lv
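If you are unsure of the cluster name, you can check it before formatting, and you can confirm the lock table written to the superblock afterward: pcs status prints the cluster name at the top of its output, and tunegfs2 -l (from gfs2-utils) lists the superblock fields, including the lock protocol and lock table.
[root@z1 ~]# pcs status | grep "Cluster name"
Cluster name: my_cluster
[root@z1 ~]# tunegfs2 -l /dev/csmb_vg/ctdb_lv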
Create a logical volume for each GFS2 file system that will be shared over Samba and format the volume with the GFS2 file system. This example creates a single GFS2 file system and Samba share, but you can create multiple file systems and shares.
[root@z1 ~]# lvcreate -L50G -n csmb_lv1 csmb_vg
[root@z1 ~]# mkfs.gfs2 -j3 -p lock_dlm -t my_cluster:csmb1 /dev/csmb_vg/csmb_lv1
Set up LVM-activate resources to ensure that the required shared volumes are activated. This example creates the LVM-activate resources as part of a resource group shared_vg, and then clones that resource group so that it runs on all nodes in the cluster.
Create the resources as disabled so they do not start automatically before you have configured the necessary order constraints.
[root@z1 ~]# pcs resource create --disabled --group shared_vg ctdb_lv ocf:heartbeat:LVM-activate lvname=ctdb_lv vgname=csmb_vg activation_mode=shared vg_access_mode=lvmlockd
[root@z1 ~]# pcs resource create --disabled --group shared_vg csmb_lv1 ocf:heartbeat:LVM-activate lvname=csmb_lv1 vgname=csmb_vg activation_mode=shared vg_access_mode=lvmlockd
[root@z1 ~]# pcs resource clone shared_vg interleave=true
Configure an ordering constraint to start all members of the locking resource group before the members of the shared_vg resource group.
[root@z1 ~]# pcs constraint order start locking-clone then shared_vg-clone
Adding locking-clone shared_vg-clone (kind: Mandatory) (Options: first-action=start then-action=start)
Enable the LVM-activate resources.
[root@z1 ~]# pcs resource enable ctdb_lv csmb_lv1
On one node in the cluster, perform the following steps to create the Filesystem resources you require.
Create Filesystem resources as cloned resources, using the GFS2 file systems you previously configured on your LVM volumes. This configures Pacemaker to mount and manage the file systems.
Note
You should not add the file system to the /etc/fstab file because it will be managed as a Pacemaker cluster resource. You can specify mount options as part of the resource configuration with options=options. Run the pcs resource describe Filesystem command to display the full configuration options.
[root@z1 ~]# pcs resource create ctdb_fs Filesystem device="/dev/csmb_vg/ctdb_lv" directory="/mnt/ctdb" fstype="gfs2" op monitor interval=10s on-fail=fence clone interleave=true
[root@z1 ~]# pcs resource create csmb_fs1 Filesystem device="/dev/csmb_vg/csmb_lv1" directory="/srv/samba/share1" fstype="gfs2" op monitor interval=10s on-fail=fence clone interleave=true
Configure ordering constraints to ensure that Pacemaker mounts the file systems after the shared volume group shared_vg has started.
[root@z1 ~]# pcs constraint order start shared_vg-clone then ctdb_fs-clone
Adding shared_vg-clone ctdb_fs-clone (kind: Mandatory) (Options: first-action=start then-action=start)
[root@z1 ~]# pcs constraint order start shared_vg-clone then csmb_fs1-clone
Adding shared_vg-clone csmb_fs1-clone (kind: Mandatory) (Options: first-action=start then-action=start)
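At this point, you can optionally review the ordering constraints you have configured so far; pcs constraint lists them (newer pcs releases also accept pcs constraint config). The locking-clone group should be ordered before shared_vg-clone, and shared_vg-clone before both Filesystem clones.
[root@z1 ~]# pcs constraint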
8.2. Configuring Samba in a high availability cluster
To configure a Samba service in a Pacemaker cluster, configure the service on all nodes in the cluster.
Prerequisites
- A two-node Red Hat High Availability cluster configured with a GFS2 file system, as described in Configuring a GFS2 file system for a Samba service in a high availability cluster.
- A public directory created on your GFS2 file system to use for the Samba share. In this example, the directory is /srv/samba/share1.
- Public virtual IP addresses that can be used to access the Samba share exported by this cluster.
Procedure
On both nodes in the cluster, configure the Samba service and set up a share definition:
Install the Samba and CTDB packages.
# dnf -y install samba ctdb cifs-utils samba-winbind
Ensure that the ctdb, smb, nmb, and winbind services are not running and do not start at boot.
# systemctl disable --now ctdb smb nmb winbind
In the /etc/samba/smb.conf file, configure the Samba service and set up the share definition, as in the following example for a standalone server with one share.
[global]
netbios name = linuxserver
workgroup = WORKGROUP
security = user
clustering = yes
[share1]
path = /srv/samba/share1
read only = no
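If you export additional GFS2 file systems over Samba, add one share section for each of them. The following minimal sketch assumes a hypothetical second file system mounted at /srv/samba/share2 (for example, a csmb_lv2 logical volume formatted and mounted in the same way as csmb_lv1); the section name and path are examples only.
[share2]
path = /srv/samba/share2
read only = no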
Verify the /etc/samba/smb.conf file.
# testparm
On both nodes in the cluster, configure CTDB:
Create the /etc/ctdb/nodes file and add the IP addresses of the cluster nodes, as in this example nodes file.
192.0.2.11
192.0.2.12
Create the /etc/ctdb/public_addresses file and add the IP addresses and network device names of the cluster’s public interfaces to the file. When assigning IP addresses in the public_addresses file, ensure that these addresses are not in use and that those addresses are routable from the intended client. The second field in each entry of the /etc/ctdb/public_addresses file is the interface to use on the cluster machines for the corresponding public address. In this example public_addresses file, the interface enp1s0 is used for all the public addresses.
192.0.2.201/24 enp1s0
192.0.2.202/24 enp1s0
The public interfaces of the cluster are the ones that clients use to access Samba from their network. For load balancing purposes, add an A record for each public IP address of the cluster to your DNS zone. Each of these records must resolve to the same hostname. Clients use the hostname to access Samba and DNS distributes the clients to the different nodes of the cluster.
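For example, a minimal sketch of such round-robin records in a BIND zone file, assuming a zone named example.com and the host name linuxserver (matching the NetBIOS name in smb.conf); the zone and host names here are illustrative only.
linuxserver    IN    A    192.0.2.201
linuxserver    IN    A    192.0.2.202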
If you are running the firewalld service, enable the ports that are required by the ctdb and samba services.
# firewall-cmd --add-service=ctdb --add-service=samba --permanent
# firewall-cmd --reload
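Optionally, confirm that both services are now permitted in the active firewall zone.
# firewall-cmd --list-services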
On one node in the cluster, update the SELinux contexts:
Update the SELinux contexts on the GFS2 share.
[root@z1 ~]# semanage fcontext -at ctdbd_var_run_t -s system_u "/mnt/ctdb(/.*)?"
[root@z1 ~]# restorecon -Rv /mnt/ctdb
Update the SELinux context on the directory shared in Samba.
[root@z1 ~]# semanage fcontext -at samba_share_t -s system_u "/srv/samba/share1(/.*)?"
[root@z1 ~]# restorecon -Rv /srv/samba/share1
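Optionally, verify the applied contexts; the directories should now be labeled with the ctdbd_var_run_t and samba_share_t types, respectively.
[root@z1 ~]# ls -ldZ /mnt/ctdb /srv/samba/share1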
Additional resources
- For further information about configuring Samba as a standalone server, as in this example, see the Using Samba as a server chapter of Configuring and using network file services.
- Setting up a forward zone on a BIND primary server.
8.3. Configuring Samba cluster resources
After configuring a Samba service on both nodes of a two-node high availability cluster, configure the Samba cluster resources for the cluster.
Prerequisites
- A two-node Red Hat High Availability cluster configured with a GFS2 file system, as described in Configuring a GFS2 file system for a Samba service in a high availability cluster.
- Samba service configured on both cluster nodes, as described in Configuring Samba in a high availability cluster.
Procedure
On one node in the cluster, configure the Samba cluster resources:
Create the CTDB resource, in group samba-group. The CTDB resource agent uses the ctdb_* options specified with the pcs command to create the CTDB configuration file. Create the resource as disabled so it does not start automatically before you have configured the necessary order constraints.
[root@z1 ~]# pcs resource create --disabled ctdb --group samba-group ocf:heartbeat:CTDB ctdb_recovery_lock=/mnt/ctdb/ctdb.lock ctdb_dbdir=/var/lib/ctdb ctdb_logfile=/var/log/ctdb.log op monitor interval=10 timeout=30 op start timeout=90 op stop timeout=100
Clone the samba-group resource group.
[root@z1 ~]# pcs resource clone samba-group
Create ordering constraints to ensure that all Filesystem resources are running before the resources in samba-group.
[root@z1 ~]# pcs constraint order start ctdb_fs-clone then samba-group-clone
[root@z1 ~]# pcs constraint order start csmb_fs1-clone then samba-group-clone
Create the samba resource in the resource group samba-group. This creates an implicit ordering constraint between CTDB and Samba, based on the order they are added.
[root@z1 ~]# pcs resource create samba --group samba-group systemd:smb
Enable the ctdb and samba resources.
[root@z1 ~]# pcs resource enable ctdb samba
Check that all the services have started successfully.
Note
It can take a couple of minutes for CTDB to start Samba, export the shares, and stabilize. If you check the cluster status before this process has completed, you may see that the samba services are not yet running.
[root@z1 ~]# pcs status
...
Full List of Resources:
  * fence-z1    (stonith:fence_xvm):    Started z1.example.com
  * fence-z2    (stonith:fence_xvm):    Started z2.example.com
  * Clone Set: locking-clone [locking]:
    * Started: [ z1.example.com z2.example.com ]
  * Clone Set: shared_vg-clone [shared_vg]:
    * Started: [ z1.example.com z2.example.com ]
  * Clone Set: ctdb_fs-clone [ctdb_fs]:
    * Started: [ z1.example.com z2.example.com ]
  * Clone Set: csmb_fs1-clone [csmb_fs1]:
    * Started: [ z1.example.com z2.example.com ]
  * Clone Set: samba-group-clone [samba-group]:
    * Started: [ z1.example.com z2.example.com ]
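In addition to pcs status, you can check CTDB's own view of the cluster on either node. The ctdb status command, provided by the ctdb package, should report both nodes as OK once the services have stabilized.
[root@z1 ~]# ctdb status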
On both nodes in the cluster, add a local user for the test share directory.
Add the user.
# useradd -M -s /sbin/nologin example_user
Set a password for the user.
# passwd example_user
Set an SMB password for the user.
# smbpasswd -a example_user
New SMB password:
Retype new SMB password:
Added user example_user
Activate the user in the Samba database.
# smbpasswd -e example_user
Update the file ownership and permissions on the GFS2 share for the Samba user.
# chown example_user:users /srv/samba/share1/
# chmod 755 /srv/samba/share1/
8.4. Verifying clustered Samba configuration
If your clustered Samba configuration was successful, you are able to mount the Samba share. After mounting the share, you can test for Samba recovery if the cluster node that is exporting the Samba share becomes unavailable.
Procedure
On a system that has access to one or more of the public IP addresses configured in the /etc/ctdb/public_addresses file on the cluster nodes, mount the Samba share using one of these public IP addresses.
[root@testmount ~]# mkdir /mnt/sambashare
[root@testmount ~]# mount -t cifs -o user=example_user //192.0.2.201/share1 /mnt/sambashare
Password for example_user@//192.0.2.201/share1: XXXXXXX
Verify that the file system is mounted.
[root@testmount ~]# mount | grep /mnt/sambashare
//192.0.2.201/share1 on /mnt/sambashare type cifs (rw,relatime,vers=1.0,cache=strict,username=example_user,domain=LINUXSERVER,uid=0,noforceuid,gid=0,noforcegid,addr=192.0.2.201,unix,posixpaths,serverino,mapposix,acl,rsize=1048576,wsize=65536,echo_interval=60,actimeo=1,user=example_user)
Verify that you can create a file on the mounted file system.
[root@testmount ~]# touch /mnt/sambashare/testfile1
[root@testmount ~]# ls /mnt/sambashare
testfile1
Determine which cluster node is exporting the Samba share:
On each cluster node, display the IP addresses assigned to the interface specified in the public_addresses file. The following commands display the IPv4 addresses assigned to the enp1s0 interface on each node.
[root@z1 ~]# ip -4 addr show enp1s0 | grep inet
    inet 192.0.2.11/24 brd 192.0.2.255 scope global dynamic noprefixroute enp1s0
    inet 192.0.2.201/24 brd 192.0.2.255 scope global secondary enp1s0
[root@z2 ~]# ip -4 addr show enp1s0 | grep inet
    inet 192.0.2.12/24 brd 192.0.2.255 scope global dynamic noprefixroute enp1s0
    inet 192.0.2.202/24 brd 192.0.2.255 scope global secondary enp1s0
In the output of the ip command, find the node with the IP address you specified with the mount command when you mounted the share.
In this example, the IP address specified in the mount command is 192.0.2.201. The output of the ip command shows that the IP address 192.0.2.201 is assigned to z1.example.com.
Put the node exporting the Samba share in standby mode, which will cause the node to be unable to host any cluster resources.
[root@z1 ~]# pcs node standby z1.example.com
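Optionally, confirm that CTDB has moved the public address previously hosted by the standby node to the remaining node; the address used in the earlier mount command should now appear on the other node's interface.
[root@z2 ~]# ip -4 addr show enp1s0 | grep 192.0.2.201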
From the system on which you mounted the file system, verify that you can still create a file on the file system.
[root@testmount ~]# touch /mnt/sambashare/testfile2
[root@testmount ~]# ls /mnt/sambashare
testfile1
testfile2
Delete the files you have created to verify that the file system has successfully mounted. If you no longer require the file system to be mounted, unmount it at this point.
[root@testmount ~]# rm /mnt/sambashare/testfile1 /mnt/sambashare/testfile2
rm: remove regular empty file '/mnt/sambashare/testfile1'? y
rm: remove regular empty file '/mnt/sambashare/testfile2'? y
[root@testmount ~]# umount /mnt/sambashare
From one of the cluster nodes, restore cluster services to the node that you previously put into standby mode. This will not necessarily move the service back to that node.
[root@z1 ~]# pcs node unstandby z1.example.com
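You can then run pcs status again to confirm that the node has rejoined the cluster and that all cloned resources report as Started on both nodes.
[root@z1 ~]# pcs status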