Chapter 8. Configuring an active/active Samba server in a Red Hat High Availability cluster

The Red Hat High Availability Add-On provides support for configuring Samba in an active/active cluster configuration. The following example configures an active/active Samba server on a two-node RHEL cluster.

For information about support policies for Samba, see Support Policies for RHEL High Availability - ctdb General Policies and Support Policies for RHEL Resilient Storage - Exporting gfs2 contents via other protocols on the Red Hat Customer Portal.

To configure Samba in an active/active cluster:

  1. Configure a GFS2 file system and its associated cluster resources.
  2. Configure Samba on the cluster nodes.
  3. Configure the Samba cluster resources.
  4. Test the Samba server you have configured.

8.1. Configuring a GFS2 file system for a Samba service in a high availability cluster

Before configuring an active/active Samba service in a Pacemaker cluster, configure a GFS2 file system for the cluster.

Prerequisites

  • A two-node Red Hat High Availability cluster with fencing configured for each node
  • Shared storage available for each cluster node
  • A subscription to the AppStream channel and the Resilient Storage channel for each cluster node

For information about creating a Pacemaker cluster and configuring fencing for the cluster, see Creating a Red Hat High-Availability cluster with Pacemaker.

Procedure

  1. On both nodes in the cluster, perform the following initial setup steps.

    1. Enable the repository for Resilient Storage that corresponds to your system architecture. For example, to enable the Resilient Storage repository for an x86_64 system, enter the following subscription-manager command:

      # subscription-manager repos --enable=rhel-9-for-x86_64-resilientstorage-rpms

      The Resilient Storage repository is a superset of the High Availability repository. If you enable the Resilient Storage repository, you do not need to also enable the High Availability repository.
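
      If you want to confirm that the repository is enabled, a quick check such as the following may help; the exact repository ID depends on your architecture and release.

      # subscription-manager repos --list-enabled | grep resilientstorage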

    2. Install the lvm2-lockd, gfs2-utils, and dlm packages.

      # yum install lvm2-lockd gfs2-utils dlm
    3. Set the use_lvmlockd configuration option in the /etc/lvm/lvm.conf file to use_lvmlockd=1.

      ...
      
      use_lvmlockd = 1
      
      ...
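
      After editing the file, you can confirm the effective value with the lvmconfig utility from the lvm2 package; this is a minimal check, and the reported value should be 1.

      # lvmconfig global/use_lvmlockd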
  2. On one node in the cluster, set the global Pacemaker parameter no-quorum-policy to freeze.

    Note

    By default, the value of no-quorum-policy is set to stop, indicating that once quorum is lost, all the resources on the remaining partition will immediately be stopped. Typically this default is the safest option, but unlike most resources, GFS2 requires quorum to function. When quorum is lost, neither the applications using the GFS2 mounts nor the GFS2 mounts themselves can be correctly stopped. Any attempt to stop these resources without quorum will fail, which will ultimately result in the entire cluster being fenced every time quorum is lost.

    To address this situation, set no-quorum-policy to freeze when GFS2 is in use. This means that when quorum is lost, the remaining partition will do nothing until quorum is regained.

    [root@z1 ~]# pcs property set no-quorum-policy=freeze
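
    To confirm that the property took effect, you can list the cluster properties. Note that the exact subcommand depends on your pcs version; for example, pcs property config on newer releases or pcs property show on older ones.

    [root@z1 ~]# pcs property config no-quorum-policy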
  3. Set up a dlm resource. This is a required dependency for configuring a GFS2 file system in a cluster. This example creates the dlm resource as part of a resource group named locking. If you have not previously configured fencing for the cluster, this step fails and the pcs status command displays a resource failure message.

    [root@z1 ~]# pcs resource create dlm --group locking ocf:pacemaker:controld op monitor interval=30s on-fail=fence
  4. Clone the locking resource group so that the resource group can be active on both nodes of the cluster.

    [root@z1 ~]# pcs resource clone locking interleave=true
  5. Set up an lvmlockd resource as part of the locking resource group.

    [root@z1 ~]# pcs resource create lvmlockd --group locking ocf:heartbeat:lvmlockd op monitor interval=30s on-fail=fence
  6. Create a physical volume and a shared volume group on the shared device /dev/vdb. This example creates the shared volume group csmb_vg.

    [root@z1 ~]# pvcreate /dev/vdb
    [root@z1 ~]# vgcreate -Ay --shared csmb_vg /dev/vdb
    Volume group "csmb_vg" successfully created
    VG csmb_vg starting dlm lockspace
    Starting locking.  Waiting until locks are ready
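
    As a quick check that the volume group was created as a shared volume group, you can inspect its attributes; in the vgs attribute string, the sixth character is s for a shared volume group.

    [root@z1 ~]# vgs -o vg_name,vg_attr csmb_vg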
  7. If the use of a devices file is enabled with the use_devicesfile = 1 parameter in the lvm.conf file, add the shared device to the devices file on the second node in the cluster. This feature is enabled by default.

    [root@z2 ~]# lvmdevices --adddev /dev/vdb
  8. On the second node in the cluster, start the lock manager for the shared volume group.

    [root@z2 ~]# vgchange --lockstart csmb_vg
      VG csmb_vg starting dlm lockspace
      Starting locking.  Waiting until locks are ready...
  9. On one node in the cluster, create a logical volume and format the volume with a GFS2 file system that will be used exclusively by CTDB for internal locking. Only one such file system is required in a cluster even if your deployment exports multiple shares.

    When specifying the lock table name with the -t option of the mkfs.gfs2 command, ensure that the first component of the clustername:filesystemname you specify matches the name of your cluster. In this example, the cluster name is my_cluster.

    [root@z1 ~]# lvcreate -L1G -n ctdb_lv csmb_vg
    [root@z1 ~]# mkfs.gfs2 -j3 -p lock_dlm -t my_cluster:ctdb /dev/csmb_vg/ctdb_lv
  10. Create a logical volume for each GFS2 file system that will be shared over Samba and format the volume with the GFS2 file system. This example creates a single GFS2 file system and Samba share, but you can create multiple file systems and shares.

    [root@z1 ~]# lvcreate -L50G -n csmb_lv1 csmb_vg
    [root@z1 ~]# mkfs.gfs2 -j3 -p lock_dlm -t my_cluster:csmb1 /dev/csmb_vg/csmb_lv1
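
    If you want to confirm the lock protocol and lock table names that were written to the superblocks, the tunegfs2 utility from the gfs2-utils package can print them; the clustername: prefix in each lock table must match your cluster name.

    [root@z1 ~]# tunegfs2 -l /dev/csmb_vg/ctdb_lv
    [root@z1 ~]# tunegfs2 -l /dev/csmb_vg/csmb_lv1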
  11. Set up LVM-activate resources to ensure that the required shared volumes are activated. This example creates the LVM-activate resources as part of a resource group shared_vg, and then clones that resource group so that it runs on all nodes in the cluster.

    Create the resources as disabled so they do not start automatically before you have configured the necessary order constraints.

    [root@z1 ~]# pcs resource create --disabled --group shared_vg ctdb_lv ocf:heartbeat:LVM-activate lvname=ctdb_lv vgname=csmb_vg activation_mode=shared vg_access_mode=lvmlockd
    [root@z1 ~]# pcs resource create --disabled --group shared_vg csmb_lv1 ocf:heartbeat:LVM-activate lvname=csmb_lv1 vgname=csmb_vg activation_mode=shared vg_access_mode=lvmlockd
    [root@z1 ~]# pcs resource clone shared_vg interleave=true
  12. Configure an ordering constraint to start all members of the locking resource group before the members of the shared_vg resource group.

    [root@z1 ~]# pcs constraint order start locking-clone then shared_vg-clone
    Adding locking-clone shared_vg-clone (kind: Mandatory) (Options: first-action=start then-action=start)
  13. Enable the LVM-activate resources.

    [root@z1 ~]# pcs resource enable ctdb_lv csmb_lv1
  14. On one node in the cluster, perform the following steps to create the Filesystem resources you require.

    1. Create Filesystem resources as cloned resources, using the GFS2 file systems you previously configured on your LVM volumes. This configures Pacemaker to mount and manage file systems.

      Note

      You should not add the file system to the /etc/fstab file because it will be managed as a Pacemaker cluster resource. You can specify mount options as part of the resource configuration with options=options. Run the pcs resource describe Filesystem command to display the full configuration options.

      [root@z1 ~]# pcs resource create ctdb_fs Filesystem device="/dev/csmb_vg/ctdb_lv" directory="/mnt/ctdb" fstype="gfs2" op monitor interval=10s on-fail=fence clone interleave=true
      [root@z1 ~]# pcs resource create csmb_fs1 Filesystem device="/dev/csmb_vg/csmb_lv1" directory="/srv/samba/share1" fstype="gfs2" op monitor interval=10s on-fail=fence clone interleave=true
    2. Configure ordering constraints to ensure that Pacemaker mounts the file systems after the shared volume group shared_vg has started.

      [root@z1 ~]# pcs constraint order start shared_vg-clone then ctdb_fs-clone
      Adding shared_vg-clone ctdb_fs-clone (kind: Mandatory) (Options: first-action=start then-action=start)
      [root@z1 ~]# pcs constraint order start shared_vg-clone then csmb_fs1-clone
      Adding shared_vg-clone csmb_fs1-clone (kind: Mandatory) (Options: first-action=start then-action=start)
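
      After the clones have started, you can confirm on each node that Pacemaker has mounted the GFS2 file systems; a minimal check:

      [root@z1 ~]# mount | grep gfs2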

8.2. Configuring Samba in a high availability cluster

To configure a Samba service in a Pacemaker cluster, configure the service on all nodes in the cluster.

Prerequisites

  • A two-node Red Hat High Availability cluster configured with a GFS2 file system, as described in Configuring a GFS2 file system for a Samba service in a high availability cluster.
  • A public directory created on your GFS2 file system to use for the Samba share. In this example, the directory is /srv/samba/share1.
  • Public virtual IP addresses that can be used to access the Samba share exported by this cluster.

Procedure

  1. On both nodes in the cluster, configure the Samba service and set up a share definition:

    1. Install the Samba and CTDB packages.

      # dnf -y install samba ctdb cifs-utils samba-winbind
    2. Ensure that the ctdb, smb, nmb, and winbind services are not running and do not start at boot.

      # systemctl disable --now ctdb smb nmb winbind
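
      You can confirm that the services are disabled and stopped; for example:

      # systemctl is-enabled ctdb smb nmb winbind
      # systemctl is-active ctdb smb nmb winbind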
    3. In the /etc/samba/smb.conf file, configure the Samba service and set up the share definition, as in the following example for a standalone server with one share.

      [global]
          netbios name = linuxserver
          workgroup = WORKGROUP
          security = user
          clustering = yes
      [share1]
          path = /srv/samba/share1
          read only = no
    4. Verify the /etc/samba/smb.conf file.

      # testparm
  2. On both nodes in the cluster, configure CTDB:

    1. Create the /etc/ctdb/nodes file and add the IP addresses of the cluster nodes, as in this example nodes file.

      192.0.2.11
      192.0.2.12
    2. Create the /etc/ctdb/public_addresses file and add the IP addresses and network device names of the cluster’s public interfaces to the file. When assigning IP addresses in the public_addresses file, ensure that the addresses are not already in use and that they are routable from the intended clients. The second field in each entry of the /etc/ctdb/public_addresses file is the interface to use on the cluster machines for the corresponding public address. In this example public_addresses file, the interface enp1s0 is used for all the public addresses.

      192.0.2.201/24 enp1s0
      192.0.2.202/24 enp1s0

      The public interfaces of the cluster are the ones that clients use to access Samba from their network. For load balancing purposes, add an A record for each public IP address of the cluster to your DNS zone. Each of these records must resolve to the same hostname. Clients use the hostname to access Samba and DNS distributes the clients to the different nodes of the cluster.
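
      For example, round-robin records in a BIND-style zone file might look like the following; the host name csmb is a hypothetical name chosen for this illustration.

      ; hypothetical zone file entries: one A record per CTDB public address
      csmb    IN    A    192.0.2.201
      csmb    IN    A    192.0.2.202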

    3. If you are running the firewalld service, enable the ports that are required by the ctdb and samba services.

      # firewall-cmd --add-service=ctdb --add-service=samba --permanent
      # firewall-cmd --reload
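
      To confirm that the services were added to the active configuration, you can list the allowed services:

      # firewall-cmd --list-services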
  3. On one node in the cluster, update the SELinux contexts:

    1. Update the SELinux contexts on the GFS2 share.

      [root@z1 ~]# semanage fcontext -at ctdbd_var_run_t -s system_u "/mnt/ctdb(/.*)?"
      [root@z1 ~]# restorecon -Rv /mnt/ctdb
    2. Update the SELinux context on the directory shared in Samba.

      [root@z1 ~]# semanage fcontext -at samba_share_t -s system_u "/srv/samba/share1(/.*)?"
      [root@z1 ~]# restorecon -Rv /srv/samba/share1
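
      You can confirm that the new contexts were applied; for example:

      [root@z1 ~]# ls -Zd /mnt/ctdb /srv/samba/share1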

8.3. Configuring Samba cluster resources

After configuring a Samba service on both nodes of a two-node high availability cluster, configure the Samba cluster resources for the cluster.

Prerequisites

  • A two-node Red Hat High Availability cluster configured with a Samba service on both nodes, as described in Configuring Samba in a high availability cluster.

Procedure

  1. On one node in the cluster, configure the Samba cluster resources:

    1. Create the CTDB resource, in group samba-group. The CTDB resource agent uses the ctdb_* options specified with the pcs command to create the CTDB configuration file. Create the resource as disabled so it does not start automatically before you have configured the necessary order constraints.

      [root@z1 ~]# pcs resource create --disabled ctdb --group samba-group ocf:heartbeat:CTDB ctdb_recovery_lock=/mnt/ctdb/ctdb.lock ctdb_dbdir=/var/lib/ctdb ctdb_logfile=/var/log/ctdb.log op monitor interval=10 timeout=30 op start timeout=90 op stop timeout=100
    2. Clone the samba-group resource group.

      [root@z1 ~]# pcs resource clone samba-group
    3. Create ordering constraints to ensure that all Filesystem resources are running before the resources in samba-group.

      [root@z1 ~]# pcs constraint order start ctdb_fs-clone then samba-group-clone
      [root@z1 ~]# pcs constraint order start csmb_fs1-clone then samba-group-clone
    4. Create the samba resource in the resource group samba-group. This creates an implicit ordering constraint between CTDB and Samba, based on the order they are added.

      [root@z1 ~]# pcs resource create samba --group samba-group systemd:smb
    5. Enable the ctdb and samba resources.

      [root@z1 ~]# pcs resource enable ctdb samba
    6. Check that all the services have started successfully.

      Note

      It can take a couple of minutes for CTDB to start Samba, export the shares, and stabilize. If you check the cluster status before this process has completed, you may see that the samba services are not yet running.

      [root@z1 ~]# pcs status
      
      ...
      
      Full List of Resources:
        * fence-z1   (stonith:fence_xvm): Started z1.example.com
        * fence-z2   (stonith:fence_xvm): Started z2.example.com
        * Clone Set: locking-clone [locking]:
          * Started: [ z1.example.com z2.example.com ]
        * Clone Set: shared_vg-clone [shared_vg]:
          * Started: [ z1.example.com z2.example.com ]
        * Clone Set: ctdb_fs-clone [ctdb_fs]:
          * Started: [ z1.example.com z2.example.com ]
        * Clone Set: csmb_fs1-clone [csmb_fs1]:
          * Started: [ z1.example.com z2.example.com ]
        * Clone Set: samba-group-clone [samba-group]:
          * Started: [ z1.example.com z2.example.com ]
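
      In addition to pcs status, the ctdb utility can report CTDB's own view of node health once the resources are up; all nodes should eventually show a status of OK.

      [root@z1 ~]# ctdb status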
  2. On both nodes in the cluster, add a local user for the test share directory.

    1. Add the user.

      # useradd -M -s /sbin/nologin example_user
    2. Set a password for the user.

      # passwd example_user
    3. Set an SMB password for the user.

      # smbpasswd -a example_user
      New SMB password:
      Retype new SMB password:
      Added user example_user
    4. Activate the user in the Samba database.

      # smbpasswd -e example_user
    5. Update the file ownership and permissions on the GFS2 share for the Samba user.

      # chown example_user:users /srv/samba/share1/
      # chmod 755 /srv/samba/share1/
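
      As an optional local check before the full verification procedure, you can list the shares as the new user from one of the cluster nodes; this assumes the samba-client package, which provides the smbclient utility, is installed.

      [root@z1 ~]# smbclient -L localhost -U example_user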

8.4. Verifying clustered Samba configuration

If your clustered Samba configuration is successful, you can mount the Samba share. After mounting the share, you can test Samba recovery by making the cluster node that is currently exporting the Samba share unavailable.

Procedure

  1. On a system that has access to one or more of the public IP addresses configured in the /etc/ctdb/public_addresses file on the cluster nodes, mount the Samba share using one of these public IP addresses.

    [root@testmount ~]# mkdir /mnt/sambashare
    [root@testmount ~]# mount -t cifs -o user=example_user //192.0.2.201/share1 /mnt/sambashare
    Password for example_user@//192.0.2.201/share1: XXXXXXX
  2. Verify that the file system is mounted.

    [root@testmount ~]# mount | grep /mnt/sambashare
    //192.0.2.201/share1 on /mnt/sambashare type cifs (rw,relatime,vers=1.0,cache=strict,username=example_user,domain=LINUXSERVER,uid=0,noforceuid,gid=0,noforcegid,addr=192.0.2.201,unix,posixpaths,serverino,mapposix,acl,rsize=1048576,wsize=65536,echo_interval=60,actimeo=1,user=example_user)
  3. Verify that you can create a file on the mounted file system.

    [root@testmount ~]# touch /mnt/sambashare/testfile1
    [root@testmount ~]# ls /mnt/sambashare
    testfile1
  4. Determine which cluster node is exporting the Samba share:

    1. On each cluster node, display the IP addresses assigned to the interface specified in the public_addresses file. The following commands display the IPv4 addresses assigned to the enp1s0 interface on each node.

      [root@z1 ~]# ip -4 addr show enp1s0 | grep inet
           inet 192.0.2.11/24 brd 192.0.2.255 scope global dynamic noprefixroute enp1s0
           inet 192.0.2.201/24 brd 192.0.2.255 scope global secondary enp1s0
      
      [root@z2 ~]# ip -4 addr show enp1s0 | grep inet
           inet 192.0.2.12/24 brd 192.0.2.255 scope global dynamic noprefixroute enp1s0
           inet 192.0.2.202/24 brd 192.0.2.255 scope global secondary enp1s0
    2. In the output of the ip command, find the node with the IP address you specified with the mount command when you mounted the share.

      In this example, the IP address specified in the mount command is 192.0.2.201. The output of the ip command shows that the IP address 192.0.2.201 is assigned to z1.example.com.

  5. Put the node that is exporting the Samba share into standby mode, which makes the node unable to host any cluster resources.

    [root@z1 ~]# pcs node standby z1.example.com
  6. From the system on which you mounted the file system, verify that you can still create a file on the file system.

    [root@testmount ~]# touch /mnt/sambashare/testfile2
    [root@testmount ~]# ls /mnt/sambashare
    testfile1  testfile2
  7. Delete the test files that you created to verify the file system mount. If you no longer require the file system to be mounted, unmount it at this point.

    [root@testmount ~]# rm /mnt/sambashare/testfile1 /mnt/sambashare/testfile2
    rm: remove regular empty file '/mnt/sambashare/testfile1'? y
    rm: remove regular empty file '/mnt/sambashare/testfile2'? y
    [root@testmount ~]# umount /mnt/sambashare
  8. From one of the cluster nodes, restore cluster services to the node that you previously put into standby mode. This will not necessarily move the service back to that node.

    [root@z1 ~]# pcs node unstandby z1.example.com
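
    You can confirm that the node has rejoined the cluster and is eligible to run resources again; for example:

    [root@z1 ~]# pcs status nodes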