6.3. SMB


The Server Message Block (SMB) protocol can be used to access Red Hat Gluster Storage volumes by exporting directories in GlusterFS volumes as SMB shares on the server.
This section describes how to enable SMB shares, how to mount SMB shares on Microsoft Windows-based clients (both manually and automatically), and how to verify that the share has been mounted successfully.

Note

SMB access using the Mac OS X Finder is not supported.
The Mac OS X command line can be used to access Red Hat Gluster Storage volumes using SMB.
In Red Hat Gluster Storage, Samba is used to share volumes through the SMB protocol.

Warning

  • Samba version 3 is not supported. Ensure that you are using Samba 4.x. For more information regarding the installation and upgrade steps, refer to the Red Hat Gluster Storage 3.2 Installation Guide.
  • CTDB version 4.x is required for Red Hat Gluster Storage 3.2. This is provided in the Red Hat Gluster Storage Samba channel. For more information regarding the installation and upgrade steps, refer to the Red Hat Gluster Storage 3.2 Installation Guide.

Important

On Red Hat Enterprise Linux 7, enable the Samba firewall service in the active zones for runtime and permanent mode using the following commands:
To get a list of active zones, run the following command:
# firewall-cmd --get-active-zones
To allow the firewall services in the active zones, run the following commands:
# firewall-cmd --zone=zone_name --add-service=samba
# firewall-cmd --zone=zone_name --add-service=samba  --permanent

6.3.1. Setting up CTDB for Samba

In a replicated volume environment, the CTDB software (Cluster Trivial Database) has to be configured to provide high availability and lock synchronization for Samba shares. CTDB provides high availability by adding virtual IP addresses (VIPs) and a heartbeat service.
When a node in the trusted storage pool fails, CTDB enables a different node to take over the virtual IP addresses that the failed node was hosting. This ensures the IP addresses for the services provided are always available.

Important

On Red Hat Enterprise Linux 7, enable the CTDB firewall service in the active zones for runtime and permanent mode using the following commands:
To get a list of active zones, run the following command:
# firewall-cmd --get-active-zones
To add ports to the active zones, run the following commands:
# firewall-cmd --zone=zone_name --add-port=4379/tcp
# firewall-cmd --zone=zone_name --add-port=4379/tcp  --permanent

Note

Amazon Elastic Compute Cloud (EC2) does not support VIPs and is hence not compatible with this solution.
Prerequisites

Follow these steps before configuring CTDB on a Red Hat Gluster Storage Server:

  • If you already have an older version of CTDB (version <= ctdb1.x), then remove CTDB by executing the following command:
    # yum remove ctdb
    After removing the older version, proceed with installing the latest CTDB.

    Note

    Ensure that the system is subscribed to the samba channel to get the latest CTDB packages.
  • Install the latest version of CTDB on all the nodes that are used as Samba servers using the following command:
    # yum install ctdb
  • In a CTDB-based high availability environment for Samba, locks are not migrated on failover.
  • Ensure that TCP port 4379 is open between the Red Hat Gluster Storage servers; this is the internode communication port of CTDB.
Configuring CTDB on Red Hat Gluster Storage Server

To configure CTDB on a Red Hat Gluster Storage server, execute the following steps:

  1. Create a replicate volume. This volume will host only a zero-byte lock file, so choose bricks of minimal size. To create a replicate volume, run the following command:
    # gluster volume create volname replica n ipaddress:/brick path.......N times
    where,
    N: The number of nodes that are used as Samba servers. Each node must host one brick.
    For example:
    # gluster volume create ctdb replica 4 10.16.157.75:/rhgs/brick1/ctdb/b1 10.16.157.78:/rhgs/brick1/ctdb/b2 10.16.157.81:/rhgs/brick1/ctdb/b3 10.16.157.84:/rhgs/brick1/ctdb/b4
  2. In the following files, replace "all" in the statement META="all" with the name of the newly created volume (a sed command for this is sketched after this procedure):
    /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh
    /var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh
    For example:
    META="all"
      to
    META="ctdb"
  3. In the /etc/samba/smb.conf file add the following line in the global section on all the nodes:
    clustering=yes
  4. Start the volume.
    The S29CTDBsetup.sh script runs on all Red Hat Gluster Storage servers, adds an entry in /etc/fstab for the mount, and mounts the volume at /gluster/lock on all the nodes that run a Samba server. It also enables automatic start of the CTDB service on reboot.

    Note

    When you stop the special CTDB volume, the S29CTDB-teardown.sh script runs on all Red Hat Gluster Storage servers, removes the entry from /etc/fstab for the mount, and unmounts the volume at /gluster/lock.
  5. Verify that the file /etc/sysconfig/ctdb exists on all the nodes that are used as Samba servers. This file contains the CTDB configuration recommended by Red Hat Gluster Storage.
  6. Create the /etc/ctdb/nodes file on all the nodes that are used as Samba servers and add the IP addresses of these nodes to the file. For example:
    10.16.157.0
    10.16.157.3
    10.16.157.6
    10.16.157.9
    The IP addresses listed here are the private IP addresses of the Samba servers.
  7. On all the nodes that are used as Samba servers and require IP failover, create the /etc/ctdb/public_addresses file and add the virtual IP addresses that CTDB should create to this file. Add these IP addresses in the following format:
    <Virtual IP>/<routing prefix> <node interface>
    
    For example:
    192.168.1.20/24 eth0
    192.168.1.21/24 eth0
  8. Start the CTDB service on all the nodes by executing the following command:
    # service ctdb start
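For reference, the substitution in step 2 can also be scripted. The following is a minimal sketch, assuming the CTDB volume created in step 1 is named ctdb; run it on every node that is used as a Samba server:
# sed -i 's/META="all"/META="ctdb"/' /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh
# sed -i 's/META="all"/META="ctdb"/' /var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh
Afterwards, confirm that both scripts contain META="ctdb", for example by running grep '^META' on each file.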

6.3.2. Sharing Volumes over SMB

The following configuration items have to be implemented before using SMB with Red Hat Gluster Storage.
  1. Run the following command to allow Samba to communicate with brick processes even when using untrusted ports:
    # gluster volume set VOLNAME server.allow-insecure on
  2. Run the following command to enable SMB-specific metadata caching:
    # gluster volume set <volname> performance.cache-samba-metadata on
    
    volume set success

    Note

    Enable generic metadata caching to improve the performance of SMB access to Red Hat Gluster Storage volumes. For more information, see Section 20.7, “Directory Operations”.
  3. Edit the /etc/glusterfs/glusterd.vol file on each Red Hat Gluster Storage node and add the following setting:
    option rpc-auth-allow-insecure on

    Note

    This allows Samba to communicate with glusterd even with untrusted ports.
  4. Restart the glusterd service on each Red Hat Gluster Storage node (see the sketch after this procedure).
  5. Run the following command to ensure proper lock and I/O coherency:
    # gluster volume set VOLNAME storage.batch-fsync-delay-usec 0
  6. To verify that the volume can be accessed from the SMB/CIFS share, run the following command:
    # smbclient -L <hostname> -U%
    For example:
    # smbclient -L rhs-vm1 -U%
    Domain=[MYGROUP] OS=[Unix] Server=[Samba 4.1.17]
    
         Sharename       Type      Comment
         ---------       ----      -------
         IPC$            IPC       IPC Service (Samba Server Version 4.1.17)
         gluster-vol1    Disk      For samba share of volume vol1
    Domain=[MYGROUP] OS=[Unix] Server=[Samba 4.1.17]
    
         Server               Comment
         ---------            -------
    
         Workgroup            Master
         ---------            -------
  7. To verify that the SMB/CIFS share can be accessed by the user, run the following command:
    #  smbclient //<hostname>/gluster-<volname> -U <username>%<password>
    For example:
    # smbclient //10.0.0.1/gluster-vol1 -U root%redhat
    Domain=[MYGROUP] OS=[Unix] Server=[Samba 4.1.17]
    smb: \> mkdir test
    smb: \> cd test\
    smb: \test\> pwd
    Current directory is \\10.0.0.1\gluster-vol1\test\
    smb: \test\>
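For convenience, the volume options and the glusterd restart from the steps above can be applied in sequence. The following is a minimal sketch, assuming a volume named vol1; the option names and values are taken from steps 1, 2, and 5, and service glusterd restart is one way of performing step 4 (on Red Hat Enterprise Linux 7, systemctl restart glusterd is equivalent):
# gluster volume set vol1 server.allow-insecure on
# gluster volume set vol1 performance.cache-samba-metadata on
# service glusterd restart
# gluster volume set vol1 storage.batch-fsync-delay-usec 0
The glusterd restart must be run on every node after /etc/glusterfs/glusterd.vol has been edited, whereas the gluster volume set commands need to be run only once from any node.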
When a volume is started using the gluster volume start VOLNAME command, the volume is automatically exported through Samba on all Red Hat Gluster Storage servers running Samba.
To be able to mount from any server in the trusted storage pool, repeat these steps on each Red Hat Gluster Storage node. For more advanced configurations, refer to the Samba documentation.
  1. Open the /etc/samba/smb.conf file in a text editor and add the following lines for a simple configuration:
    [gluster-VOLNAME]
    comment = For samba share of volume VOLNAME
    vfs objects = glusterfs
    glusterfs:volume = VOLNAME
    glusterfs:logfile = /var/log/samba/VOLNAME.log
    glusterfs:loglevel = 7
    path = /
    read only = no
    guest ok = yes
    The configuration options are described in the following table; a filled-in example of this share section appears after this procedure:
    Table 6.7. Configuration Options

    path
        Required: Yes. Default value: n/a.
        It represents the path that is relative to the root of the gluster volume that is being shared. Hence / represents the root of the gluster volume. Exporting a subdirectory of a volume is supported and /subdir in path exports only that subdirectory of the volume.

    glusterfs:volume
        Required: Yes. Default value: n/a.
        The volume name that is shared.

    glusterfs:logfile
        Required: No. Default value: NULL.
        Path to the log file that will be used by the gluster modules that are loaded by the vfs plugin. Standard Samba variable substitutions as mentioned in smb.conf are supported.

    glusterfs:loglevel
        Required: No. Default value: 7.
        This option is equivalent to the client-log-level option of gluster. 7 is the default value and corresponds to the INFO level.

    glusterfs:volfile_server
        Required: No. Default value: localhost.
        The gluster server to be contacted to fetch the volfile for the volume. It takes the value, which is a list of white space separated elements, where each element is unix+/path/to/socket/file or [tcp+]IP|hostname|\[IPv6\][:port].
  2. Run service smb [re]start to start or restart the smb service.
  3. Run smbpasswd to set the SMB password.
    # smbpasswd -a username
    Specify the SMB password. This password is used during the SMB mount.
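As a filled-in illustration of the template in step 1, the share section for a hypothetical volume named vol1 would look as follows (the volume name, comment, and log file path are examples only):
[gluster-vol1]
comment = For samba share of volume vol1
vfs objects = glusterfs
glusterfs:volume = vol1
glusterfs:logfile = /var/log/samba/vol1.log
glusterfs:loglevel = 7
path = /
read only = no
guest ok = yes
After adding this section, restart the smb service and set the SMB password as described in steps 2 and 3.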

6.3.3. Mounting Volumes using SMB

Samba follows the permissions on the shared directory and uses the logged-in username to perform access control.
To allow a non-root user to read from and write to the mounted volume, execute the following steps:
  1. Add the user on all the Samba servers based on your configuration:
    # adduser username
  2. Add the user to the list of Samba users on all Samba servers and assign a password by executing the following command:
    # smbpasswd -a username
  3. Perform a FUSE mount of the gluster volume on any one of the Samba servers:
    # mount -t glusterfs -o acl ip-address:/volname /mountpoint
    For example:
    # mount -t glusterfs -o acl rhs-a:/repvol /mnt
  4. Provide the required permissions to the user by executing the appropriate setfacl command. For example:
    # setfacl -m user:username:rwx mountpoint
    For example:
    # setfacl -m user:cifsuser:rwx /mnt

6.3.3.1. Manually Mounting Volumes Using SMB on Red Hat Enterprise Linux and Windows

  • Mounting a Volume Manually using SMB on Red Hat Enterprise Linux
  • Mounting a Volume Manually using SMB through Microsoft Windows Explorer
  • Mounting a Volume Manually using SMB on Microsoft Windows Command-line.

Mounting a Volume Manually using SMB on Red Hat Enterprise Linux

To mount a Red Hat Gluster Storage volume manually using Server Message Block (SMB) on Red Hat Enterprise Linux, execute the following steps:
  1. Install the cifs-utils package on the client.
    # yum install cifs-utils
  2. Run mount -t cifs to mount the exported SMB share, using the syntax example as guidance.
    # mount -t cifs -o user=<username>,pass=<password> //<hostname>/gluster-<volname> /<mountpoint>
    For example:
    # mount -t cifs -o user=cifsuser,pass=redhat //rhs-a/gluster-repvol /cifs
  3. Run # smbstatus -S on the server to display the status of the volume:
    Service        pid     machine             Connected at
    -------------------------------------------------------------------
    gluster-VOLNAME 11967   __ffff_192.168.1.60  Mon Aug  6 02:23:25 2012
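The mount can also be confirmed from the client itself. The following is a minimal check based on the example above; mount -t cifs with no further arguments lists the mounted CIFS filesystems, and df shows the usage of the mount point:
# mount -t cifs
# df -h /cifs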

Mounting a Volume Manually using SMB through Microsoft Windows Explorer

To mount a Red Hat Gluster Storage volume manually using Server Message Block (SMB) on Microsoft Windows using Windows Explorer, follow these steps:
  1. In Windows Explorer, click Tools → Map Network Drive… to open the Map Network Drive screen.
  2. Choose the drive letter using the Drive drop-down list.
  3. In the Folder text box, specify the path of the server and the shared resource in the following format: \\SERVER_NAME\gluster-VOLNAME.
  4. Click Finish to complete the process, and display the network drive in Windows Explorer.
  5. Navigate to the network drive to verify it has mounted correctly.

Mounting a Volume Manually using SMB on Microsoft Windows Command-line.

To mount a Red Hat Gluster Storage volume manually using Server Message Block (SMB) on Microsoft Windows using the command line, follow these steps:
  1. Click Start → Run, and then type cmd.
  2. Enter net use z: \\SERVER_NAME\gluster-VOLNAME, where z: is the drive letter to assign to the shared volume.
    For example, net use y: \\server1\gluster-test-volume
  3. Navigate to the network drive to verify it has mounted correctly.

6.3.3.2. Automatically Mounting Volumes Using SMB on Red Hat Enterprise Linux and Windows

You can configure Red Hat Enterprise Linux and Microsoft Windows-based clients to automatically mount Red Hat Gluster Storage volumes using SMB each time the system starts.
  • Mounting a Volume Automatically using SMB on Red Hat Enterprise Linux
  • Mounting a Volume Automatically on Server Start using SMB through Microsoft Windows Explorer

Mounting a Volume Automatically using SMB on Red Hat Enterprise Linux

To mount a Red Hat Gluster Storage volume automatically using SMB at server start, execute the following steps:
  1. Open the /etc/fstab file in a text editor.
  2. Append the following configuration to the fstab file.
    You must specify the path of a file that contains the user name and/or password in the credentials option of the /etc/fstab entry. See the mount.cifs man page for more information; an example credentials file is sketched after this procedure.
    \\HOSTNAME|IPADDRESS\SHARE_NAME MOUNTDIR cifs OPTIONS 0 0
    Using the example server names, the entry contains the following replaced values.
    \\server1\gluster-test-volume /mnt/glusterfs cifs credentials=/etc/samba/passwd,_netdev 0 0
  3. Run # smbstatus -S on the server to display the status of the volume:
    Service        pid     machine             Connected at
    -------------------------------------------------------------------
    gluster-VOLNAME 11967   __ffff_192.168.1.60  Mon Aug  6 02:23:25 2012
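The credentials file referenced in the fstab entry holds the SMB user name and password so that they do not appear in /etc/fstab itself. The following is a minimal sketch, assuming the file is /etc/samba/passwd and the user is the cifsuser account created earlier; the username and password keywords are those documented in the mount.cifs man page:
username=cifsuser
password=redhat
Restrict access to the file so that only root can read it:
# chmod 600 /etc/samba/passwd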

Mounting a Volume Automatically on Server Start using SMB through Microsoft Windows Explorer

To mount a Red Hat Gluster Storage volume automatically on server start using Server Message Block (SMB) through Microsoft Windows Explorer, follow these steps:
  1. In Windows Explorer, click Tools → Map Network Drive… to open the Map Network Drive screen.
  2. Choose the drive letter using the Drive drop-down list.
  3. In the Folder text box, specify the path of the server and the shared resource in the following format: \\SERVER_NAME\gluster-VOLNAME.
  4. Click the Reconnect at logon check box.
  5. Click Finish to complete the process, and display the network drive in Windows Explorer.
  6. If the Windows Security screen pops up, enter the username and password and click OK.
  7. Navigate to the network drive to verify it has mounted correctly.

6.3.4. Starting and Verifying your Configuration

Perform the following to start and verify your configuration:

Verify the Configuration

Verify that the virtual IP (VIP) addresses of a shut-down server are carried over to another server in the replicated volume environment.
  1. Verify that CTDB is running using the following commands:
    # ctdb status
    # ctdb ip
    # ctdb ping -n all
  2. Mount a Red Hat Gluster Storage volume using any one of the VIPs (see the sketch after this procedure).
  3. Run # ctdb ip to locate the physical server serving the VIP.
  4. Shut down the CTDB VIP server to verify successful configuration.
    When the Red Hat Gluster Storage server serving the VIP is shut down, there will be a pause for a few seconds, and then I/O will resume.
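For step 2, any virtual IP address listed in /etc/ctdb/public_addresses can be used as the server address in the mount command. The following is a minimal sketch, assuming the VIP 192.168.1.20 from the earlier example, the gluster-repvol share, the cifsuser account, and /mnt/smb as the mount point:
# mount -t cifs -o user=cifsuser,pass=redhat //192.168.1.20/gluster-repvol /mnt/smb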

6.3.5. Disabling SMB Shares

To stop automatic sharing on all nodes for all volumes, execute the following steps:

  1. On all Red Hat Gluster Storage servers, with elevated privileges, navigate to /var/lib/glusterd/hooks/1/start/post.
  2. Rename S30samba-start.sh to K30samba-start.sh, as shown in the example after this procedure.
    For more information about these scripts, see Section 16.2, “Prepackaged Scripts”.
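For example, run the following on every Red Hat Gluster Storage server:
# cd /var/lib/glusterd/hooks/1/start/post
# mv S30samba-start.sh K30samba-start.sh
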
To stop automatic sharing on all nodes for one particular volume:

  1. Run the following command to disable automatic SMB sharing per-volume:
    # gluster volume set <VOLNAME> user.smb disable

6.3.6. Accessing Snapshots in Windows

A snapshot is a read-only point-in-time copy of the volume. Windows has a built-in mechanism to browse snapshots via the Volume Shadow Copy Service (VSS). Using this feature, users can access previous versions of any file or folder with minimal steps.

Note

Shadow Copy (also known as Volume Shadow Copy Service, or VSS) is a technology included in Microsoft Windows that allows both taking and viewing snapshots of computer files or volumes. Red Hat Gluster Storage currently supports only viewing snapshots; creating snapshots with this interface is NOT supported.

6.3.6.1. Configuring Shadow Copy

To configure shadow copy, the following configurations must be modified in the smb.conf file, which is located at /etc/samba/smb.conf.

Note

Ensure that the shadow_copy2 module is enabled in smb.conf. To enable it, add shadow_copy2 to the vfs objects option.
For example:
vfs objects = shadow_copy2 glusterfs
Table 6.8. Configuration Options

shadow:snapdir
    Required: Yes. Default value: n/a.
    Path to the directory where snapshots are kept. The snapdir name should be .snaps.

shadow:basedir
    Required: Yes. Default value: n/a.
    Path to the base directory that snapshots are from. The basedir value should be /.

shadow:sort
    Optional. Default value: unsorted.
    The supported values are asc/desc. By this parameter one can specify that the shadow copy directories should be sorted before they are sent to the client. This can be beneficial as unix filesystems are usually not listed alphabetically sorted. If enabled, it is specified in descending order.

shadow:localtime
    Optional. Default value: UTC.
    This is an optional parameter that indicates whether the snapshot names are in UTC/GMT or in local time.

shadow:format
    Required: Yes. Default value: n/a.
    This parameter specifies the format specification for the naming of snapshots. The format must be compatible with the conversion specifications recognized by str[fp]time. The default value is _GMT-%Y.%m.%d-%H.%M.%S.

shadow:fixinodes
    Optional. Default value: No.
    If you enable shadow:fixinodes then this module will modify the apparent inode number of files in the snapshot directories using a hash of the file's path. This is needed for snapshot systems where the snapshots have the same device:inode number as the original files (such as happens with GPFS snapshots). If you don't set this option then the 'restore' button in the shadow copy UI will fail with a sharing violation.

shadow:snapprefix
    Optional. Default value: n/a.
    Regular expression to match the prefix of the snapshot name. Red Hat Gluster Storage only supports Basic Regular Expressions (BRE).

shadow:delimiter
    Optional. Default value: _GMT.
    The delimiter used to separate shadow:snapprefix and shadow:format.
Following is an example of the smb.conf file:
[gluster-vol0]
comment = For samba share of volume vol0
vfs objects = shadow_copy2 glusterfs
glusterfs:volume = vol0
glusterfs:logfile = /var/log/samba/glusterfs-vol0.%M.log
glusterfs:loglevel = 3
path = /
read only = no
guest ok = yes
shadow:snapdir = /.snaps
shadow:basedir = /
shadow:sort = desc
shadow:snapprefix= ^S[A-Za-z0-9]*p$
shadow:format = _GMT-%Y.%m.%d-%H.%M.%S
In the above example, the listed parameters have to be added to the smb.conf file to enable shadow copy; the parameters marked optional in Table 6.8 may be omitted.
Shadow copy filters all snapshots based on the smb.conf entries and shows only those snapshots that match the criteria. In the example above, the snapshot name must start with an 'S' and end with a 'p', with any alphanumeric characters in between. For example, of the following snapshots, the first two will be shown by Windows and the last one will be ignored. These options therefore control which snapshots are shown and which are not.
Snap_GMT-2016.06.06-06.06.06
Sl123p_GMT-2016.07.07-07.07.07
xyz_GMT-2016.08.08-08.08.08
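A snapshot whose final name matches this filter can be created with the gluster snapshot command. The following is a sketch only, assuming a volume named vol0 and the default behavior of appending a _GMT-<timestamp> suffix to the snapshot name; the chosen name Sl123p matches the ^S[A-Za-z0-9]*p$ prefix configured above:
# gluster snapshot create Sl123p vol0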
After editing the smb.conf file, execute the following steps to enable snapshot access:
  1. Run service smb [re]start to start or restart the smb service.
  2. Enable User Serviceable Snapshots (USS) for Samba. For more information, see Section 8.13, “User Serviceable Snapshots”.

6.3.6.2. Accessing Snapshot

To access snapshot on the Windows system, execute the following steps:
  1. Right-click the file or directory for which the previous version is required.
  2. Click Restore previous versions.
  3. In the dialog box, select the Date/Time of the previous version of the file, and select either Open, Restore, or Copy.
    where,
    Open: Lets you open the required version of the file in read-only mode.
    Restore: Restores the file back to the selected version.
    Copy: Lets you copy the file to a different location.

    Figure 6.1. Accessing Snapshot

6.3.7. Tuning Performance

To improve the performance of SMB access to Red Hat Gluster Storage volumes, the maximum metadata (stat, xattr) caching time on the client side is increased to 10 minutes; cache invalidation ensures that the cache remains consistent.
Significant performance improvements are observed in the following workloads:
  • Listing of directories (recursive)
  • Creating files
  • Deleting files
  • Renaming files

6.3.7.1. Enabling Metadata Caching

To enable metadata caching, execute the following commands from any one of the nodes in the trusted storage pool, in the order given below. A combined example for a sample volume follows this procedure.
  1. To enable cache invalidation and increase the timeout to 10 minutes, execute the following commands:
    # gluster volume set <volname> features.cache-invalidation on
    
    volume set success
    # gluster volume set <volname> features.cache-invalidation-timeout 600
    
    volume set success
    To enable metadata caching on the client and to maintain cache consistency, execute the following commands:
    # gluster volume set <volname> performance.stat-prefetch on
    
    volume set success
    # gluster volume set <volname> performance.cache-invalidation on
    
    volume set success
    # gluster volume set <volname> performance.cache-samba-metadata on
    
    volume set success
  2. To increase the client side metadata cache timeout to 10 minutes, execute the following command:
    # gluster volume set <volname> performance.md-cache-timeout 600
    
    volume set success
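The same options can be applied to a specific volume in one pass. The following is a minimal sketch, assuming a volume named vol1; the option names and values are exactly those listed in the steps above:
# gluster volume set vol1 features.cache-invalidation on
# gluster volume set vol1 features.cache-invalidation-timeout 600
# gluster volume set vol1 performance.stat-prefetch on
# gluster volume set vol1 performance.cache-invalidation on
# gluster volume set vol1 performance.cache-samba-metadata on
# gluster volume set vol1 performance.md-cache-timeout 600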