3.5. Configuring the Cluster Resources
This section provides the procedure for configuring the cluster resources for this use case.
Note
When you create a cluster resource with the pcs resource create command, it is recommended that you run the pcs status command immediately afterwards to verify that the resource is running. Note that if you have not configured a fencing device for your cluster, as described in Section 1.3, “Fencing Configuration”, by default the resources do not start.
If you find that the resources you configured are not running, you can run the pcs resource debug-start resource command to test the resource configuration. This starts the service outside of the cluster’s control and knowledge. When the configured resources are running again, run pcs resource cleanup resource to make the cluster aware of the updates. For information on the pcs resource debug-start command, see the High Availability Add-On Reference manual.
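For example, if the my_lvm resource created later in this procedure failed to start, you could test its configuration and then make the cluster aware of the result as follows; the resource name is shown here only as an illustration.
[root@z1 ~]# pcs resource debug-start my_lvm
[root@z1 ~]# pcs resource cleanup my_lvm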
The following procedure configures the system resources. To ensure these resources all run on the same node, they are configured as part of the resource group nfsgroup. The resources will start in the order in which you add them to the group, and they will stop in the reverse order in which they are added to the group. Run this procedure from one node of the cluster only.
- The following command creates the LVM resource named my_lvm. This command specifies the exclusive=true parameter to ensure that only the cluster is capable of activating the LVM logical volume. Because the resource group nfsgroup does not yet exist, this command creates the resource group.
[root@z1 ~]# pcs resource create my_lvm LVM volgrpname=my_vg \
exclusive=true --group nfsgroup
Check the status of the cluster to verify that the resource is running.
[root@z1 ~]# pcs status
Cluster name: my_cluster
Last updated: Thu Jan 8 11:13:17 2015
Last change: Thu Jan 8 11:13:08 2015
Stack: corosync
Current DC: z2.example.com (2) - partition with quorum
Version: 1.1.12-a14efad
2 Nodes configured
3 Resources configured

Online: [ z1.example.com z2.example.com ]

Full list of resources:
 myapc  (stonith:fence_apc_snmp):       Started z1.example.com
 Resource Group: nfsgroup
     my_lvm     (ocf::heartbeat:LVM):   Started z1.example.com

PCSD Status:
  z1.example.com: Online
  z2.example.com: Online

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
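Optionally, you can review the parameters that were set on the new resource with a command such as the following; this check is not required by the procedure.
[root@z1 ~]# pcs resource show my_lvm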
- Configure a Filesystem resource for the cluster.
Note
You can specify mount options as part of the resource configuration for a Filesystem resource with the options=options parameter. Run the pcs resource describe Filesystem command for full configuration options. An example that adds a mount option is shown at the end of this step.
The following command configures an ext4 Filesystem resource named nfsshare as part of the nfsgroup resource group. This file system uses the LVM volume group and ext4 file system you created in Section 3.2, “Configuring an LVM Volume with an ext4 File System” and will be mounted on the /nfsshare directory you created in Section 3.3, “NFS Share Setup”.
[root@z1 ~]# pcs resource create nfsshare Filesystem \
device=/dev/my_vg/my_lv directory=/nfsshare \
fstype=ext4 --group nfsgroup
Verify that the my_lvm and nfsshare resources are running.
[root@z1 ~]# pcs status
...
Full list of resources:
 myapc  (stonith:fence_apc_snmp):       Started z1.example.com
 Resource Group: nfsgroup
     my_lvm     (ocf::heartbeat:LVM):   Started z1.example.com
     nfsshare   (ocf::heartbeat:Filesystem):    Started z1.example.com
...
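For example, if you wanted the file system mounted with the noatime mount option (not needed for this use case), the resource could have been created with the options parameter, as in the following illustrative variant of the command above.
[root@z1 ~]# pcs resource create nfsshare Filesystem \
device=/dev/my_vg/my_lv directory=/nfsshare \
fstype=ext4 options=noatime --group nfsgroup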
- Create the nfsserver resource named nfs-daemon as part of the resource group nfsgroup.
Note
The nfsserver resource allows you to specify an nfs_shared_infodir parameter, which is a directory that NFS daemons will use to store NFS-related stateful information. It is recommended that this attribute be set to a subdirectory of one of the Filesystem resources you created in this collection of exports. This ensures that the NFS daemons are storing their stateful information on a device that will become available to another node if this resource group should need to relocate. In this example, /nfsshare is the shared-storage directory managed by the Filesystem resource, /nfsshare/exports/export1 and /nfsshare/exports/export2 are the export directories, and /nfsshare/nfsinfo is the shared-information directory for the nfsserver resource.
[root@z1 ~]# pcs resource create nfs-daemon nfsserver \
nfs_shared_infodir=/nfsshare/nfsinfo nfs_no_notify=true \
--group nfsgroup
[root@z1 ~]# pcs status
...
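To see the full set of parameters that the nfsserver resource agent supports, you can optionally run the following command, just as you can run pcs resource describe Filesystem for the Filesystem resource.
[root@z1 ~]# pcs resource describe nfsserver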
- Add the exportfs resources to export the /nfsshare/exports directory. These resources are part of the resource group nfsgroup. This builds a virtual directory for NFSv4 clients. NFSv3 clients can access these exports as well.
[root@z1 ~]# pcs resource create nfs-root exportfs \
clientspec=192.168.122.0/255.255.255.0 \
options=rw,sync,no_root_squash \
directory=/nfsshare/exports \
fsid=0 --group nfsgroup
[root@z1 ~]# pcs resource create nfs-export1 exportfs \
clientspec=192.168.122.0/255.255.255.0 \
options=rw,sync,no_root_squash directory=/nfsshare/exports/export1 \
fsid=1 --group nfsgroup
[root@z1 ~]# pcs resource create nfs-export2 exportfs \
clientspec=192.168.122.0/255.255.255.0 \
options=rw,sync,no_root_squash directory=/nfsshare/exports/export2 \
fsid=2 --group nfsgroup
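Once these resources have started, you can optionally confirm the exports on the node where the nfsgroup resource group is running with the standard exportfs utility; the output is not shown here because it depends on your environment.
[root@z1 ~]# exportfs -v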
- Add the floating IP address resource that NFS clients will use to access the NFS share. The floating IP address that you specify requires a reverse DNS lookup or it must be specified in the /etc/hosts file on all nodes in the cluster. This resource is part of the resource group nfsgroup. For this example deployment, we are using 192.168.122.200 as the floating IP address.
[root@z1 ~]# pcs resource create nfs_ip IPaddr2 \
ip=192.168.122.200 cidr_netmask=24 --group nfsgroup
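For example, if the floating IP address does not have a reverse DNS entry, you might add a line such as the following to /etc/hosts on every cluster node; the host name nfsserver.example.com is only an illustration, so use whatever name your NFS clients will use.
192.168.122.200    nfsserver.example.com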
- Add an nfsnotify resource for sending NFSv3 reboot notifications once the entire NFS deployment has initialized. This resource is part of the resource group nfsgroup.
Note
For the NFS notification to be processed correctly, the floating IP address must have a host name associated with it that is consistent on both the NFS servers and the NFS client.
[root@z1 ~]# pcs resource create nfs-notify nfsnotify \
source_host=192.168.122.200 --group nfsgroup
After creating the resources, you can check the status of the cluster; the resource group itself provides the ordering and colocation constraints that keep these resources together. Note that all resources are running on the same node.
[root@z1 ~]# pcs status
...
Full list of resources:
 myapc  (stonith:fence_apc_snmp):       Started z1.example.com
 Resource Group: nfsgroup
     my_lvm     (ocf::heartbeat:LVM):   Started z1.example.com
     nfsshare   (ocf::heartbeat:Filesystem):    Started z1.example.com
     nfs-daemon (ocf::heartbeat:nfsserver):     Started z1.example.com
     nfs-root   (ocf::heartbeat:exportfs):      Started z1.example.com
     nfs-export1        (ocf::heartbeat:exportfs):      Started z1.example.com
     nfs-export2        (ocf::heartbeat:exportfs):      Started z1.example.com
     nfs_ip     (ocf::heartbeat:IPaddr2):       Started z1.example.com
     nfs-notify (ocf::heartbeat:nfsnotify):     Started z1.example.com
...
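If you want to review the full definition of each resource rather than just its status, you can optionally display every configured parameter with a command such as the following.
[root@z1 ~]# pcs resource show --full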