Configuration Example - Oracle HA on Cluster Suite
Configuring Oracle for High Availability (HA) on Red Hat Cluster Suite
Edition 1
Abstract
Chapter 1. Introduction
1.1. About This Guide
1.2. Audience
1.3. Related Documentation
- Red Hat Cluster Suite Overview — Provides a high level overview of the Red Hat Cluster Suite.
- Logical Volume Manager Administration — Provides a description of the Logical Volume Manager (LVM), including information on running LVM in a clustered environment.
- Global File System: Configuration and Administration — Provides information about installing, configuring, and maintaining Red Hat GFS (Red Hat Global File System).
- Using Device-Mapper Multipath — Provides information about using the Device-Mapper Multipath feature of Red Hat Enterprise Linux 5.
- Red Hat Cluster Suite Release Notes — Provides information about the current release of Red Hat Cluster Suite.
Chapter 2. Overview
- Simple RDBMS Enterprise Edition failover
- Oracle RDBMS Real Application Clusters (RAC) on shared GFS file systems
Note
2.1. Oracle Enterprise Edition HA Components
2.1.1. Oracle Enterprise Edition HA for Red Hat Cluster Suite
2.1.2. Oracle Real Application Clusters for Red Hat Cluster Suite and GFS
2.2. Sample Two-Node Cluster
Figure 2.1. Sample Two-Node Oracle Cluster
Figure 2.2. Cluster Node Connections
Note
2.3. Storage Considerations
Note
- Rotational Speed – also known as spindle speed (RPM)
- Average Latency – time for the sector being accessed to rotate under a read/write head
- Average Seek – time it takes for the hard drive's read/write head to position itself over the track to be read or written
4 X 1TB 10kRPM SAS (RAID 0)
Avg. Latency = 3ms
Avg. Seek = 4.45ms
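As a sketch of the IOPS math implied by the drive specifications above (figures from this section; real drives vary), the nominal random-I/O rate is roughly the reciprocal of average latency plus average seek:

```shell
# IOPS ~= 1 / (avg_latency + avg_seek); 3ms latency + 4.45ms seek per the specs above.
awk 'BEGIN {
    iops = 1 / (0.003 + 0.00445)           # per-drive IOPS
    printf "%.0f IOPS per drive, ~%.0f for the 4-drive RAID 0\n", iops, 4 * iops
}'
```

Striping across the four drives scales the aggregate rate roughly linearly for random I/O, which is why spindle count matters as much as spindle speed.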
2.4. Storage Topology and DM-Multipath
2.5. Fencing Topology
2.6. Network Topology Overview
- The application tier, on behalf of users, must have access to the database nodes.
- The Red Hat Cluster Suite node monitoring services need access to all nodes to determine the state of the cluster.
- In the RAC case, Oracle Clusterware needs a high-speed pathway to implement Oracle Cache Fusion (or Global Cache Services – GCS).
Note
Chapter 3. Hardware Installation and Configuration
3.1. Server Node
3.2. Storage Topology
3.2.1. Storage Allocation
Warning
In this example, datafiles are created with AUTOEXTEND disabled. This generates alerts that cause DBAs to be notified of dynamic growth requests in the database. No two shops have the same policy towards AUTOEXTEND.
ARCHIVELOG is usually enabled for production databases.
Note
A dedicated LUN holds ORACLE_HOME. This file system should not contain any database files. This LUN must only hold the product home, and spare capacity for trace files. It could be as small as 8GB.
Note
The Oracle Clusterware home (ORA_CRS_HOME) cannot be located on a clustered GFS mount point.
Note
3.3. Network Topology
3.4. RAC/GFS Considerations
- Oracle Clusterware implements Virtual IP routing so that target IP addresses of the failed node can be quickly taken over by the surviving node. This means new connections see little or no delay.
- In the GFS/RAC cluster, Oracle uses the back-side network to implement Oracle Global Cache Fusion (GCS) and database blocks can be moved between nodes over this link. This can place extra load on this link, and for certain workloads, a second dedicated backside network might be required.
- Bonded GCS links using LACP (Link Aggregation Control Protocol) over multiple GbE links are supported for higher capacity, but not extensively tested. Customers may also run the simple two-NIC bond in load-balance mode, but the recommendation is to use it for failover, especially in the two-node case.
- Oracle GCS can also be implemented over Infiniband using the Reliable Data Sockets (RDS) protocol. This provides an extremely low latency, memory-to-memory connection. This strategy is more often required in high node-count clusters, which implement data warehouses. In these larger clusters, the inter-node traffic (and GCS coherency protocol) easily exhausts the capacity of conventional GbE/udp links.
- Oracle RAC has other strategies to preserve existing sessions and transactions from the failed node (Oracle Transparent Session and Application Migration/Failover). Most customers do not implement these features. However, they are available, and near non-stop failover is possible with RAC. These features are not available in the Cold Failover configuration, so the client tier must be configured accordingly.
- Oracle RAC is quite expensive, but can provide that last 5% of uptime that might make the extra cost worth every nickel. A simple two-node Red Hat Cluster Suite Oracle Failover cluster only requires one Enterprise Edition license. The two-node RAC/GFS cluster requires two Enterprise Edition licenses and a separately priced license for RAC (and partitioning).
3.5. Fencing Configuration
Node health is monitored by qdisk and the cman heartbeat mechanism that operates over the private, bonded network. If either node fails to “check-in” within a prescribed time, actions are taken to remove, or fence, the node from the rest of the active cluster. Fencing is the most important job that a cluster product must do. Inconsistent or unreliable fencing can result in corruption of the Oracle database -- it must be bulletproof.
Chapter 4. Software Installation and Configuration
4.1. RHEL Server Base
- This example is an NFS-based install. As always, no two kickstart files are the same.
- Customers often use auto-allocation, which creates a single logical volume for the partitions. It is not necessary to separate the root directories into separate mounts. A 6GB root partition is probably overkill for an Oracle node. In either install configuration, ORACLE_HOME must be installed on an external LUN. ORA_CRS_HOME (Oracle Clusterware for RAC/GFS) must be installed on a local partition on each node. The example below is from our RAC/GFS node.
- Only the groups listed below are required. All other packages and groups are included at the customer’s discretion.
- SELinux must be disabled for all releases of Oracle, except 11gR2.
- Firewalls are disabled, and not required (customer discretion).
- Deadline I/O scheduling is generally recommended, but some warehouse workloads might benefit from other algorithms.
device scsi cciss
device scsi qla2300
install
nfs --server=192.168.1.212 --dir=/vol/ed/jneedham/ISO/RHEL5/U3/64
reboot yes
lang en_US.UTF-8
keyboard us
network --device=eth0 --bootproto=static --gateway=192.168.1.1 --ip=192.168.1.114 --nameserver=139.95.251.1 --netmask=255.255.255.0 --onboot=on
rootpw "oracleha"
authconfig --enableshadow --enablemd5
selinux --disabled
firewall --disabled --port=22:tcp
timezone --utc America/Vancouver
bootloader --location=mbr --driveorder=cciss/c0d0 --append="elevator=deadline"
# P A R T I T I O N S P E C I F I C A T I O N
part swap --fstype swap --ondisk=cciss/c0d0 --usepart=cciss/c0d0p2 --size=16384 --asprimary
part / --fstype ext3 --ondisk=cciss/c0d0 --usepart=cciss/c0d0p3 --size=6144 --asprimary
part /ee --fstype ext3 --ondisk=cciss/c0d0 --usepart=cciss/c0d0p5 --noformat --size 32768
%packages
@development-libs
@x-software-development
@core
@base
@legacy-software-development
@java
@legacy-software-support
@base-x
@development-tools
@cluster-storage
@clustering
sysstat
Review the service links in the /etc/rc3.d directory. Most of these services are not required when the server is configured for Oracle use.
Note
Since ORA_CRS_HOME must live on this mount, we recommend that you rebuild the file system with the maximum journal size:
$ mke2fs -j -J size=400 /dev/cciss/c0d0p5
4.2. Storage Topology
4.2.1. HBA WWPN Mapping
The WWPNs appear in the /sys directory, but the switch often logs the port names as well, so you can look there if you know how the HBAs are connected to the switch.
$ cat /sys/class/fc_host/host0/port_name
0x210000e08b806ba0
$ cat /sys/class/fc_host/host1/port_name
0x210100e08ba06ba0
Use the WWPN from the /sys inquiry. Do not use the WWNN or node name. WWPNs need to be added to the initiator group on the array, and to the appropriate zone on the switch. Once these steps are complete, reboot the server and you should see two sets of identical LUNs. You cannot proceed to the multipath configuration section until there are two identical sets.
4.2.2. Multipath Configuration
DM-Multipath provides path failover and presents the aliased LUNs in /dev/mapper.
The device-mapper-multipath package provides an rc service and a disabled /etc/multipath.conf file. The task in this section is to create reasonable aliases for the LUNs, and also to define how failure processing is managed. The default configuration in this file blacklists everything, so this clause must be modified, removed, or commented out, and then multipath must be restarted or refreshed. Be sure the multipathd daemon is set to run at reboot. The server should also be rebooted now to ensure that the duplicate sets of LUNs are visible.
To retrieve the WWIDs for the aliases, run the scsi_id command on each LUN.
$ scsi_id -g -s /block/sdc #External LUN, returns 360a9800056724671684a514137392d65
$ scsi_id -g -s /block/sdd #External LUN, returns 360a9800056724671684a502d34555579
These WWIDs are used in the multipath stanzas of the multipath.conf file.
multipath {
no_path_retry fail
wwid 360a9800056724671684a514137392d65
alias qdisk
}
#The following 3 are voting disks that are necessary ONLY for the RAC/GFS configuration!
multipath {
no_path_retry fail
wwid 360a9800056724671684a502d34555579
alias vote1
}
multipath {
no_path_retry fail
wwid 360a9800056724671684a502d34555578
alias vote2
}
multipath {
no_path_retry fail
wwid 360a9800056724671684a502d34555577
alias vote3
}
The two settings that matter most are path_grouping_policy (set to failover) and path_checker (set to tur). Historically, the default path checker was readsector0 or directio, both of which create an I/O request. For voting disks on highly loaded clusters, this may cause voting “jitter”. The least invasive path-checking policy is TUR (Test Unit Ready), which rarely disturbs qdisk or Clusterware voting. TUR and zone isolation both reduce voting jitter. The voting LUNs could be further isolated into their own zone, but this would require dedicated WWPN pathways and would likely be more trouble than it is worth.
Some storage vendors supply recommended settings for the multipath.conf file, including procedures defined by the prio_callout parameter. Check with the vendor.
The following is the defaults stanza from our multipath.conf file.
defaults {
user_friendly_names yes
udev_dir /dev
polling_interval 10
selector "round-robin 0"
path_grouping_policy failover
getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
prio_callout /bin/true
path_checker tur
rr_min_io 100
rr_weight priorities
failback immediate
no_path_retry fail
}
Once the multipath.conf file is complete, restart the multipath service.
$ service multipathd restart
$ tail -f /var/log/messages #Should see aliases listed
$ chkconfig multipathd on
The multibus path_grouping_policy is slower than failover in certain situations.
Although failback or a faster polling_interval can shave some time, the bulk of the recovery latency is in the cluster take-over at the cluster and Oracle recovery layers. If high-speed takeover is a critical requirement, then consider using RAC.
Note
4.2.3. qdisk Configuration
Once multipath is configured, the qdisk alias appears in the /dev/mapper directory. The /dev/mapper/qdisk inode will need to be initialized and enabled as a service. This is one of the first pieces of information you need for the /etc/cluster/cluster.conf file.
$ mkqdisk -l HA585 -c /dev/mapper/qdisk
The quorum section at the top of the cluster.conf file looks like the following.
<?xml version="1.0"?>
<cluster config_version="1" name="HA585">
<fence_daemon post_fail_delay="0" post_join_delay="3"/>
<quorumd interval="7" device="/dev/mapper/qdisk" tko="9" votes="3" log_level="5"/>
Note
tune2fs -l /dev/mapper/vg1-oracle |grep -i "journal inode"
debugfs -R "stat <8>" /dev/mapper/vg1-oracle 2>&1 | awk '/Size: /{print $6}'
tune2fs -O ^has_journal /dev/mapper/vg1-oracle
tune2fs -J size=400 /dev/mapper/vg1-oracle
Warning
Note
4.3. Network Topology
Note
ARP to ensure the correct behavior in the event of a link failure.
4.3.1. Public Network
Note
4.3.2. Red Hat Cluster Suite Network
Note
The /etc/modprobe.conf file contains all four interfaces, and the two ports of the e1000 will be bonded together. The options for bond0 set the bond for failover (not load balance), and the link-sampling interval is 100ms. Once the modprobe.conf file is modified, either remove and reload the e1000 kernel module, or the modification will take effect at the next reboot.
alias eth0 tg3
alias eth1 tg3
alias eth2 e1000
alias eth3 e1000
alias bond0 bonding
options bond0 mode=1 miimon=100
ifcfg-eth2
# Intel Corporation 82546GB Gigabit Ethernet Controller
DEVICE=eth2
HWADDR=00:04:23:D4:88:BE
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
TYPE=Ethernet
ONBOOT=no
ifcfg-eth3
# Intel Corporation 82546GB Gigabit Ethernet Controller
DEVICE=eth3
HWADDR=00:04:23:D4:88:BF
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
TYPE=Ethernet
ONBOOT=no
ifcfg-bond0
DEVICE=bond0
IPADDR=192.168.2.162
NETMASK=255.255.255.0
NETWORK=192.168.2.0
BROADCAST=192.168.2.255
BOOTPROTO=none
TYPE=Ethernet
ONBOOT=yes
4.3.3. Fencing Network
The BMC interfaces (HP iLO in this example) are defined in the /etc/cluster/cluster.conf file. Most IPMI-based interfaces have only one network interface, which may prove to be a single point of failure for the fencing mechanism. A unique feature of Red Hat Cluster Suite is the ability to nest fence domains to provide an alternative fence method, in case the BMC pathway fails. A switched Power Distribution Unit (PDU) can be configured (and it frequently has only one port). We do not recommend the use of FCP port fencing or the T.10 SCSI reservations fence agent for mission-critical database applications. The address and user/password must also be correct in the /etc/cluster/cluster.conf file.
<fencedevices>
<fencedevice agent="fence_ilo" hostname="192.168.1.7" login="rac" name="jLO7" passwd="jeff99"/>
<fencedevice agent="fence_ilo" hostname="192.168.1.8" login="rac" name="jLO8" passwd="jeff99"/>
</fencedevices>
Note
The fencing configuration can be tested manually with the fence_node command. Test early and often.
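As a sketch of such a test, using a node name from the sample cluster.conf in this guide (run it from the surviving node; the target node will be power-cycled, so never do this on a production cluster):

```shell
# Manually fence the peer node to verify the iLO fencing path end-to-end.
# rac8-priv is the node name from the sample cluster.conf; substitute your own.
fence_node rac8-priv
```

Watch /var/log/messages on the surviving node to confirm the fence agent reports success.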
4.3.4. Red Hat Cluster Suite services
At this point the cluster.conf file can be completed and parts of the cluster can be initialized. Red Hat Cluster Suite consists of a set of services (cman, qdisk, fenced) that ensure cluster integrity. The values below are from the RAC example; the timeouts are good starting points for either configuration, and comments give the HA equivalents. More details on the RAC example will be provided in Chapter 5, RAC/GFS Cluster Configuration. More details on the HA example will be provided in Chapter 6, Cold Failover Cluster Configuration.
<cluster config_version="2" name="HA585">
<fence_daemon post_fail_delay="0" post_join_delay="3" />
<quorumd interval="7" device="/dev/mapper/qdisk" tko="9" votes="1" log_level="5"/>
<cman deadnode_timeout="30" expected_nodes="7"/>
<!-- cman deadnode_timeout="30" expected_votes="3" -->
<!-- totem token="31000" -->
<multicast addr="225.0.0.12"/>
<clusternodes>
<clusternode name="rac7-priv" nodeid="1" votes="1">
<multicast addr="225.0.0.12" interface="bond0"/>
<fence>
<method name="1">
<device name="jLO7"/>
</method>
</fence>
</clusternode>
<clusternode name="rac8-priv" nodeid="2" votes="1">
<multicast addr="225.0.0.12" interface="bond0"/>
<fence>
<method name="1">
<device name="jLO8"/>
</method>
</fence>
</clusternode>
<fencedevices>
<fencedevice agent="fence_ilo" hostname="192.168.1.7" login="rac" name="jLO7"
passwd="jeff123456"/>
<fencedevice agent="fence_ilo" hostname="192.168.1.8" login="rac" name="jLO8"
passwd="jeff123456"/>
</fencedevices>
The private names rac7-priv and rac8-priv need to be resolvable and are therefore included in every node's /etc/hosts file:
192.168.1.7 rac7-priv.example.com rac7-priv
192.168.1.8 rac8-priv.example.com rac8-priv
Note
Consider setting the init level to 2 in the /etc/inittab file to aid node testing. If the configuration is broken and the node reboots back into init 3, the startup will hang, which impedes debugging. Open a window and tail the /var/log/messages file to track your progress.
The qdiskd service is the first service to start and is responsible for parsing the cluster.conf file. Any errors will appear in the /var/log/messages file and qdiskd will exit. If qdiskd starts up, then cman should be started next.
Ensure the qdiskd and cman services will start on boot:
$ sudo chkconfig --level 3 qdiskd on
$ sudo chkconfig --level 3 cman on
Copy the multipath.conf and cluster.conf configuration files to the second node to make things easier. From here, the configuration process diverges to the point that further configuration is very RAC/GFS or HA specific. For information on configuring a RAC/GFS cluster, continue with Chapter 5, RAC/GFS Cluster Configuration. For information on configuring a cold failover HA cluster, continue with Chapter 6, Cold Failover Cluster Configuration.
Chapter 5. RAC/GFS Cluster Configuration
Oracle Clusterware must be installed on a local file system on each node (ORA_CRS_HOME). The Oracle RDBMS product files (ORACLE_HOME) can be installed on shared GFS volumes, although Context Dependent Pathnames (CDPN) will be required for some ORACLE_HOME directories.
5.1. Oracle Clusterware
The multipath aliases in /dev/mapper can now be used directly for the CRS Cluster Registry (OCR) and quorum (VOTE) files.
Note
The option="off" value in the fence section of the cluster.conf file can be set to ensure nodes are manually restarted. (The option value can be set to "reboot", "on", or "off"; by default, the value is "reboot".)
Note
The grub.conf file often contains a built-in 5-second delay for screen hold. Sometimes, every second counts.
5.1.1. Cluster Recovery Time
Note
5.2. Network Topology
<clusternode name="rac7-priv" nodeid="1" votes="1">
<multicast addr="225.0.0.12" interface="bond0"/>
<fence>
<method name="1">
<device name="jLO7"/>
</method>
</fence>
</clusternode>
<clusternode name="rac8-priv" nodeid="2" votes="1">
<multicast addr="225.0.0.12" interface="bond0"/>
<fence>
<method name="1">
<device name="jLO8"/>
</method>
</fence>
</clusternode>
5.3. GFS Configuration
Clustered LVM (CLVMD) requires a change to the /etc/lvm/lvm.conf file; you must set locking_type to 3:
# Type of locking to use. Defaults to local file-based locking (1).
# Turn locking off by setting to 0 (dangerous: risks metadata corruption
# if LVM2 commands get run concurrently).
# Type 2 uses the external shared library locking_library.
# Type 3 uses built-in clustered locking.
locking_type = 3
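As an aside, RHEL's lvm2-cluster package ships a helper that makes the same edit; a sketch, assuming the package is installed:

```shell
# Sets locking_type = 3 in /etc/lvm/lvm.conf (equivalent to the manual edit above).
lvmconf --enable-cluster
grep 'locking_type' /etc/lvm/lvm.conf    # confirm the change took effect
```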
Note
fenced service has not started.
Note
Warning
The gfs_grow command is used to expand the file system once the LUN has been expanded. Keeping the file system mapped to a single LUN reduces the errors (or bugs) that might arise during gfs_grow operations. There is no performance difference between using the DDM inode or subsequent CLVMD-created logical volumes built on these inodes. However, it must be stressed that you should perform a backup of your data before attempting this command, as there is a potential to render your data unusable.
5.3.1. GFS File System Creation
$ sudo gfs_mkfs -r 512 -j 4 -p lock_dlm -t rac585:gg /dev/mapper/ohome
$ sudo gfs_mkfs -j 4 -p lock_dlm -t rac585:db /dev/mapper/db
The first file system holds ORACLE_HOME, where the $OH/diag directory can contain thousands of trace files, spanning tens of GBs.
Note
ORA_CRS_HOME is not supported on GFS clustered volumes at this time. For most installations, this will not be an imposition. There are several advantages (including async rolling upgrades) to placing ORA_CRS_HOME on the node’s local file system, and most customers follow this practice.
5.3.2. /etc/fstab Entries
/dev/mapper/ohome /mnt/ohome gfs _netdev 0 0
/dev/mapper/db /mnt/db gfs _netdev 0 0
The _netdev mount option is also useful as it ensures the file systems are unmounted before cluster services shut down.
5.3.3. Context Dependent Pathnames (CDPN)
When ORACLE_HOME ($OH) is located on a GFS clustered volume, certain directories need to appear the same to each node (including names of files, such as listener.ora) but have node-specific contents.
- Change to the $OH/network directory:
  $ cd $OH/network
- Create directories that correspond to the hostnames:
  $ mkdir rac7
  $ mkdir rac8
- Create the admin directory in each directory:
  $ mkdir rac7/admin
  $ mkdir rac8/admin
- Create the CDPN link (from each host). On RAC7, in $OH/network:
  $ ln -s @hostname admin
  On RAC8, in $OH/network:
  $ ln -s @hostname admin
5.4. Oracle Settings and Suggestions
Two important initialization parameters are SGA_TARGET and FILESYSTEMIO_OPTIONS. Oracle performs more efficient I/O operations if the files on the GFS volumes are opened with DirectIO (DIO) and AsyncIO (AIO). This is accomplished using the filesystemio_options parameter:
filesystemio_options=setall
One init.ora parameter must be decided upon first: db_block_size. The default db_block_size for Oracle on Linux is 8K. GFS uses 4K blocks (as does x64 hardware). Although 4K blocks will out-perform 8K blocks in GFS, other factors in the application might mask this effect. Application performance requirements take precedence, so do not change it unless you know what you are doing. Using 2K blocks on GFS is not recommended. Most customers leave it at 8K.
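Putting these suggestions together, a hypothetical init.ora fragment might look like the following (the SGA size is illustrative only; size it to your workload):

```
# Hypothetical init.ora fragment -- values are illustrative, not prescriptive.
db_block_size=8192            # Linux default; leave at 8K unless you know better
filesystemio_options=setall   # enable DirectIO and AsyncIO on GFS volumes
sga_target=2G                 # automatic SGA management; size per workload
```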
5.4.1. RHEL Settings and Suggestions
Oracle requires several changes to /etc/sysctl.conf involving shared memory and semaphores. Clusterware requires the network settings to be altered. These are documented in the Oracle Install Guide or release notes for that particular version.
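As an illustrative sketch only (the authoritative values are in the Oracle Install Guide for your release and depend on installed RAM), the kernel parameters involved typically include:

```
# Hypothetical /etc/sysctl.conf fragment; values vary by Oracle version and RAM.
kernel.shmmax = 4294967295             # max shared memory segment size (bytes)
kernel.shmall = 268435456              # total shared memory, in pages
kernel.sem = 250 32000 100 128         # SEMMSL SEMMNS SEMOPM SEMMNI
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_max = 262144
net.core.wmem_max = 262144
```

Apply with `sysctl -p` after editing.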
5.4.2. Huge TLBs
Chapter 6. Cold Failover Cluster Configuration
Note
6.1. Red Hat Cluster Suite HA
6.2. Red Hat Cluster Suite Timers
Note
<cluster config_version="11" name="dl585">
<fence_daemon clean_start="1" post_fail_delay="0" post_join_delay="3" />
<quorumd device="/dev/mapper/qdisk" interval="2" log_level="5" tko="8" votes="1" />
<cman expected_votes="3" two_node="0" />
<totem token="33000" />
The tko parameter stands for Technical Knock Out (a boxing metaphor). The CMAN heartbeat timeout must be more than twice the quorum-disk timeout, which is interval × tko = 2 × 8 = 16 seconds; we therefore choose 33 seconds (the token value is in milliseconds). This delay gives the quorum daemon adequate time to establish which node is the master during a failure, or if there is a load spike that might delay voting. The expected_votes parameter is set to the number of node votes plus the quorum disk's vote (2 + 1 = 3).
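The timer relationship can be sanity-checked mechanically; a small sketch using the values from the cluster.conf fragment above:

```shell
# Verify token (ms) exceeds 2 x (quorumd interval x tko) seconds:
# qdisk timeout = 2 * 8 = 16s, twice that is 32s, token is 33s.
awk 'BEGIN {
    qdisk_timeout = 2 * 8          # quorumd interval * tko, in seconds
    token = 33000 / 1000           # totem token, converted from ms to seconds
    if (token > 2 * qdisk_timeout)
        print "OK: token exceeds twice the qdisk timeout"
    else
        print "BAD: raise the totem token value"
}'
```

Re-run the check whenever you tune interval, tko, or token so the master-election window is never squeezed out.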
6.3. RGManager Configuration
Note
RGManager manages the Oracle instance through the script oracledb.sh, found in /usr/share/cluster. The customer must always modify this script so that RGManager can identify the Oracle services that require cluster management. Oracle environment variables, such as ORACLE_HOME and ORACLE_SID, are critical to this identification. Oracle will likely use several file system mount points, and all mounts that are required to successfully run the database must be made known to RGManager.
$ sudo chkconfig --level 3 rgmanager on
6.3.1. Customizing oracledb.sh Environment Variables
# (1) You can comment out the LOCKFILE declaration below. This will prevent
# the need for this script to access anything outside of the ORACLE_HOME
# path.
#
# (2) You MUST customize ORACLE_USER, ORACLE_HOME, ORACLE_SID, and
# ORACLE_HOSTNAME to match your installation if not running from within
# rgmanager.
#
# (3) Do NOT place this script in shared storage; place it in ORACLE_USER's
# home directory in non-clustered environments and /usr/share/cluster
# in rgmanager/Red Hat cluster environments.
oracledb.sh script might need to have all of this disabled, including references to these obsolete services in the start_db, stop_db and get_db_status functions in the script.
6.3.1.1. DB_PROCNAMES
6.3.1.2. LSNR_PROCNAME
6.3.2. Network VIP for Oracle Listeners
# ip addr add 192.168.1.20/24 dev eth0
The VIP address is managed by rgmanager, and must be in the same subnet as the public, or front-side, physical network interfaces. The front-side network is the network the Listener uses, and clients will also have access to it.
<rm log_level="7">
<service domain="OracleHA" autostart="1" exclusive="1" name="oracle11g" recovery="relocate">
<oracledb home="/ff/11g/db" name="ed" type="11g" user="jneedham" vhost="192.168.1.60"/>
<ip address="192.168.1.60" />
<fs device...
</rm>
<oracledb home="/ff/11g/db" name="ed" type="11g" user="jneedham" vhost="hacf-vip"/>
<ip address="hacf-vip" />
The vhost argument must match the ip address clause in the service domain definition, and the /etc/hosts file must contain all the required addresses:
# Cold Failover VIP
192.168.1.60 hacf-vip
192.168.1.160 rac5
192.168.2.160 rac5-priv
192.168.2.5 rac5-jlo
192.168.1.161 rac6
192.168.2.161 rac6-priv
192.168.2.6 rac6-jlo
6.3.2.1. listener.ora Configuration
Failover of the listener is handled by the rgmanager package, but the listener's behavior is determined by the Oracle configuration file, listener.ora. The LISTENER tag in the file is the specific name of this listener instance. This is the default, but it can be changed, and often is, when there is more than one SQL*Net Listener service for this database.
LISTENER =
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=tcp)(HOST=hacf-vip)(PORT=1521)) #1521 is too common
(ADDRESS=(PROTOCOL=ipc)(KEY=PNPKEY)))
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = ed) # Needs to match DBNAME in inited.ora
(ORACLE_HOME = /ff/11g/db)
(SID_NAME = ed) # Needs to match the instance’s ORACLE_SID
)
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = /ff/11g/db)
(PROGRAM = extproc)
)
)
Clients use a tnsnames.ora configuration file that contains an alias that directs them to the virtual IP. The location of the database instance is now transparent to clients.
rhcs11g=
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = hacf-vip)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME =ed) # This is the ORACLE_SID
)
)
Java clients can connect with the JDBC Thin client driver. (More advanced JDBC connectivity does require an Oracle client install.) For JDBC Thin, the connection string cannot use the SQL*Net alias, but must encode the same information:
… getConnection ("jdbc:oracle:thin:@hacf-vip:1521:ed", "scott", "tiger")
6.3.3. File System Entries
<fs device="/dev/mapper/dbp5" force_unmount="1" fstype="ext3" mountpoint="/ff" name="ora_ff"/>
<fs device="/dev/mapper/dbp6" force_unmount="1" fstype="ext3" mountpoint="/gg" name="ora_gg"/>
<netfs host="F3040" export="/vol/ed" force_unmount="1" mountpoint="/mnt/ed" options="rw,hard,nointr,vers=3,rsize=32768,wsize=32768,actimeo=0,proto=tcp" name="ora_nfs_ed"/>
Note
The critical mount option is actimeo; it should be set to zero to ensure the access times stay current.
Appendix A. Sample cluster.conf File
The following is a sample cluster.conf file for a two-node cold failover configuration with power fencing via an APC power strip.
<?xml version="1.0"?>
<cluster config_version="1" name="HA585">
<fence_daemon clean_start="1" post_fail_delay="0" post_join_delay="3"/>
<quorumd interval="2" device="/dev/mapper/qdisk" tko="8" votes="1" log_level="7"/>
<cman expected_votes="3" two_node="0"/>
<totem token="33000"/>
<fencedevices>
<fencedevice agent="fence_apc" ipaddr="192.168.91.59" login="adminl" name="apc" passwd="password"/>
</fencedevices>
<clusternodes>
<clusternode name="ora11-priv" nodeid="1" votes="1">
<fence>
<method name="1">
<device name="apc" option="off" switch="1" port="2"/>
</method>
</fence>
</clusternode>
<clusternode name="ora12-priv" nodeid="2" votes="1">
<fence>
<method name="1">
<device name="apc" option="off" switch="1" port="5"/>
</method>
</fence>
</clusternode>
</clusternodes>
<rm log_level="7">
<service domain="OracleHA" autostart="1" exclusive="1" name="oracle11g" recovery="relocate">
<ip address="10.10.8.200"/>
<fs device="/dev/mapper/diskdp1" force_unmount="1" fstype="ext3" mountpoint="/diskd" name="diskd"/>
<oracledb home="/diskd/ora11gR1/db_1" name="oracledb" type="11g" user="oracle" vhost="10.10.8.200"/>
</service>
</rm>
</cluster>
Appendix B. Revision History
| Revision History | Date |
|---|---|
| Revision 1-24.400 | 2013-10-31 |
| Revision 1-24 | 2012-07-18 |
| Revision 1.0-0 | Fri Jul 23 2010 |
Index
A
- actimeo mount option, File System Entries
- application tier, Network Topology Overview
- ARCHIVELOG parameter, Storage Allocation
- auto-allocation, RHEL Server Base
- AUTOEXTEND parameter, Storage Allocation
B
- bonded public network interface, Public Network
C
- cluster recovery time, Cluster Recovery Time
- Cluster Suite network, Red Hat Cluster Suite Network
- Cluster Suite timeout settings, Red Hat Cluster Suite Timers
- cluster, two-node sample, Sample Two-Node Cluster
- cman service , Fencing Configuration, Red Hat Cluster Suite services
- context dependent pathnames (CDPN), Context Dependent Pathnames (CDPN)
- CSS network services, Oracle Clusterware
D
- datafile, Storage Allocation
- Device-Mapper Multipath (DM-Multipath), Storage Topology and DM-Multipath, Multipath Configuration
- configuration file, Multipath Configuration
E
- Enterprise Edition license, RAC/GFS Considerations
- ext3 file system, Oracle Enterprise Edition HA for Red Hat Cluster Suite, RHEL Server Base
F
- FCP switch infrastructure, Storage Topology
- fenced service, Red Hat Cluster Suite services, GFS Configuration
- fence_node command, Fencing Network
- fencing, Fencing Topology
- configuration, Fencing Configuration, Fencing Network
- power-managed, Cold Failover Cluster Configuration
- technologies, Fencing Configuration
- file system
- blocks-based, Storage Considerations
- files-based, Storage Considerations
- journal size, qdisk Configuration
- FILESYSTEMIO_OPTIONS tuning variable, Oracle Settings and Suggestions
- firewalls, RHEL Server Base
- Flash RAM card, Storage Considerations
- fstab file, /etc/fstab Entries
G
- GbE NIC, Server Node
- GCS protocol, Network Topology Overview
- GFS file system, GFS Configuration
- creation, GFS File System Creation
- growing, GFS Configuration
- gfs_grow command, GFS Configuration
- Global Cache Fusion (GCS), RAC/GFS Considerations
- Global Cache Services (GCS), see Oracle Cache Fusion
- Cache Fusion, Network Topology Overview
H
- heartbeat network, Network Topology Overview, Cold Failover Cluster Configuration
- HP Proliant DL585 server, Fencing Topology, Server Node, RHEL Server Base
I
- init.ora parameter, Oracle Settings and Suggestions
- Integrated Lights-Out Management (iLO), Fencing Topology, Server Node, Fencing Configuration, RHEL Server Base, Fencing Network
- IOPS math, Storage Considerations
- IP routing, virtual, RAC/GFS Considerations
- iSCSI technology, Storage Topology and DM-Multipath
J
- journal size, file system, qdisk Configuration
K
- kickstart file, RHEL Server Base
L
- license
- Enterprise Edition, RAC/GFS Considerations
- Real Application Clusters (RAC), RAC/GFS Considerations
- Link Aggregation Control Protocol (LACP), RAC/GFS Considerations
- listener.ora configuration file, listener.ora Configuration
- LUN, high-performance, Storage Topology
M
- Maximum Availability Architecture (MAA), Hardware Installation and Configuration
- modprobe.conf file, Red Hat Cluster Suite Network
- multipath.conf file, Multipath Configuration
N
- Native Command Queuing (NCQ), Storage Topology
- network
- private, Network Topology
- public, Network Topology
- topology, Network Topology Overview
- NICs
- dual-ported, Network Topology
- single-ported, Network Topology
- node testing, Red Hat Cluster Suite services
O
- Oracle
- Cache Fusion, Network Topology Overview
- Cache Fusion links, Network Topology Overview
- Global Cache Fusion (GCS), RAC/GFS Considerations
- Real Application Clusters, see Real Application Clusters, Oracle Real Application Clusters for Red Hat Cluster Suite and GFS
- Shared Global Area (SGA), Huge TLBs
- Oracle Enterprise Manager (OEM) console, Customizing oracledb.sh Environment Variables
- oracledb.sh script, RGManager Configuration
- ORACLE_HOME
- directory, Storage Allocation, RHEL Server Base, RAC/GFS Cluster Configuration, Context Dependent Pathnames (CDPN)
- environment variable, RGManager Configuration
- ORACLE_SID environment variable, RGManager Configuration
- ORA_CRS_HOME directory, Storage Allocation, RHEL Server Base, RAC/GFS Cluster Configuration, GFS File System Creation
P
- path_grouping_policy multipath parameter, Multipath Configuration
- power distribution unit (PDU), Fencing Configuration
- power-managed fencing, Cold Failover Cluster Configuration
- private network, Network Topology
- public network, Public Network, Network Topology
Q
- qdisk service, Red Hat Cluster Suite services
- qdisk, see quorum disk, Storage Allocation
- Queuing
- Native Command, Storage Topology
- Tagged, Storage Topology
- quorum disk (qdisk), Storage Allocation, Fencing Configuration, qdisk Configuration, Cold Failover Cluster Configuration, Red Hat Cluster Suite Timers
R
- RAC, see Real Application Clusters, Oracle Real Application Clusters for Red Hat Cluster Suite and GFS
- RAID set, Storage Topology
- RAID technology, Storage Considerations
- Real Application Clusters (RAC), Oracle Real Application Clusters for Red Hat Cluster Suite and GFS
- asymmetrical, Sample Two-Node Cluster
- certification requirements, Oracle Real Application Clusters for Red Hat Cluster Suite and GFS
- license, RAC/GFS Considerations
- symmetrical, Sample Two-Node Cluster
- recovery time, cluster, Cluster Recovery Time
- Reliable Datagram Sockets (RDS) protocol, RAC/GFS Considerations
- rgmanager package, Red Hat Cluster Suite HA
- RHEL server base, RHEL Server Base
- RPM groups, installation, RHEL Server Base
S
- scsi_id command, Multipath Configuration
- SELINUX, RHEL Server Base
- Serial Attached SCSI (SAS) drives, Storage Considerations
- Serial ATA (SATA), Storage Considerations, Storage Topology
- Serviceguard, HP, Oracle Enterprise Edition HA for Red Hat Cluster Suite
- SGA_TARGET tuning variable, Oracle Settings and Suggestions
- shared disk architecture, Oracle Real Application Clusters for Red Hat Cluster Suite and GFS
- Shared Global Area (SGA), Huge TLBs
- single-instance non-shared operation, Oracle Enterprise Edition HA for Red Hat Cluster Suite
- Solid State Drive (SSD), Storage Considerations
- SQL workloads, Sample Two-Node Cluster
- SQL*Net Network Listener, LSNR_PROCNAME
- storage considerations, Storage Considerations
- storage topology, Storage Topology
- sysctl.conf file, RHEL Settings and Suggestions
T
- tablespace, Storage Allocation
- Tagged Queuing, Storage Topology
- testing, node, Red Hat Cluster Suite services
- timeout settings, Cluster Suite, Red Hat Cluster Suite Timers
- tnsnames.ora configuration file, listener.ora Configuration
V
- virtual IP routing, RAC/GFS Considerations
- virtualization, Storage Allocation
- VLAN, Network Topology
W
- World Wide Port Name (WWPN) mapping, HBA WWPN Mapping
- WWID, Multipath Configuration