Chapter 4. Software Installation and Configuration
A cluster is a complex arrangement of bits and pieces that, once combined with the software configuration, produces a highly available platform for mission-critical Oracle databases. We cannot repeat it often enough: complexity is public enemy number one. Clusters are, by definition, complex, and when they are poorly configured they defeat the very purpose for which they were deployed: high availability.
The software components of a cluster combine with a particular set of hardware components to produce a platform that is often unique, and that could fail simply because it was never fully tested in that specific configuration. This is just the reality of the modern enterprise. Under abnormal operating conditions (precisely when you most want the cluster to work), it is safe to say that no two clusters are alike in their ability to produce conditions that cause instability. Do not assume that your unique combination of hardware and software has ever existed, let alone been tested, in some mythical multi-vendor testing lab. Torture it before you put it into production.
The steps outlined in this chapter are performed on one node at a time; most of the process is then simply replicated on each remaining node.
4.1. RHEL Server Base
Some customers install every last package onto an Oracle database server because that simplifies their process; others have been known to hand-build kernels and delete every non-essential package. Most fall somewhere in between.
For our sanity (and we hope, yours), we install the minimum set of RPM groups that are necessary to run Red Hat Cluster Suite and Oracle Enterprise Edition.
The following shows the kickstart file for an HP ProLiant server with iLO, a StorageWorks controller, an outboard e1000 NIC, and a QLogic 2300-series FCP HBA.
You should take the following into account when considering which software components to install.
- This example is an NFS-based install. As always, no two kickstart files are the same.
- Customers often use auto-allocation, which creates a single logical volume instead of discrete partitions. It is not necessary to separate the root directories into separate mounts, and a 6GB root partition is probably overkill for an Oracle node. In either install configuration, ORACLE_HOME must be installed on an external LUN, while ORA_CRS_HOME (Oracle Clusterware for RAC/GFS) must be installed on a local partition on each node; an illustrative mount layout follows this list. The example below is from our RAC/GFS node.
- Only the groups listed below are required. All other packages and groups are included at the customer's discretion.
- SELinux must be disabled for all releases of Oracle except 11gR2.
- Firewalls are disabled, and not required (customer discretion).
- Deadline I/O scheduling is generally recommended, but some warehouse workloads might benefit from other algorithms.
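To make the home placement concrete, here is a sketch of what the resulting mount layout might look like on a RAC/GFS node. The device names and the /mnt/ohome mount point are assumptions for this illustration; only the /ee partition corresponds directly to the kickstart example that follows.

# Illustrative /etc/fstab entries for a RAC/GFS node (device names and the
# ORACLE_HOME mount point are assumptions, not values mandated by the install)

# Local ext3 partition created by the kickstart below; holds ORA_CRS_HOME
/dev/cciss/c0d0p5      /ee          ext3    defaults          1 2

# Shared GFS file system on an external LUN; holds ORACLE_HOME
/dev/vg_ora/lv_ohome   /mnt/ohome   gfs     defaults,noatime  0 0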
device scsi cciss
device scsi qla2300
install
nfs --server=192.168.1.212 --dir=/vol/ed/jneedham/ISO/RHEL5/U3/64
reboot yes
lang en_US.UTF-8
keyboard us
network --device=eth0 --bootproto=static --gateway=192.168.1.1 --ip=192.168.1.114 --nameserver=139.95.251.1 --netmask=255.255.255.0 --onboot=on
rootpw "oracleha"
authconfig --enableshadow --enablemd5
selinux --disabled
firewall --disabled --port=22:tcp
timezone --utc America/Vancouver
bootloader --location=mbr --driveorder=cciss/c0d0 --append="elevator=deadline"

# P A R T I T I O N   S P E C I F I C A T I O N
part swap --fstype swap --ondisk=cciss/c0d0 --usepart=cciss/c0d0p2 --size=16384 --asprimary
part / --fstype ext3 --ondisk=cciss/c0d0 --usepart=cciss/c0d0p3 --size=6144 --asprimary
part /ee --fstype ext3 --ondisk=cciss/c0d0 --usepart=cciss/c0d0p5 --noformat --size=32768

%packages
@development-libs
@x-software-development
@core
@base
@legacy-software-development
@java
@legacy-software-support
@base-x
@development-tools
@cluster-storage
@clustering
sysstat
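The kickstart above disables SELinux and the firewall and appends elevator=deadline to the boot loader line. After the first boot, a quick sanity check along the following lines can confirm that those settings took effect; this is a minimal sketch, and the sysfs path assumes the cciss/c0d0 boot disk used in this example.

# Confirm SELinux is disabled
getenforce

# Confirm the firewall is stopped and will stay off across reboots
service iptables status
chkconfig --list iptables

# Confirm the deadline elevator is active on the boot disk
# (sysfs replaces the '/' in cciss/c0d0 with '!')
cat /sys/block/cciss!c0d0/queue/scheduler

The scheduler can also be switched at run time by echoing a different algorithm into the same sysfs file, which is a convenient way to experiment before committing to a boot-time setting for warehouse workloads.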
Following installation, we often disable many of the services in the /etc/rc3.d directory. Most of these services are not required when the server is configured for Oracle use.
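As a sketch of what that clean-up might look like, the following uses chkconfig to turn off a handful of services that are commonly unnecessary on a dedicated database node. The service names are illustrative examples only; review your own list before disabling anything.

# Show everything enabled in runlevel 3
chkconfig --list | grep "3:on"

# Disable and stop services this node does not need
# (example services only -- verify each one in your environment)
for svc in cups bluetooth isdn pcscd avahi-daemon; do
    chkconfig "$svc" off
    service "$svc" stop
done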
Note
ext3 file systems that are created during an install do not have the maximum journal size. For RAC/GFS nodes, where ORA_CRS_HOME must live on this mount, we recommend that you rebuild the file system with the maximum journal size:
$ mke2fs -j -J size=400 /dev/cciss/c0d0p5
Oracle Clusterware can churn a file system, so larger journals and a local RAID algorithm that favors performance will be beneficial.
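Before rebuilding, it may be worth confirming what the installer-created file system actually contains; the sketch below inspects the journal on the same device used above (inode 8 is the reserved ext3 journal inode, and its Size field shows the journal size in bytes). Also remember that mke2fs recreates the file system, so run it only against an unmounted partition whose contents you can afford to lose.

# Does the file system have a journal, and which inode holds it?
tune2fs -l /dev/cciss/c0d0p5 | grep -i journal

# Inspect the journal inode itself; the Size field is the journal size in bytes
debugfs -R "stat <8>" /dev/cciss/c0d0p5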