15 Installing the Clusterware Software
Note:
Oracle 10g was released back in 2003, and Red Hat Enterprise Linux 5 shipped after it, so by default the Oracle 10g installer refuses to install on Red Hat Enterprise Linux 5 or later. There are two workarounds:
1. Edit the /etc/redhat-release file so the installer sees a supported release;
2. Run runInstaller with the -ignoreSysPrereqs option.
We use the first method here.
Before the change:
[root@node1 /]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 5.5 (Tikanga)
[root@node1 /]#
After the change:
[root@node1 /]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 4 (Tikanga)
[root@node1 /]#
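The edit above can also be scripted. A minimal sketch, demonstrated on a scratch copy of the file so it can be tried safely; on the cluster nodes the real target is /etc/redhat-release, edited as root after saving a backup:

```shell
# Work on a scratch copy; on the real nodes, first back up the original
# (cp /etc/redhat-release /etc/redhat-release.bak) and run the same sed
# against /etc/redhat-release as root.
echo 'Red Hat Enterprise Linux Server release 5.5 (Tikanga)' > /tmp/redhat-release.test
# Downgrade the release string so the 10g installer's OS check passes.
sed -i 's/release 5\.5/release 4/' /tmp/redhat-release.test
cat /tmp/redhat-release.test
```

Remember to restore the original file after the installation is finished.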
Attach the Clusterware installation media to the CD-ROM drive, then mount the disc at /mnt:
[root@node1 /]# mount /dev/cdrom /mnt/
mount: block device /dev/cdrom is write-protected, mounting read-only
[root@node1 /]#
Log in to the graphical environment as the oracle user:
As prompted, run the rootpre.sh script as root on both nodes:
[root@node1 rootpre]# pwd
/mnt/10201_clusterware_linux_x86_64/rootpre
[root@node1 rootpre]# sh rootpre.sh
No OraCM running
[root@node1 rootpre]#
Continue the installation:
The installer reports an error: the libXp package is missing. As root, install the missing packages from the OS installation media:
[root@node1 media]# rpm -ivh libXp-1.0.0-8.1.el5.i386.rpm
warning: libXp-1.0.0-8.1.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
   1:libXp                  ########################################### [100%]
[root@node1 media]# rpm -ivh libXp-1.0.0-8.1.el5.x86_64.rpm
warning: libXp-1.0.0-8.1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
   1:libXp                  ########################################### [100%]
[root@node1 media]#
Note:
The missing packages must be installed on node2 as well!
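Both the 32-bit and 64-bit libXp packages must end up installed on every node. One way to verify is `rpm -qa --qf '%{NAME}.%{ARCH}\n' libXp`; the small helper below checks such output for both architectures, using a sample file in place of live rpm output:

```shell
# has_both_arches succeeds only when both the i386 and x86_64 builds of a
# package appear in a "name.arch" listing (one entry per line), as produced
# by: rpm -qa --qf '%{NAME}.%{ARCH}\n' libXp
has_both_arches() {
    grep -q '\.i386$' "$1" && grep -q '\.x86_64$' "$1"
}

# Sample listing standing in for real rpm output on a correctly patched node.
printf 'libXp.i386\nlibXp.x86_64\n' > /tmp/libxp.list
if has_both_arches /tmp/libxp.list; then
    echo 'libXp installed for both architectures'
fi
```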
Rerun:
node1-> sh /mnt/10201_clusterware_linux_x86_64/runInstaller
to restart the installation:
Next:
Next:
Note that the path points to ORA_CRS_HOME=/u01/app/oracle/product/10.2.0/crs_1, and set cluster_name=crs. Next:
Add the node2 information:
Set the eth0 interface as the public interface. Next:
Specify external redundancy for the OCR and set its location to /dev/raw/raw1. Next:
Specify external redundancy for the voting disk and set its location to /dev/raw/raw2. Next:
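Before handing /dev/raw/raw1 and /dev/raw/raw2 to the installer, it is worth confirming that the raw bindings exist and are owned by oracle:oinstall, e.g. with `raw -qa` and `ls -l /dev/raw/raw1 /dev/raw/raw2`. The sketch below checks an `ls -l` line for the expected owner; the sample line stands in for live output:

```shell
# owned_by_oracle checks the owner/group fields ($3/$4) of an `ls -l` line.
owned_by_oracle() {
    echo "$1" | awk '{ exit !($3 == "oracle" && $4 == "oinstall") }'
}

# Sample line standing in for real `ls -l /dev/raw/raw1` output.
line='crw-r----- 1 oracle oinstall 162, 1 May  6 10:00 /dev/raw/raw1'
if owned_by_oracle "$line"; then
    echo '/dev/raw/raw1 ownership OK'
fi
```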
Click Install to begin the installation:
Next:
Next:
As prompted, run the /u01/app/oracle/oraInventory/orainstRoot.sh
and /u01/app/oracle/product/10.2.0/crs_1/root.sh scripts as root on both nodes:
Node1:
[root@node1 ~]# /u01/app/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oracle/oraInventory to 770.
Changing groupname of /u01/app/oracle/oraInventory to oinstall.
The execution of the script is complete
[root@node1 ~]# /u01/app/oracle/product/10.2.0/crs_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: node1 node1-priv node1
node 2: node2 node2-priv node2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw2
Format of 1 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
node1
CSS is inactive on these nodes.
node2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
[root@node1 ~]#
Node2:
[root@node2 ~]# /u01/app/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oracle/oraInventory to 770.
Changing groupname of /u01/app/oracle/oraInventory to oinstall.
The execution of the script is complete
[root@node2 ~]#
[root@node2 ~]# /u01/app/oracle/product/10.2.0/crs_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: node1 node1-priv node1
node 2: node2 node2-priv node2
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
node1
node2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
/u01/app/oracle/product/10.2.0/crs_1/jdk/jre//bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory
[root@node2 ~]#
Note:
To resolve the error above (/u01/app/oracle/product/10.2.0/crs_1/jdk/jre//bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory):
This is in fact a known bug. The workaround:
a. On each node, edit the srvctl and vipca files under $ORA_CRS_HOME/bin:
add an unset LD_ASSUME_KERNEL statement before the ARGUMENTS="" line (line 165) in vipca, and after the export LD_ASSUME_KERNEL line (line 174) in srvctl.
The modified $ORA_CRS_HOME/bin/vipca file:
160 fi
161 export LD_LIBRARY_PATH
162 ;;
163 esac
164
165 unset LD_ASSUME_KERNEL
166 ARGUMENTS=""
167 NUMBER_OF_ARGUMENTS=$#
The modified $ORA_CRS_HOME/bin/srvctl file:
172 #Remove this workaround when the bug 3937317 is fixed
173 LD_ASSUME_KERNEL=2.4.19
174 export LD_ASSUME_KERNEL
175 unset LD_ASSUME_KERNEL
176
177 # Run ops control utility
178 $JRE $JRE_OPTIONS -classpath $CLASSPATH $TRACE oracle.ops.opsctl.OPSCTLDriver "$@"
179 exit $?
b. On either node, as root, run vipca manually; once the correct private-IP and VIP information has been configured, the CRS installation can finish.
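Step a can also be applied with sed (GNU sed assumed). The sketch below reproduces the relevant lines of each script in scratch copies so the commands can be tried safely; on the real nodes the targets are $ORA_CRS_HOME/bin/vipca and $ORA_CRS_HOME/bin/srvctl, edited as root:

```shell
# Scratch copies containing the unmodified lines around the edit points.
cat > /tmp/vipca.test <<'EOF'
esac

ARGUMENTS=""
NUMBER_OF_ARGUMENTS=$#
EOF
cat > /tmp/srvctl.test <<'EOF'
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL

# Run ops control utility
EOF

# vipca: insert "unset LD_ASSUME_KERNEL" before the ARGUMENTS="" line.
sed -i '/^ARGUMENTS=""/i unset LD_ASSUME_KERNEL' /tmp/vipca.test
# srvctl: append it after the "export LD_ASSUME_KERNEL" line.
sed -i '/^export LD_ASSUME_KERNEL/a unset LD_ASSUME_KERNEL' /tmp/srvctl.test

grep -n 'unset LD_ASSUME_KERNEL' /tmp/vipca.test /tmp/srvctl.test
```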
Next:
Enter the corresponding VIP aliases: node1-vip.oracleonlinux.cn and node2-vip.oracleonlinux.cn:
Next:
Next:
Then continue with the Clusterware installation:
At this point, click Retry:
Finally, the Clusterware installation is complete!
The resource status is as follows:
node1-> crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.node1.gsd  application    ONLINE    ONLINE    node1
ora.node1.ons  application    ONLINE    ONLINE    node1
ora.node1.vip  application    ONLINE    ONLINE    node1
ora.node2.gsd  application    ONLINE    ONLINE    node2
ora.node2.ons  application    ONLINE    ONLINE    node2
ora.node2.vip  application    ONLINE    ONLINE    node2
node1->
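A small sanity check can be scripted on top of `crs_stat -t`: flag any resource whose Target or State column is not ONLINE. The sketch below reuses the output captured above in place of a live cluster:

```shell
# Sample crs_stat -t output standing in for a live query.
cat > /tmp/crs_stat.out <<'EOF'
Name           Type           Target    State     Host
------------------------------------------------------------
ora.node1.gsd  application    ONLINE    ONLINE    node1
ora.node1.ons  application    ONLINE    ONLINE    node1
ora.node1.vip  application    ONLINE    ONLINE    node1
ora.node2.gsd  application    ONLINE    ONLINE    node2
ora.node2.ons  application    ONLINE    ONLINE    node2
ora.node2.vip  application    ONLINE    ONLINE    node2
EOF

# Skip the two header lines; print any row where Target ($3) or State ($4)
# is not ONLINE.
offline=$(awk 'NR>2 && ($3 != "ONLINE" || $4 != "ONLINE")' /tmp/crs_stat.out)
if [ -z "$offline" ]; then
    echo 'all CRS resources ONLINE'
else
    echo 'problem resources:'
    echo "$offline"
fi
```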