In Section 2.13 we configured the shared disks for both RAC nodes. Next we need to partition these shared disks and then use asmlib to configure them as ASM disks, which will later store the OCR, the Voting Disk, and the database.
Note: the disks only need to be partitioned on one of the two nodes; here we do it on node1.
We use the asmlib packages to create the ASM disks rather than raw devices; as of 11gR2, the OUI graphical installer no longer supports raw devices.
① As root, run fdisk on each node to view the existing disk partition information:
node1:
[root@node1 ~]# fdisk -l
Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 2163 17374266 83 Linux
/dev/sda2 2164 2609 3582495 82 Linux swap / Solaris
Disk /dev/sdb: 524 MB, 524288000 bytes
64 heads, 32 sectors/track, 500 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/sdc: 524 MB, 524288000 bytes
64 heads, 32 sectors/track, 500 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Disk /dev/sdc doesn't contain a valid partition table
Disk /dev/sdd: 3221 MB, 3221225472 bytes
255 heads, 63 sectors/track, 391 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdd doesn't contain a valid partition table
Disk /dev/sde: 3221 MB, 3221225472 bytes
255 heads, 63 sectors/track, 391 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sde doesn't contain a valid partition table
[root@node1 ~]#
node2:
[root@node2 ~]# fdisk -l
Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 2163 17374266 83 Linux
/dev/sda2 2164 2609 3582495 82 Linux swap / Solaris
Disk /dev/sdb: 524 MB, 524288000 bytes
64 heads, 32 sectors/track, 500 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/sdc: 524 MB, 524288000 bytes
64 heads, 32 sectors/track, 500 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Disk /dev/sdc doesn't contain a valid partition table
Disk /dev/sdd: 3221 MB, 3221225472 bytes
255 heads, 63 sectors/track, 391 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdd doesn't contain a valid partition table
Disk /dev/sde: 3221 MB, 3221225472 bytes
255 heads, 63 sectors/track, 391 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sde doesn't contain a valid partition table
[root@node2 ~]#
From the output above, the partition information on the two nodes is identical: /dev/sda holds the operating system, while /dev/sdb, /dev/sdc, /dev/sdd and /dev/sde carry no partition table. These are the four shared disks configured in Section 2.13.
② As root on node1, partition /dev/sdb, /dev/sdc, /dev/sdd and /dev/sde:
[root@node1 ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-500, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-500, default 500):
Using default value 500
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@node1 ~]#
Explanation: fdisk /dev/sdb opens /dev/sdb for partitioning. The commands entered mean:
n: create a new partition;
p: choose primary partition as the partition type;
1: use 1 as the partition number;
accept the defaults for the first and last cylinders (1 and 500);
w: write the new partition table to disk.
③ Repeat step ② as root on node1 to partition the remaining three disks.
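Repeating the fdisk dialog by hand is error-prone, so the remaining disks can also be scripted. This is a minimal sketch, not part of the original walkthrough: it feeds fdisk the same keystroke sequence used above (n, p, 1, two defaults, w). The DRYRUN guard is an added safety measure so nothing is written until the device names have been double-checked.

```shell
# Partition the remaining shared disks with the same n/p/1/<default>/<default>/w
# sequence shown above. Set DRYRUN=0 only after verifying the device names.
DRYRUN=1
planned=""
for dev in /dev/sdc /dev/sdd /dev/sde; do
  planned="$planned $dev"
  if [ "$DRYRUN" = 1 ]; then
    echo "would partition $dev"
  else
    # n = new partition, p = primary, 1 = partition number,
    # two blank lines accept the default first and last cylinders, w = write
    printf 'n\np\n1\n\n\nw\n' | fdisk "$dev"
  fi
done
```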
④ After partitioning, the following can be seen on node1 and node2:
node1:
[root@node1 ~]# fdisk -l
Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 2163 17374266 83 Linux
/dev/sda2 2164 2609 3582495 82 Linux swap / Solaris
Disk /dev/sdb: 524 MB, 524288000 bytes
64 heads, 32 sectors/track, 500 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 1 500 511984 83 Linux
Disk /dev/sdc: 524 MB, 524288000 bytes
64 heads, 32 sectors/track, 500 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot Start End Blocks Id System
/dev/sdc1 1 500 511984 83 Linux
Disk /dev/sdd: 3221 MB, 3221225472 bytes
255 heads, 63 sectors/track, 391 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdd1 1 391 3140676 83 Linux
Disk /dev/sde: 3221 MB, 3221225472 bytes
255 heads, 63 sectors/track, 391 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sde1 1 391 3140676 83 Linux
[root@node1 ~]#
node2:
[root@node2 ~]# fdisk -l
Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 2163 17374266 83 Linux
/dev/sda2 2164 2609 3582495 82 Linux swap / Solaris
Disk /dev/sdb: 524 MB, 524288000 bytes
64 heads, 32 sectors/track, 500 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 1 500 511984 83 Linux
Disk /dev/sdc: 524 MB, 524288000 bytes
64 heads, 32 sectors/track, 500 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot Start End Blocks Id System
/dev/sdc1 1 500 511984 83 Linux
Disk /dev/sdd: 3221 MB, 3221225472 bytes
255 heads, 63 sectors/track, 391 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdd1 1 391 3140676 83 Linux
Disk /dev/sde: 3221 MB, 3221225472 bytes
255 heads, 63 sectors/track, 391 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sde1 1 391 3140676 83 Linux
[root@node2 ~]#
At this point, partitioning of the shared disks is complete.
2.14.2 Install the ASM RPM packages on both nodes
When installing the ASM packages, make sure they match the operating system platform and kernel version. The packages can be downloaded from the Oracle website.
Install on node1:
[root@node1 ~]# rpm -qa|grep asm
The command above shows that no asm packages are installed on node1 yet.
[root@node1 ~]# cd asm_rpm/
[root@node1 asm_rpm]# ll
total 136
-rw-r--r-- 1 root root 25977 Apr 26 11:19 oracleasm-2.6.18-194.el5-2.0.5-1.el5.x86_64.rpm
-rw-r--r-- 1 root root 14176 Apr 26 11:19 oracleasmlib-2.0.4-1.el5.x86_64.rpm
-rw-r--r-- 1 root root 89027 Apr 26 11:19 oracleasm-support-2.1.3-1.el5.x86_64.rpm
[root@node1 asm_rpm]# rpm -ivh oracleasm-support-2.1.3-1.el5.x86_64.rpm
warning: oracleasm-support-2.1.3-1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing... ########################################### [100%]
1:oracleasm-support ########################################### [100%]
[root@node1 asm_rpm]# rpm -ivh oracleasm-2.6.18-194.el5-2.0.5-1.el5.x86_64.rpm
warning: oracleasm-2.6.18-194.el5-2.0.5-1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing... ########################################### [100%]
1:oracleasm-2.6.18-194.el########################################### [100%]
[root@node1 asm_rpm]# rpm -ivh oracleasmlib-2.0.4-1.el5.x86_64.rpm
warning: oracleasmlib-2.0.4-1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing... ########################################### [100%]
1:oracleasmlib ########################################### [100%]
[root@node1 asm_rpm]# rpm -qa|grep asm
oracleasm-2.6.18-194.el5-2.0.5-1.el5
oracleasm-support-2.1.3-1.el5
oracleasmlib-2.0.4-1.el5
[root@node1 asm_rpm]#
Install on node2:
[root@node2 asm_rpm]# ll
total 136
-rw-r--r-- 1 root root 25977 Apr 26 11:20 oracleasm-2.6.18-194.el5-2.0.5-1.el5.x86_64.rpm
-rw-r--r-- 1 root root 14176 Apr 26 11:20 oracleasmlib-2.0.4-1.el5.x86_64.rpm
-rw-r--r-- 1 root root 89027 Apr 26 11:20 oracleasm-support-2.1.3-1.el5.x86_64.rpm
[root@node2 asm_rpm]# rpm -qa|grep asm
[root@node2 asm_rpm]# rpm -ivh oracleasm-support-2.1.3-1.el5.x86_64.rpm
warning: oracleasm-support-2.1.3-1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing... ########################################### [100%]
1:oracleasm-support ########################################### [100%]
[root@node2 asm_rpm]# rpm -ivh oracleasm-2.6.18-194.el5-2.0.5-1.el5.x86_64.rpm
warning: oracleasm-2.6.18-194.el5-2.0.5-1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing... ########################################### [100%]
1:oracleasm-2.6.18-194.el########################################### [100%]
[root@node2 asm_rpm]# rpm -ivh oracleasmlib-2.0.4-1.el5.x86_64.rpm
warning: oracleasmlib-2.0.4-1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing... ########################################### [100%]
1:oracleasmlib ########################################### [100%]
[root@node2 asm_rpm]# rpm -qa|grep asm
oracleasmlib-2.0.4-1.el5
oracleasm-support-2.1.3-1.el5
oracleasm-2.6.18-194.el5-2.0.5-1.el5
[root@node2 asm_rpm]#
Note: install the three ASM RPMs in order: oracleasm-support-2.1.3-1.el5 first, then oracleasm-2.6.18-194.el5-2.0.5-1.el5, and oracleasmlib-2.0.4-1.el5 last.
After installation, run rpm -qa|grep asm to confirm that all three were installed successfully.
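The install order above can be captured in a small loop. This is a sketch under the filenames used in this walkthrough; on another machine the kernel-matched oracleasm package name must agree with `uname -r`.

```shell
# Install the ASMLib RPMs in dependency order: support first, then the
# kernel-matched driver, then the library. Filenames are the ones used in
# this walkthrough; adjust the driver package to your own kernel version.
rpms="oracleasm-support-2.1.3-1.el5.x86_64.rpm
oracleasm-2.6.18-194.el5-2.0.5-1.el5.x86_64.rpm
oracleasmlib-2.0.4-1.el5.x86_64.rpm"
installed=0
for r in $rpms; do
  if [ -f "$r" ]; then
    rpm -ivh "$r" && installed=$((installed + 1))
  else
    echo "skipping $r (not found in current directory)"
  fi
done
echo "installed $installed of 3 packages"
```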
2.14.3 Configure the ASM driver service
Configure as root on node1. After the three ASM RPMs from Section 2.14.2 have been installed, the configuration can be done either with /usr/sbin/oracleasm or with /etc/init.d/oracleasm. The latter is the command used for ASM configuration in Oracle 10g; Oracle recommends the former, although the latter is retained for compatibility.
① Check the status of the ASM service:
[root@node1 ~]# /usr/sbin/oracleasm status
Checking if ASM is loaded: no
Checking if /dev/oracleasm is mounted: no
[root@node1 ~]#
As shown, the ASM service is not started by default. The available commands and their parameters can be listed with:
[root@node1 ~]# /usr/sbin/oracleasm -h
Usage: oracleasm [--exec-path=<exec_path>] <command> [ <args> ]
oracleasm --exec-path
oracleasm -h
oracleasm -V
The basic oracleasm commands are:
configure Configure the Oracle Linux ASMLib driver
init Load and initialize the ASMLib driver
exit Stop the ASMLib driver
scandisks Scan the system for Oracle ASMLib disks
status Display the status of the Oracle ASMLib driver
listdisks List known Oracle ASMLib disks
querydisk Determine if a disk belongs to Oracle ASMlib
createdisk Allocate a device for Oracle ASMLib use
deletedisk Return a device to the operating system
renamedisk Change the label of an Oracle ASMlib disk
update-driver Download the latest ASMLib driver
[root@node1 ~]#
② Configure the ASM service:
[root@node1 ~]# /usr/sbin/oracleasm configure -i
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
[root@node1 ~]# /usr/sbin/oracleasm status
Checking if ASM is loaded: no
Checking if /dev/oracleasm is mounted: no
[root@node1 ~]# /usr/sbin/oracleasm init
Loading module "oracleasm": oracleasm
Mounting ASMlib driver filesystem: /dev/oracleasm
[root@node1 ~]# /usr/sbin/oracleasm configure
ORACLEASM_ENABLED=true
ORACLEASM_UID=grid
ORACLEASM_GID=asmadmin
ORACLEASM_SCANBOOT=true
ORACLEASM_SCANORDER=""
ORACLEASM_SCANEXCLUDE=""
[root@node1 ~]#
Explanation: the /usr/sbin/oracleasm configure -i command sets the owning user of the driver interface to grid and the owning group to asmadmin, and configures the Oracle ASM library driver service to start automatically at boot.
After configuring, remember to run /usr/sbin/oracleasm init to load the oracleasm kernel module.
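The written configuration can be sanity-checked before moving on. A sketch follows; the sample text stands in for a live run of `/usr/sbin/oracleasm configure`, whose output format is shown above.

```shell
# Verify the driver configuration written by "oracleasm configure -i".
# On a live node, replace the sample with: /usr/sbin/oracleasm configure
sample='ORACLEASM_ENABLED=true
ORACLEASM_UID=grid
ORACLEASM_GID=asmadmin
ORACLEASM_SCANBOOT=true'
enabled=$(printf '%s\n' "$sample" | sed -n 's/^ORACLEASM_ENABLED=//p')
uid=$(printf '%s\n' "$sample" | sed -n 's/^ORACLEASM_UID=//p')
if [ "$enabled" = true ] && [ "$uid" = grid ]; then
  echo "ASM driver owned by $uid and enabled at boot"
fi
```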
③ Repeat step ② on node2 to complete the ASM service configuration there.
2.14.4 Configure the ASM disks
The purpose of installing the ASM RPMs and configuring the ASM driver service is to create ASM disks, which will provide the storage for the upcoming grid installation and the Oracle database.
Note: the ASM disks need to be created on only one node. Once created, they become visible on the other nodes after running /usr/sbin/oracleasm scandisks there.
Next, create the ASM disks:
① Run /usr/sbin/oracleasm createdisk to create the ASM disks:
[root@node1 ~]# /usr/sbin/oracleasm listdisks
[root@node1 ~]# /usr/sbin/oracleasm createdisk -h
Usage: oracleasm-createdisk [-l <manager>] [-v] <label> <device>
[root@node1 ~]# /usr/sbin/oracleasm createdisk VOL1 /dev/sdb1
Writing disk header: done
Instantiating disk: done
[root@node1 ~]# /usr/sbin/oracleasm createdisk VOL2 /dev/sdc1
Writing disk header: done
Instantiating disk: done
[root@node1 ~]# /usr/sbin/oracleasm createdisk VOL3 /dev/sdd1
Writing disk header: done
Instantiating disk: done
[root@node1 ~]# /usr/sbin/oracleasm createdisk VOL4 /dev/sde1
Writing disk header: done
Instantiating disk: done
[root@node1 ~]# /usr/sbin/oracleasm listdisks
VOL1
VOL2
VOL3
VOL4
[root@node1 ~]#
As shown above, four ASM disks have been created. At this point node2 cannot yet see them.
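The four createdisk calls can also be expressed as one loop. This is a sketch using the label/device pairs from this walkthrough; the block-device test keeps the loop harmless on a machine where the partitions do not exist.

```shell
# Create the four ASM disks from label:device pairs used in this walkthrough.
created=""
for pair in VOL1:/dev/sdb1 VOL2:/dev/sdc1 VOL3:/dev/sdd1 VOL4:/dev/sde1; do
  label=${pair%%:*}    # text before the colon, e.g. VOL1
  dev=${pair#*:}       # text after the colon, e.g. /dev/sdb1
  created="$created $label"
  if [ -b "$dev" ]; then
    /usr/sbin/oracleasm createdisk "$label" "$dev"
  fi
done
```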
② On node2, run /usr/sbin/oracleasm scandisks to scan for the new disks:
[root@node2 ~]# /usr/sbin/oracleasm listdisks
[root@node2 ~]# /usr/sbin/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "VOL1"
Instantiating disk "VOL2"
Instantiating disk "VOL3"
Instantiating disk "VOL4"
[root@node2 ~]# /usr/sbin/oracleasm listdisks
VOL1
VOL2
VOL3
VOL4
[root@node2 ~]#
③ How do the ASM disks map to the physical devices?
[root@node1 ~]# /usr/sbin/oracleasm querydisk /dev/sd*
Device "/dev/sda" is not marked as an ASM disk
Device "/dev/sda1" is not marked as an ASM disk
Device "/dev/sda2" is not marked as an ASM disk
Device "/dev/sdb" is not marked as an ASM disk
Device "/dev/sdb1" is marked an ASM disk with the label "VOL1"
Device "/dev/sdc" is not marked as an ASM disk
Device "/dev/sdc1" is marked an ASM disk with the label "VOL2"
Device "/dev/sdd" is not marked as an ASM disk
Device "/dev/sdd1" is marked an ASM disk with the label "VOL3"
Device "/dev/sde" is not marked as an ASM disk
Device "/dev/sde1" is marked an ASM disk with the label "VOL4"
[root@node1 ~]#
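The querydisk output can be reduced to a label-to-device table with sed. A sketch follows; the pattern matches the message format shown above, and the sample text stands in for a live run of `/usr/sbin/oracleasm querydisk /dev/sd*`.

```shell
# Reduce "oracleasm querydisk" output to "LABEL device" lines.
sample='Device "/dev/sdb" is not marked as an ASM disk
Device "/dev/sdb1" is marked an ASM disk with the label "VOL1"
Device "/dev/sdc1" is marked an ASM disk with the label "VOL2"'
# Keep only devices marked as ASM disks; print the label first, then the device.
mapping=$(printf '%s\n' "$sample" |
  sed -n 's/^Device "\(.*\)" is marked an ASM disk with the label "\(.*\)"$/\2 \1/p')
echo "$mapping"
```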
At this point, the ASM disk preparation is complete!
The installation media mentioned in Section 1.2 of the opening chapter have been obtained:
[root@node1 ~]# ls -l
total 3401724
-rw------- 1 root root 1376 Apr 20 14:05 anaconda-ks.cfg
drwxr-xr-x 2 root root 4096 Apr 26 11:19 asm_rpm
-rw-r--r-- 1 root root 51217 Apr 20 14:05 install.log
-rw-r--r-- 1 root root 4077 Apr 20 14:05 install.log.syslog
-rw-r--r-- 1 root root 1358454646 Apr 20 16:22 p10404530_112030_Linux-x86-64_1of7.zip
-rw-r--r-- 1 root root 1142195302 Apr 20 16:29 p10404530_112030_Linux-x86-64_2of7.zip
-rw-r--r-- 1 root root 979195792 Apr 20 17:07 p10404530_112030_Linux-x86-64_3of7.zip
drwxr-xr-x 2 root root 4096 Apr 24 10:17 shell
[root@node1 ~]#
Of these, p10404530_112030_Linux-x86-64_1of7.zip and p10404530_112030_Linux-x86-64_2of7.zip are the installation media for the Oracle database software, while p10404530_112030_Linux-x86-64_3of7.zip is the installation media for the GRID software.
Note: all three packages come from the MetaLink site and are the latest Oracle 11g release available at the time, 11.2.0.3.0. If you do not have a MetaLink account, you can instead download the 11.2.0.1.0 release free of charge from the Oracle website.
Unzip the three archives with the following commands:
[root@node1 ~]# unzip p10404530_112030_Linux-x86-64_1of7.zip
[root@node1 ~]# unzip p10404530_112030_Linux-x86-64_2of7.zip
[root@node1 ~]# unzip p10404530_112030_Linux-x86-64_3of7.zip
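The three unzip commands can be looped, skipping any archive that is missing. A sketch, using the 11.2.0.3 media names from this walkthrough:

```shell
# Unzip each archive only if it is present; -q keeps the output short.
n=0
for z in p10404530_112030_Linux-x86-64_1of7.zip \
         p10404530_112030_Linux-x86-64_2of7.zip \
         p10404530_112030_Linux-x86-64_3of7.zip; do
  n=$((n + 1))
  if [ -f "$z" ]; then
    unzip -q "$z"
  else
    echo "archive $n not found: $z"
  fi
done
```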
After unzipping, the listing looks like this:
[root@node1 ~]# ls -l
total 3401724
-rw------- 1 root root 1376 Apr 20 14:05 anaconda-ks.cfg
drwxr-xr-x 2 root root 4096 Apr 26 11:19 asm_rpm
drwxr-xr-x 8 root root 4096 Sep 22 2011 database
drwxr-xr-x 8 root root 4096 Sep 22 2011 grid
-rw-r--r-- 1 root root 51217 Apr 20 14:05 install.log
-rw-r--r-- 1 root root 4077 Apr 20 14:05 install.log.syslog
-rw-r--r-- 1 root root 1358454646 Apr 20 16:22 p10404530_112030_Linux-x86-64_1of7.zip
-rw-r--r-- 1 root root 1142195302 Apr 20 16:29 p10404530_112030_Linux-x86-64_2of7.zip
-rw-r--r-- 1 root root 979195792 Apr 20 17:07 p10404530_112030_Linux-x86-64_3of7.zip
drwxr-xr-x 2 root root 4096 Apr 24 10:17 shell
[root@node1 ~]# du -sh database/
2.5G database/
[root@node1 ~]# du -sh grid/
1.1G grid/
[root@node1 ~]#
As shown, the database installation files take up 2.5 GB and the GRID installation files 1.1 GB.
To make the upcoming installations easier, move them into the home directories of the oracle and grid users respectively:
[root@node1 ~]# mv database/ /home/oracle/
[root@node1 ~]# mv grid/ /home/grid/
[root@node1 ~]#
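The trees were unpacked and moved as root, so they remain owned by root, as the grid-user listing further below shows. Handing them over to the installation owners avoids permission problems when runInstaller is started later. This is a sketch; the oinstall group name is assumed from the earlier OS user setup.

```shell
# Give each unpacked tree to its installation owner. The directory test
# keeps the loop harmless if a tree has not been moved yet.
for spec in "oracle:oinstall:/home/oracle/database" "grid:oinstall:/home/grid/grid"; do
  owner=${spec%:*}     # user:group, e.g. grid:oinstall
  dir=${spec##*:}      # target directory, e.g. /home/grid/grid
  if [ -d "$dir" ]; then
    chown -R "$owner" "$dir"
  fi
done
```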
Before installing GRID, it is advisable to check the CRS pre-installation environment with the CVU (Cluster Verification Utility).
① Use the CVU to check the CRS pre-installation environment:
[root@node1 ~]# su – grid
node1-> pwd
/home/grid
node1-> ls
Desktop grid
node1-> cd grid/
node1-> ll
total 72
drwxr-xr-x 9 root root 4096 Sep 22 2011 doc
drwxr-xr-x 4 root root 4096 Sep 22 2011 install
-rwxr-xr-x 1 root root 28122 Sep 22 2011 readme.html
drwxr-xr-x 2 root root 4096 Sep 22 2011 response
drwxr-xr-x 2 root root 4096 Sep 22 2011 rpm
-rwxr-xr-x 1 root root 4878 Sep 22 2011 runcluvfy.sh
-rwxr-xr-x 1 root root 3227 Sep 22 2011 runInstaller
drwxr-xr-x 2 root root 4096 Sep 22 2011 sshsetup
drwxr-xr-x 14 root root 4096 Sep 22 2011 stage
-rwxr-xr-x 1 root root 4326 Sep 2 2011 welcome.html
node1-> ./runcluvfy.sh stage -pre crsinst -n node1,node2 -fixup -verbose
Performing pre-checks for cluster services setup
Checking node reachability...
Check: Node reachability from node "node1"
Destination Node Reachable?
------------------------------------ ------------------------
node1 yes
node2 yes
Result: Node reachability check passed from node "node1"
Checking user equivalence...
Check: User equivalence for user "grid"
Node Name Status
------------------------------------ ------------------------
node2 passed
node1 passed
Result: User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Node Name Status
------------------------------------ ------------------------
node2 passed
node1 passed
Verification of the hosts config file successful
Interface information for node "node2"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth0 172.16.0.192 172.16.0.0 0.0.0.0 172.16.15.254 00:0C:29:00:42:89 1500
eth1 192.168.94.12 192.168.94.0 0.0.0.0 172.16.15.254 00:0C:29:00:42:93 1500
Interface information for node "node1"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth0 172.16.0.191 172.16.0.0 0.0.0.0 172.16.15.254 00:0C:29:A2:AE:1F 1500
eth1 192.168.94.11 192.168.94.0 0.0.0.0 172.16.15.254 00:0C:29:A2:AE:29 1500
Check: Node connectivity of subnet "172.16.0.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
node2[172.16.0.192] node1[172.16.0.191] yes
Result: Node connectivity passed for subnet "172.16.0.0" with node(s) node2,node1
Check: TCP connectivity of subnet "172.16.0.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
node1:172.16.0.191 node2:172.16.0.192 passed
Result: TCP connectivity check passed for subnet "172.16.0.0"
Check: Node connectivity of subnet "192.168.94.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
node2[192.168.94.12] node1[192.168.94.11] yes
Result: Node connectivity passed for subnet "192.168.94.0" with node(s) node2,node1
Check: TCP connectivity of subnet "192.168.94.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
node1:192.168.94.11 node2:192.168.94.12 passed
Result: TCP connectivity check passed for subnet "192.168.94.0"
Interfaces found on subnet "172.16.0.0" that are likely candidates for VIP are:
node2 eth0:172.16.0.192
node1 eth0:172.16.0.191
Interfaces found on subnet "192.168.94.0" that are likely candidates for a private interconnect are:
node2 eth1:192.168.94.12
node1 eth1:192.168.94.11
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "172.16.0.0".
Subnet mask consistency check passed for subnet "192.168.94.0".
Subnet mask consistency check passed.
Result: Node connectivity check passed
Checking multicast communication...
Checking subnet "172.16.0.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "172.16.0.0" for multicast communication with multicast group "230.0.1.0" passed.
Checking subnet "192.168.94.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.94.0" for multicast communication with multicast group "230.0.1.0" passed.
Check of multicast communication passed.
Checking ASMLib configuration.
Node Name Status
------------------------------------ ------------------------
node2 passed
node1 passed
Result: Check for ASMLib configuration passed.
Check: Total memory
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 1.9641GB (2059516.0KB) 1.5GB (1572864.0KB) passed
node1 1.9641GB (2059516.0KB) 1.5GB (1572864.0KB) passed
Result: Total memory check passed
Check: Available memory
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 1.8744GB (1965456.0KB) 50MB (51200.0KB) passed
node1 1.7501GB (1835088.0KB) 50MB (51200.0KB) passed
Result: Available memory check passed
Check: Swap space
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 3.4165GB (3582484.0KB) 2.9462GB (3089274.0KB) passed
node1 3.4165GB (3582484.0KB) 2.9462GB (3089274.0KB) passed
Result: Swap space check passed
Check: Free disk space for "node2:/tmp"
Path Node Name Mount point Available Required Status
---------------- ------------ ------------ ------------ ------------ ------------
/tmp node2 / 13.0361GB 1GB passed
Result: Free disk space check passed for "node2:/tmp"
Check: Free disk space for "node1:/tmp"
Path Node Name Mount point Available Required Status
---------------- ------------ ------------ ------------ ------------ ------------
/tmp node1 / 5.874GB 1GB passed
Result: Free disk space check passed for "node1:/tmp"
Check: User existence for "grid"
Node Name Status Comment
------------ ------------------------ ------------------------
node2 passed exists(1100)
node1 passed exists(1100)
Checking for multiple users with UID value 1100
Result: Check for multiple users with UID value 1100 passed
Result: User existence check passed for "grid"
Check: Group existence for "oinstall"
Node Name Status Comment
------------ ------------------------ ------------------------
node2 passed exists
node1 passed exists
Result: Group existence check passed for "oinstall"
Check: Group existence for "dba"
Node Name Status Comment
------------ ------------------------ ------------------------
node2 passed exists
node1 passed exists
Result: Group existence check passed for "dba"
Check: Membership of user "grid" in group "oinstall" [as Primary]
Node Name User Exists Group Exists User in Group Primary Status
---------------- ------------ ------------ ------------ ------------ ------------
node2 yes yes yes yes passed
node1 yes yes yes yes passed
Result: Membership check for user "grid" in group "oinstall" [as Primary] passed
Check: Membership of user "grid" in group "dba"
Node Name User Exists Group Exists User in Group Status
---------------- ------------ ------------ ------------ ----------------
node2 yes yes no failed
node1 yes yes no failed
Result: Membership check for user "grid" in group "dba" failed
Check: Run level
Node Name run level Required Status
------------ ------------------------ ------------------------ ----------
node2 5 3,5 passed
node1 5 3,5 passed
Result: Run level check passed
Check: Hard limits for "maximum open file descriptors"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
node2 hard 65536 65536 passed
node1 hard 65536 65536 passed
Result: Hard limits check passed for "maximum open file descriptors"
Check: Soft limits for "maximum open file descriptors"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
node2 soft 1024 1024 passed
node1 soft 1024 1024 passed
Result: Soft limits check passed for "maximum open file descriptors"
Check: Hard limits for "maximum user processes"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
node2 hard 16384 16384 passed
node1 hard 16384 16384 passed
Result: Hard limits check passed for "maximum user processes"
Check: Soft limits for "maximum user processes"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
node2 soft 2047 2047 passed
node1 soft 2047 2047 passed
Result: Soft limits check passed for "maximum user processes"
Check: System architecture
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 x86_64 x86_64 passed
node1 x86_64 x86_64 passed
Result: System architecture check passed
Check: Kernel version
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 2.6.18-194.el5 2.6.18 passed
node1 2.6.18-194.el5 2.6.18 passed
Result: Kernel version check passed
Check: Kernel parameter for "semmsl"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
node2 250 250 250 passed
node1 250 250 250 passed
Result: Kernel parameter check passed for "semmsl"
Check: Kernel parameter for "semmns"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
node2 32000 32000 32000 passed
node1 32000 32000 32000 passed
Result: Kernel parameter check passed for "semmns"
Check: Kernel parameter for "semopm"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
node2 100 100 100 passed
node1 100 100 100 passed
Result: Kernel parameter check passed for "semopm"
Check: Kernel parameter for "semmni"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
node2 128 128 128 passed
node1 128 128 128 passed
Result: Kernel parameter check passed for "semmni"
Check: Kernel parameter for "shmmax"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
node2 1054472192 1054472192 1054472192 passed
node1 1054472192 1054472192 1054472192 passed
Result: Kernel parameter check passed for "shmmax"
Check: Kernel parameter for "shmmni"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
node2 4096 4096 4096 passed
node1 4096 4096 4096 passed
Result: Kernel parameter check passed for "shmmni"
Check: Kernel parameter for "shmall"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
node2 2097152 2097152 2097152 passed
node1 2097152 2097152 2097152 passed
Result: Kernel parameter check passed for "shmall"
Check: Kernel parameter for "file-max"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
node2 6815744 6815744 6815744 passed
node1 6815744 6815744 6815744 passed
Result: Kernel parameter check passed for "file-max"
Check: Kernel parameter for "ip_local_port_range"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
node2 between 9000.0 & 65500.0 between 9000.0 & 65500.0 between 9000.0 & 65500.0 passed
node1 between 9000.0 & 65500.0 between 9000.0 & 65500.0 between 9000.0 & 65500.0 passed
Result: Kernel parameter check passed for "ip_local_port_range"
Check: Kernel parameter for "rmem_default"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
node2 262144 262144 262144 passed
node1 262144 262144 262144 passed
Result: Kernel parameter check passed for "rmem_default"
Check: Kernel parameter for "rmem_max"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
node2 4194304 4194304 4194304 passed
node1 4194304 4194304 4194304 passed
Result: Kernel parameter check passed for "rmem_max"
Check: Kernel parameter for "wmem_default"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
node2 262144 262144 262144 passed
node1 262144 262144 262144 passed
Result: Kernel parameter check passed for "wmem_default"
Check: Kernel parameter for "wmem_max"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
node2 1048586 1048586 1048576 passed
node1 1048586 1048586 1048576 passed
Result: Kernel parameter check passed for "wmem_max"
Check: Kernel parameter for "aio-max-nr"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
node2 1048576 1048576 1048576 passed
node1 1048576 1048576 1048576 passed
Result: Kernel parameter check passed for "aio-max-nr"
Check: Package existence for "make"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 make-3.81-3.el5 make-3.81 passed
node1 make-3.81-3.el5 make-3.81 passed
Result: Package existence check passed for "make"
Check: Package existence for "binutils"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 binutils-2.17.50.0.6-14.el5 binutils-2.17.50.0.6 passed
node1 binutils-2.17.50.0.6-14.el5 binutils-2.17.50.0.6 passed
Result: Package existence check passed for "binutils"
Check: Package existence for "gcc(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 gcc(x86_64)-4.1.2-48.el5 gcc(x86_64)-4.1.2 passed
node1 gcc(x86_64)-4.1.2-48.el5 gcc(x86_64)-4.1.2 passed
Result: Package existence check passed for "gcc(x86_64)"
Check: Package existence for "libaio(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 libaio(x86_64)-0.3.106-5 libaio(x86_64)-0.3.106 passed
node1 libaio(x86_64)-0.3.106-5 libaio(x86_64)-0.3.106 passed
Result: Package existence check passed for "libaio(x86_64)"
Check: Package existence for "glibc(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 glibc(x86_64)-2.5-49 glibc(x86_64)-2.5-24 passed
node1 glibc(x86_64)-2.5-49 glibc(x86_64)-2.5-24 passed
Result: Package existence check passed for "glibc(x86_64)"
Check: Package existence for "compat-libstdc++-33(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 compat-libstdc++-33(x86_64)-3.2.3-61 compat-libstdc++-33(x86_64)-3.2.3 passed
node1 compat-libstdc++-33(x86_64)-3.2.3-61 compat-libstdc++-33(x86_64)-3.2.3 passed
Result: Package existence check passed for "compat-libstdc++-33(x86_64)"
Check: Package existence for "elfutils-libelf(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 elfutils-libelf(x86_64)-0.137-3.el5 elfutils-libelf(x86_64)-0.125 passed
node1 elfutils-libelf(x86_64)-0.137-3.el5 elfutils-libelf(x86_64)-0.125 passed
Result: Package existence check passed for "elfutils-libelf(x86_64)"
Check: Package existence for "elfutils-libelf-devel"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 elfutils-libelf-devel-0.137-3.el5 elfutils-libelf-devel-0.125 passed
node1 elfutils-libelf-devel-0.137-3.el5 elfutils-libelf-devel-0.125 passed
Result: Package existence check passed for "elfutils-libelf-devel"
Check: Package existence for "glibc-common"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 glibc-common-2.5-49 glibc-common-2.5 passed
node1 glibc-common-2.5-49 glibc-common-2.5 passed
Result: Package existence check passed for "glibc-common"
Check: Package existence for "glibc-devel(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 glibc-devel(x86_64)-2.5-49 glibc-devel(x86_64)-2.5 passed
node1 glibc-devel(x86_64)-2.5-49 glibc-devel(x86_64)-2.5 passed
Result: Package existence check passed for "glibc-devel(x86_64)"
Check: Package existence for "glibc-headers"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 glibc-headers-2.5-49 glibc-headers-2.5 passed
node1 glibc-headers-2.5-49 glibc-headers-2.5 passed
Result: Package existence check passed for "glibc-headers"
Check: Package existence for "gcc-c++(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 gcc-c++(x86_64)-4.1.2-48.el5 gcc-c++(x86_64)-4.1.2 passed
node1 gcc-c++(x86_64)-4.1.2-48.el5 gcc-c++(x86_64)-4.1.2 passed
Result: Package existence check passed for "gcc-c++(x86_64)"
Check: Package existence for "libaio-devel(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 libaio-devel(x86_64)-0.3.106-5 libaio-devel(x86_64)-0.3.106 passed
node1 libaio-devel(x86_64)-0.3.106-5 libaio-devel(x86_64)-0.3.106 passed
Result: Package existence check passed for "libaio-devel(x86_64)"
Check: Package existence for "libgcc(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 libgcc(x86_64)-4.1.2-48.el5 libgcc(x86_64)-4.1.2 passed
node1 libgcc(x86_64)-4.1.2-48.el5 libgcc(x86_64)-4.1.2 passed
Result: Package existence check passed for "libgcc(x86_64)"
Check: Package existence for "libstdc++(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 libstdc++(x86_64)-4.1.2-48.el5 libstdc++(x86_64)-4.1.2 passed
node1 libstdc++(x86_64)-4.1.2-48.el5 libstdc++(x86_64)-4.1.2 passed
Result: Package existence check passed for "libstdc++(x86_64)"
Check: Package existence for "libstdc++-devel(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 libstdc++-devel(x86_64)-4.1.2-48.el5 libstdc++-devel(x86_64)-4.1.2 passed
node1 libstdc++-devel(x86_64)-4.1.2-48.el5 libstdc++-devel(x86_64)-4.1.2 passed
Result: Package existence check passed for "libstdc++-devel(x86_64)"
Check: Package existence for "sysstat"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 sysstat-7.0.2-3.el5 sysstat-7.0.2 passed
node1 sysstat-7.0.2-3.el5 sysstat-7.0.2 passed
Result: Package existence check passed for "sysstat"
Check: Package existence for "ksh"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
node2 ksh-20100202-1.el5 ksh-20060214 passed
node1 ksh-20100202-1.el5 ksh-20060214 passed
Result: Package existence check passed for "ksh"
Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed
Check: Current group ID
Result: Current group ID check passed
Starting check for consistency of primary group of root user
Node Name Status
———————————— ————————
node2 passed
node1 passed
Check for consistency of root user's primary group passed
Starting Clock synchronization checks using Network Time Protocol(NTP)…
NTP Configuration file check started…
Network Time Protocol(NTP) configuration file not found on any of the nodes. Oracle Cluster Time
Synchronization Service(CTSS) can be used instead of NTP for time synchronization on the cluster nodes
No NTP Daemons or Services were found to be running
Result: Clock synchronization check using Network Time Protocol(NTP) passed
Checking Core file name pattern consistency…
Core file name pattern consistency check passed.
Checking to make sure user "grid" is not in "root" group
Node Name Status Comment
———— ———————— ————————
node2 passed does not exist
node1 passed does not exist
Result: User "grid" is not part of "root" group. Check passed
Check default user file creation mask
Node Name Available Required Comment
———— ———————— ———————— ———-
node2 0022 0022 passed
node1 0022 0022 passed
Result: Default user file creation mask check passed
Checking consistency of file "/etc/resolv.conf" across nodes
Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined
File "/etc/resolv.conf" does not have both domain and search entries defined
Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes…
domain entry in file "/etc/resolv.conf" is consistent across nodes
Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes…
search entry in file "/etc/resolv.conf" is consistent across nodes
Checking file "/etc/resolv.conf" to make sure that only one search entry is defined
All nodes have one search entry defined in file "/etc/resolv.conf"
Checking all nodes to make sure that search entry is "localdomain" as found on node "node2"
All nodes of the cluster have same value for 'search'
Checking DNS response time for an unreachable node
Node Name Status
———————————— ————————
node2 passed
node1 passed
The DNS response time for an unreachable node is within acceptable limit on all nodes
File "/etc/resolv.conf" is consistent across nodes
Check: Time zone consistency
Result: Time zone consistency check passed
Fixup information has been generated for following node(s):
node2,node1
Please run the following script on each node as "root" user to execute the fixups:
'/tmp/CVU_11.2.0.3.0_grid/runfixup.sh'
Pre-check for cluster services setup was unsuccessful on all the nodes.
node1->
The pre-check above was reported as unsuccessful. The actual cause is that the grid user does not belong to the dba group. Fortunately, Oracle generates a fixup script for us automatically: as prompted above, run the /tmp/CVU_11.2.0.3.0_grid/runfixup.sh script as the root user on each of the two nodes to repair this.
node1:
[root@node1 ~]# id grid
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper)
[root@node1 ~]#
As you can see, grid indeed does not belong to the dba group. Run the script to fix it:
[root@node1 ~]# sh /tmp/CVU_11.2.0.3.0_grid/runfixup.sh
Response file being used is :/tmp/CVU_11.2.0.3.0_grid/fixup.response
Enable file being used is :/tmp/CVU_11.2.0.3.0_grid/fixup.enable
Log file location: /tmp/CVU_11.2.0.3.0_grid/orarun.log
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper)
[root@node1 ~]# id grid
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper),1300(dba)
[root@node1 ~]#
Likewise, run the same script on node2:
[root@node2 ~]# sh /tmp/CVU_11.2.0.3.0_grid/runfixup.sh
Response file being used is :/tmp/CVU_11.2.0.3.0_grid/fixup.response
Enable file being used is :/tmp/CVU_11.2.0.3.0_grid/fixup.enable
Log file location: /tmp/CVU_11.2.0.3.0_grid/orarun.log
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper)
[root@node2 ~]# id grid
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper),1300(dba)
[root@node2 ~]#
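Before rerunning cluvfy, it can be handy to confirm the group change on each node with a quick one-liner instead of eyeballing the `id` output. The helper below is my own sketch (the function name `check_in_group` is not part of the original walkthrough); it parses `id -nG`, which prints a user's group names separated by spaces:

```shell
#!/bin/sh
# check_in_group USER GROUP: exit 0 if USER's group list contains GROUP.
check_in_group() {
    id -nG "$1" 2>/dev/null | tr ' ' '\n' | grep -qx "$2"
}

# Example: verify grid is now in dba, as the fixup script should ensure.
if check_in_group grid dba; then
    echo "grid is in dba"
else
    echo "grid is NOT in dba"
fi
```

Run it on both node1 and node2; both should report that grid is in dba before you rerun the pre-check.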
After running the fixup script on both nodes, rerun the pre-check:
node1-> ./runcluvfy.sh stage -pre crsinst -n node1,node2 -fixup -verbose
…
…
…
Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined
File "/etc/resolv.conf" does not have both domain and search entries defined
Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes…
domain entry in file "/etc/resolv.conf" is consistent across nodes
Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes…
search entry in file "/etc/resolv.conf" is consistent across nodes
Checking file "/etc/resolv.conf" to make sure that only one search entry is defined
All nodes have one search entry defined in file "/etc/resolv.conf"
Checking all nodes to make sure that search entry is "localdomain" as found on node "node2"
All nodes of the cluster have same value for 'search'
Checking DNS response time for an unreachable node
Node Name Status
———————————— ————————
node2 passed
node1 passed
The DNS response time for an unreachable node is within acceptable limit on all nodes
File "/etc/resolv.conf" is consistent across nodes
Check: Time zone consistency
Result: Time zone consistency check passed
Pre-check for cluster services setup was successful.
node1->
At this point, our installation environment is fully prepared!
Comments (9)
jason | July 7, 2012
There is no section 2.13?
Asher | July 7, 2012
To jason:
Thanks for your careful reading and for catching my omission; it has been added to the previous article in this series (part 3):
http://www.oracleonlinux.cn/2012/06/step-by-step-install-11gr2-rac-on-linux-3/
lkl_1981 | May 8, 2013
Hello,
While following your installation steps, asm scandisks reported errors:
[root@node2 ~]# /usr/sbin/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks…
Scanning system for ASM disks…
Instantiating disk "VOL1"
Unable to instantiate disk "VOL1"
Instantiating disk "VOL2"
Unable to instantiate disk "VOL2"
Instantiating disk "VOL3"
Unable to instantiate disk "VOL3"
Instantiating disk "VOL4"
Unable to instantiate disk "VOL4"
What could be causing this?
I am using VirtualBox, and the partitioning all looks fine.
admin | May 8, 2013
Did it succeed on node1?
For RAC, the disks must be shared.
Also, strictly follow the steps for creating ASM disks.
If it still fails, run deletedisk and then try creating them once more?
lkl_1981 | May 9, 2013
Thanks, I found the cause: I had not run /usr/sbin/oracleasm init on node2.
That command loads the kernel module for ASM support, right?
admin | May 9, 2013
Yes. When building a cluster, keep the procedure clear in your mind and do not skip any important step.
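For reference, the sequence discussed in this exchange looks roughly like the following on the second node (a sketch, run as root, and only after the ASM disks have already been created from node1; paths match the oracleasm invocations shown above):

```shell
# On node2, as root: load the oracleasm kernel module and mount its filesystem
/usr/sbin/oracleasm init

# Rescan the shared disks so node2 picks up the ASM labels written on node1
/usr/sbin/oracleasm scandisks

# List the disks node2 now recognizes; it should show VOL1 through VOL4
/usr/sbin/oracleasm listdisks
```

Skipping the init step is exactly what produced the "Unable to instantiate disk" errors in the comment above.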
aiker | May 12, 2013
Just here to learn.
bolangfeng | December 25, 2015
Hello Mr. Huang, a quick question. I used node1 as the DNS server, configured it following "2.3.1 Configuring the DNS Server", and the tests in "2.3.2 Testing that the DNS Server Resolves the SCAN IP" passed on both node1 and node2. However, in "Using CVU to Check the Pre-install Environment for CRS", the "Checking DNS response time for an unreachable node" step passed on node1 but failed on node2. The details are as follows:
Checking DNS response time for an unreachable node
Node Name Status
———————————— ————————
node2 failed
node1 passed
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: node2
File "/etc/resolv.conf" is not consistent across nodes
Check: Time zone consistency
Result: Time zone consistency check passed
How can this be resolved? Thanks.
All software versions match those used in this article.
admin | December 26, 2015
@bolangfeng
Hello, I think this failure can be ignored in your setup. The DNS configuration files on node1 and node2 are different to begin with: one points to itself, the other points to the other node. Set it aside for now and continue with the subsequent steps.
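For readers who would rather clear the PRVF-5636 warning than ignore it, one common approach (my own suggestion, not something verified in the original post) is to keep /etc/resolv.conf identical on both nodes and bound the resolver's timeout so queries to an unreachable server fail fast. The nameserver IP below is illustrative only:

```
# /etc/resolv.conf (keep identical on node1 and node2)
search localdomain
# node1 acts as the DNS server in this walkthrough; use its real IP here
nameserver 192.168.1.101
# fail fast so CVU's 15000 ms response-time limit is not exceeded
options timeout:2 attempts:2
```

The `options timeout` and `attempts` keywords are standard glibc resolver settings (see resolv.conf(5)); with both files identical, the "not consistent across nodes" message also goes away.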