Resolving an ORA-00600 [kcratr_nab_less_than_odr] Failure

       Around noon I received a support request for a customer's production database: the database had hit an ORA-00600 error, could not be started, and the business was affected.

       Scenario: this is a customer I had served before, running an 11gR2 single-instance database on a Linux server; the failure was caused by an unexpected power outage on the server. What makes it memorable is that on Valentine's Day, February 14, 2011, this same database had already hit a different ORA-00600 error. Unfortunately, today it ran into ORA-600 again.

        The handling process is recorded below:

        1 ORA-600 raised when opening the database

[oracle@os5 ~]$ uname -rm
2.6.18-128.el5PAE i686
[oracle@os5 ~]$ ps -ef|grep asm_
oracle    9099  9068  0 10:54 pts/8    00:00:00 grep asm_
[oracle@os5 ~]$ ps -ef|grep ora_
oracle    4797     1  0 10:52 ?        00:00:00 ora_pmon_database
oracle    4799     1  0 10:52 ?        00:00:00 ora_vktm_database
oracle    4803     1  0 10:52 ?        00:00:00 ora_gen0_database
oracle    4805     1  0 10:52 ?        00:00:00 ora_diag_database
oracle    4807     1  0 10:52 ?        00:00:00 ora_dbrm_database
oracle    4809     1  0 10:52 ?        00:00:00 ora_psp0_database
oracle    4811     1  0 10:52 ?        00:00:01 ora_dia0_database
oracle    4813     1  0 10:52 ?        00:00:00 ora_mman_database
oracle    4815     1  0 10:52 ?        00:00:00 ora_dbw0_database
oracle    4817     1  0 10:52 ?        00:00:00 ora_lgwr_database
oracle    4819     1  0 10:52 ?        00:00:00 ora_ckpt_database
oracle    4821     1  0 10:52 ?        00:00:00 ora_smon_database
oracle    4823     1  0 10:52 ?        00:00:00 ora_reco_database
oracle    4825     1  0 10:52 ?        00:00:00 ora_mmon_database
oracle    4827     1  0 10:52 ?        00:00:00 ora_mmnl_database
oracle    4829     1  0 10:52 ?        00:00:00 ora_d000_database
oracle    4831     1  0 10:52 ?        00:00:00 ora_s000_database
oracle    9105  9068  0 11:55 pts/8    00:00:00 grep ora_ 
[oracle@os5 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Wed Nov 21 10:54:16 2012

Copyright (c) 1982, 2009, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options


SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-00600: internal error code, arguments: [kcratr_nab_less_than_odr], [1],
[3464], [12432], [12534], [], [], [], [], [], [], []


SQL> select open_mode from v$database;

OPEN_MODE
--------------------
MOUNTED

SQL>

          As shown above, the database was stuck in MOUNT state and could not be opened. Before touching anything else, I took a fresh RMAN full backup and manually copied the online redo log files to preserve the scene.
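A minimal sketch of the kind of full backup and redo-log copy taken at this point (the /backup destination is illustrative, not from the original post):

[oracle@os5 ~]$ rman target /
RMAN> backup database include current controlfile format '/backup/full_%U.bkp';
RMAN> exit
[oracle@os5 ~]$ cp /oracle/ora11gR2/oradata/database/redo*.log /backup/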

         2 Next, attempting to open with RESETLOGS failed as well, and RECOVER also returned errors

SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-01139: RESETLOGS option only valid after an incomplete database recovery


SQL> recover database;
ORA-00283: recovery session canceled due to errors
ORA-00264: no recovery required


SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-00600: internal error code, arguments: [kcratr_nab_less_than_odr], [1], [3464], [12432], [12534], [], [], [], [], [], [], []


SQL>

        3 Next, let's see what useful information the alert log recorded:

Wed Nov 21 10:59:16 2012
alter database open
Beginning crash recovery of 1 threads
 parallel recovery started with 2 processes
Started redo scan
Completed redo scan
 read 1642 KB redo, 0 data blocks need recovery
Errors in file /oracle/ora11gR2/diag/rdbms/database/database/trace/database_ora_4848.trc  (incident=129790):
ORA-00600: internal error code, arguments: [kcratr_nab_less_than_odr], [1], [3464], [12432], [12534], [], [], [], [], [], [], []
Incident details in: /oracle/ora11gR2/diag/rdbms/database/database/incident/incdir_129790/database_ora_4848_i129790.trc
Aborting crash recovery due to error 600
Errors in file /oracle/ora11gR2/diag/rdbms/database/database/trace/database_ora_4848.trc:
ORA-00600: internal error code, arguments: [kcratr_nab_less_than_odr], [1], [3464], [12432], [12534], [], [], [], [], [], [], []
Errors in file /oracle/ora11gR2/diag/rdbms/database/database/trace/database_ora_4848.trc:
ORA-00600: internal error code, arguments: [kcratr_nab_less_than_odr], [1], [3464], [12432], [12534], [], [], [], [], [], [], []
ORA-600 signalled during: alter database open...
Wed Nov 21 10:59:17 2012
Sweep [inc][129790]: completed
Sweep [inc2][129790]: completed
Wed Nov 21 10:59:17 2012
Trace dumping is performing id=[cdmp_20121121105917]
Wed Nov 21 11:07:50 2012
db_recovery_file_dest_size of 3852 MB is 0.00% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.

        The alert log keeps showing ORA-00600: internal error code, arguments: [kcratr_nab_less_than_odr], [1], [3464], [12432], [12534], [], [], [], [], [], [], [], along with pointers such as Errors in file /oracle/ora11gR2/diag/rdbms/database/database/trace/database_ora_4848.trc.

        4 So let's look at that trace file:

*** 2012-11-21 10:59:16.123
Successfully allocated 2 recovery slaves
Using 66 overflow buffers per recovery slave
Thread 1 checkpoint: logseq 3464, block 4945, scn 203336710
  cache-low rba: logseq 3464, block 9148
    on-disk rba: logseq 3464, block 12534, scn 203340512
  start recovery at logseq 3464, block 9148, scn 0

*** 2012-11-21 10:59:16.144
Started writing zeroblks thread 1 seq 3464 blocks 12432-12439

*** 2012-11-21 10:59:16.144
Completed writing zeroblks thread 1 seq 3464
==== Redo read statistics for thread 1 ====
Total physical reads (from disk and memory): 4096Kb
-- Redo read_disk statistics --
Read rate (ASYNC): 1642Kb in 0.02s => 80.18 Mb/sec
Longest record: 14Kb, moves: 0/1953 (0%)
Change moves: 22/192 (11%), moved: 0Mb
Longest LWN: 445Kb, moves: 0/733 (0%), moved: 0Mb
Last redo scn: 0x0000.0c1eba6c (203340396)
----------------------------------------------
----- Recovery Hash Table Statistics ---------
Hash table buckets = 32768
Longest hash chain = 0
Average hash chain = 0/0 = 0.0
Max compares per lookup = 0
Avg compares per lookup = 0/0 = 0.0
----------------------------------------------
WARNING! Crash recovery of thread 1 seq 3464 is
ending at redo block 12432 but should not have ended before
redo block 12534
Incident 129790 created, dump file: /oracle/ora11gR2/diag/rdbms/database/database/incident/incdir_129790/database_ora_4848_i129790.trc
ORA-00600: internal error code, arguments: [kcratr_nab_less_than_odr], [1], [3464], [12432], [12534], [], [], [], [], [], [], []

ORA-00600: internal error code, arguments: [kcratr_nab_less_than_odr], [1], [3464], [12432], [12534], [], [], [], [], [], [], []
ORA-00600: internal error code, arguments: [kcratr_nab_less_than_odr], [1], [3464], [12432], [12534], [], [], [], [], [], [], []

        5 Putting together the alert-log error ORA-00600: internal error code, arguments: [kcratr_nab_less_than_odr], [1], [3464], [12432], [12534], the trace warning "WARNING! Crash recovery of thread 1 seq 3464 is ending at redo block 12432 but should not have ended before redo block 12534", and the guidance in MetaLink note 1299564.1, the cause should be this: the abnormal power loss made LGWR's write to the online redo log fail, so on the next startup the instance needs crash recovery, yet the required redo can no longer be read back from the online log file because that last write never completed.

          So what do the arguments [1], [3464], [12432], [12534] in the ORA-00600 mean? A similar case found on EYGLE's site, applied to this scenario, shows that instance recovery needs to apply the online log with sequence 3464 up to redo block 12534, but only blocks up to 12432 are actually available, so the database cannot open. The current log information does confirm that the current group is at sequence 3464 (the mapping from log group to file is sketched after the output below):

SQL> alter session set nls_date_format='yyyy/mm/dd hh24:mi:ss';

Session altered.

SQL> select * from v$log;

    GROUP#    THREAD#  SEQUENCE#      BYTES  BLOCKSIZE    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIME          NEXT_CHANGE# NEXT_TIME
---------- ---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- ------------------- ------------ -------------------
         1          1       3463   52428800        512          1 YES INACTIVE             203285629 2012/11/21 03:00:32    203335249 2012/11/21 08:59:46
         3          1       3462   52428800        512          1 YES INACTIVE             203238520 2012/11/20 21:00:52    203285629 2012/11/21 03:00:32
         2          1       3464   52428800        512          1 NO  CURRENT              203335249 2012/11/21 08:59:46   2.8147E+14

SQL> 
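To see which online log file holds sequence 3464 (the file that is fed to recovery in step 6 below), the log groups can be mapped to their members; a sketch:

SQL> select l.group#, l.sequence#, l.status, f.member
  2    from v$log l, v$logfile f
  3   where l.group# = f.group#;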

        6 Recover the database following the procedure in MetaLink note 1299564.1:

       

SQL> recover database until cancel using backup controlfile;
ORA-00279: change 203360397 generated at 11/21/2012 09:21:51 needed for thread 1
ORA-00289: suggestion : /home/oracle/archive_no_delete/ARC_743097220_0000003464_1.arc
ORA-00280: change 203360397 for thread 1 is in sequence #3464


Specify log: {=suggested | filename | AUTO | CANCEL}
/oracle/ora11gR2/oradata/database/redo02.log
Log applied.
Media recovery complete.
SQL> alter database open ;
alter database open
*
ERROR at line 1:
ORA-01589: must use RESETLOGS or NORESETLOGS option for database open


SQL>  alter database open resetlogs;

Database altered.

SQL> 

          At this point the database was successfully recovered and opened. After the recovery, the database was backed up once more. The alert log shows:

Wed Nov 21 12:43:44 2012
ALTER DATABASE RECOVER    LOGFILE '/oracle/ora11gR2/oradata/database/redo02.log'  
Media Recovery Log /oracle/ora11gR2/oradata/database/redo02.log
Wed Nov 21 12:43:44 2012
Incomplete recovery applied all redo ever generated.
Recovery completed through change 203360398 time 11/21/2012 09:21:51
Media Recovery Complete (database)
Completed: ALTER DATABASE RECOVER    LOGFILE '/oracle/ora11gR2/oradata/database/redo02.log'  
Wed Nov 21 12:43:55 2012
alter database open 
Errors in file /oracle/ora11gR2/diag/rdbms/database/database/trace/database_ora_10698.trc:
ORA-01589: must use RESETLOGS or NORESETLOGS option for database open
ORA-1589 signalled during: alter database open ...
Wed Nov 21 12:44:37 2012
 alter database open resetlogs
Archived Log entry 3382 added for thread 1 sequence 3463 ID 0xd9842c95 dest 1:
Archived Log entry 3383 added for thread 1 sequence 3464 ID 0xd9842c95 dest 1:
Archived Log entry 3384 added for thread 1 sequence 3462 ID 0xd9842c95 dest 1:
RESETLOGS after complete recovery through change 203360398
Resetting resetlogs activation ID 3649318037 (0xd9842c95)
Wed Nov 21 12:44:43 2012
Setting recovery target incarnation to 4
Wed Nov 21 12:44:43 2012
Assigning activation ID 3706166088 (0xdce79b48)
LGWR: STARTING ARCH PROCESSES
Wed Nov 21 12:44:43 2012
ARC0 started with pid=24, OS id=11236 
ARC0: Archival started
LGWR: STARTING ARCH PROCESSES COMPLETE
ARC0: STARTING ARCH PROCESSES
Wed Nov 21 12:44:44 2012
ARC1 started with pid=25, OS id=11238 
Wed Nov 21 12:44:44 2012
ARC2 started with pid=26, OS id=11240 
Thread 1 opened at log sequence 1
  Current log# 1 seq# 1 mem# 0: /oracle/ora11gR2/oradata/database/redo01.log
Successful open of redo thread 1
MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
Wed Nov 21 12:44:44 2012
SMON: enabling cache recovery
ARC1: Archival started
ARC2: Archival started
Wed Nov 21 12:44:44 2012
ARC3 started with pid=29, OS id=11242 
ARC1: Becoming the 'no FAL' ARCH
ARC1: Becoming the 'no SRL' ARCH
ARC2: Becoming the heartbeat ARCH
Successfully onlined Undo Tablespace 2.
Dictionary check beginning
Dictionary check complete
Verifying file header compatibility for 11g tablespace encryption..
Verifying 11g file header compatibility for tablespace encryption completed

 

        According to MetaLink, this particular ORA-600 error currently appears only on Oracle 11g.

Notes on Recovering an Oracle Database After a Windows OS Reinstall

           Background: a single-instance Oracle 10g (10.2.0.2.0) database running on 32-bit Windows 2003 SP1, with both the Oracle software and the database files on drive D. After the server's motherboard failed, the board was replaced and 32-bit Windows 2003 SP1 was reinstalled. With Windows configured again, the recovery went as follows.

           1 Rename the old Oracle installation directory: D:\oracle becomes D:\oracle_old;

           2 Reinstall the Oracle software only (no database creation), at the same version as before, with the installation directory set to D:\oracle;

           3 Rename the freshly installed D:\oracle to D:\oracle_new, and rename D:\oracle_old from step 1 back to D:\oracle;

           4 Reconfigure the listener, set the ORACLE_SID environment variable to the same value as before (ORACLE_SID=cssdb), and create the Oracle database service:

C:\Documents and Settings\huangw>set oracle_sid=cssdb

C:\Documents and Settings\huangw>oradim
ORADIM:  [options].  Refer to manual.
Enter one of the following command:
Create an instance by specifying the following options:
     -NEW -SID sid | -SRVC srvc | -ASMSID sid | -ASMSRVC srvc [-SYSPWD pass]
 [-STARTMODE auto|manual] [-SRVCSTART system|demand] [-PFILE file | -SPFILE]
 [-SHUTMODE normal|immediate|abort] [-TIMEOUT secs] [-RUNAS osusr/ospass]
Edit an instance by specifying the following options:
     -EDIT -SID sid | -ASMSID sid [-SYSPWD pass]
 [-STARTMODE auto|manual] [-SRVCSTART system|demand] [-PFILE file | -SPFILE]
 [-SHUTMODE normal|immediate|abort] [-SHUTTYPE srvc|inst] [-RUNAS osusr/ospass]
Delete instances by specifying the following options:
     -DELETE -SID sid | -ASMSID sid | -SRVC srvc | -ASMSRVC srvc
Startup services and instance by specifying the following options:
     -STARTUP -SID sid | -ASMSID sid [-SYSPWD pass]
 [-STARTTYPE srvc|inst|srvc,inst] [-PFILE filename | -SPFILE]
Shutdown service and instance by specifying the following options:
     -SHUTDOWN -SID sid | -ASMSID sid [-SYSPWD pass]
 [-SHUTTYPE srvc|inst|srvc,inst] [-SHUTMODE normal|immediate|abort]
 Query for help by specifying the following parameters: -? | -h | -help

C:\Documents and Settings\huangw>oradim -new -sid cssdb
Instance created.

C:\Documents and Settings\huangw>

           5 Calling SQL*Plus failed with an insufficient-privileges error, so rebuild the password file and log in using password-file authentication:

C:\Documents and Settings\huangw>oradim -new -sid cssdb
Instance created.

C:\Documents and Settings\huangw>sqlplus /nolog

SQL*Plus: Release 10.2.0.2.0 - Production on Mon Nov 19 18:21:39 2012

Copyright (c) 1982, 2005, Oracle.  All Rights Reserved.

SQL> conn / as sysdba
ERROR:
ORA-01031: insufficient privileges


SQL> exit

C:\Documents and Settings\huangw>
C:\Documents and Settings\huangw>orapwd file=PWDCSSDB.ora password=dehamdb entri
es=10

C:\Documents and Settings\huangw>sqlplus /nolog

SQL*Plus: Release 10.2.0.2.0 - Production on Mon Nov 19 18:23:49 2012

Copyright (c) 1982, 2005, Oracle.  All Rights Reserved.

SQL> conn / as sysdba
ERROR:
ORA-01031: insufficient privileges


SQL> conn sys/dehamdb as sysdba
Connected to an idle instance.
SQL>

           6 Start the database directly, and the recovery is complete.
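A minimal sketch of this last step (assuming the original spfile/pfile under D:\oracle survived intact):

SQL> conn sys/dehamdb as sysdba
SQL> startup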


        Summary: the lucky part of this case is that the Oracle software and the database files both lived on drive D; the Windows crash only affected data and files on drive C, and nothing on D was damaged. Renaming D:\oracle and reinstalling the Oracle software was really just a way to get the Oracle registration information, environment variables, and related settings written back into Windows and its registry.

        Finally, add ORACLE_SID back to the system environment variables and set the OracleServiceCSSDB service to start automatically under Windows Services.
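The start mode can also be set from the command line with oradim, using the options shown in its help output above (a sketch):

C:\Documents and Settings\huangw>oradim -edit -sid cssdb -startmode auto -srvcstart system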

Oracle Index-Organized Tables, Part 2

4 What scenarios suit an index-organized table (IOT)?

       The previous article covered the basic concepts of IOTs and some points to note when creating them. This one runs a comparison test between an IOT and a heap table to show where IOTs fit.

1 Build a parent table emp:

[oracle@oracle11g arch_new]$ sqlplus 

SQL*Plus: Release 11.2.0.1.0 Production on Thu Nov 15 10:15:19 2012

Copyright (c) 1982, 2009, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning and Automatic Storage Management options

SQL> conn hr/hr
Connected.
SQL> create table emp
  2  as
  3  select object_id empno,
  4  object_name ename,
  5  created hiredate,
  6  owner job
  7  from all_objects
  8  ;

Table created.

SQL> select count(*) from emp;

  COUNT(*)
----------
     55636

SQL> alter table emp add constraint emp_pk primary key(empno);

Table altered.

SQL> exec dbms_stats.gather_table_stats(user,'EMP',cascade=>true);

PL/SQL procedure successfully completed.

SQL> 

 

2  Build a child table, implemented in two ways: as a heap table and as an IOT:

Create the heap child table

SQL> create table heap_addresses
  2  ( empno references emp(empno) on delete cascade,
  3  addr_type varchar2(10),
  4  street varchar2(20),
  5  city varchar2(20),
  6  state varchar2(2),
  7  zip number,
  8  primary key (empno,addr_type)
  9  );

Table created.

SQL> 

Create the IOT child table

SQL> create table iot_addresses
  2  ( empno references emp(empno) on delete cascade,
  3  addr_type varchar2(10),
  4  street varchar2(20),
  5  city varchar2(20),
  6  state varchar2(2),
  7  zip number,
  8  primary key (empno,addr_type)
  9  )
 10  ORGANIZATION INDEX
 11  ;

Table created.

SQL> 

3  Populate both child tables, inserting WORK, HOME, ORIGINAL, and SCHOOL addresses:

Inserts into the heap child table

SQL> insert into heap_addresses
  2  select empno, 'WORK', '123 main street', 'Washington', 'DC', 20123
  3  from emp;

55636 rows created.

SQL> insert into heap_addresses
  2  select empno, 'HOME', '123 main street', 'Washington', 'DC', 20123
  3  from emp;

55636 rows created.

SQL> insert into heap_addresses
  2  select empno, 'ORIGINAL', '123 main street', 'Washington', 'DC', 20123
  3  from emp;

55636 rows created.

SQL> insert into heap_addresses
  2  select empno, 'SCHOOL', '123 main street', 'Washington', 'DC', 20123
  3  from emp;

55636 rows created.

SQL> select count(*) from heap_addresses;

  COUNT(*)
----------
    222544

SQL> 

Inserts into the IOT child table

SQL> insert into iot_addresses
  2  select empno, 'WORK', '123 main street', 'Washington', 'DC', 20123
  3  from emp;

55636 rows created.

SQL> insert into iot_addresses
  2  select empno, 'HOME', '123 main street', 'Washington', 'DC', 20123
  3  from emp;

55636 rows created.

SQL> insert into iot_addresses
  2  select empno, 'ORIGINAL', '123 main street', 'Washington', 'DC', 20123
  3  from emp;

55636 rows created.

SQL> insert into iot_addresses
  2  select empno, 'SCHOOL', '123 main street', 'Washington', 'DC', 20123
  3  from emp;

55636 rows created.

SQL> select count(*) from iot_addresses;

  COUNT(*)
----------
    222544

SQL> 

4 Gather statistics on both child tables

SQL> exec dbms_stats.gather_table_stats(user,'HEAP_ADDRESSES');

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.gather_table_stats(user,'IOT_ADDRESSES');

PL/SQL procedure successfully completed.

SQL> 

5 Compare the execution plans of the SQL against the two kinds of child table

SQL> set line 200
SQL> set autot trace
SQL> select *
  2  from emp, heap_addresses
  3  where emp.empno = heap_addresses.empno
  4  and emp.empno = 100;


Execution Plan
----------------------------------------------------------
Plan hash value: 541875893

-----------------------------------------------------------------------------------------------
| Id  | Operation                    | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                |     4 |   368 |     8   (0)| 00:00:01 |
|   1 |  NESTED LOOPS                |                |     4 |   368 |     8   (0)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID| EMP            |     1 |    45 |     2   (0)| 00:00:01 |
|*  3 |    INDEX UNIQUE SCAN         | EMP_PK         |     1 |       |     1   (0)| 00:00:01 |
|   4 |   TABLE ACCESS BY INDEX ROWID| HEAP_ADDRESSES |     4 |   188 |     6   (0)| 00:00:01 |
|*  5 |    INDEX RANGE SCAN          | SYS_C00138141  |     4 |       |     2   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("EMP"."EMPNO"=100)
   5 - access("HEAP_ADDRESSES"."EMPNO"=100)


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
         11  consistent gets
          0  physical reads
          0  redo size
       1351  bytes sent via SQL*Net to client
        523  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          4  rows processed

SQL> 
SQL> select *
  2  from emp, iot_addresses
  3  WHERE emp.empno = iot_addresses.empno
  4  and emp.empno = 100;


Execution Plan
----------------------------------------------------------
Plan hash value: 1475200359

---------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                    |     4 |   368 |     4   (0)| 00:00:01 |
|   1 |  NESTED LOOPS                |                    |     4 |   368 |     4   (0)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID| EMP                |     1 |    45 |     2   (0)| 00:00:01 |
|*  3 |    INDEX UNIQUE SCAN         | EMP_PK             |     1 |       |     1   (0)| 00:00:01 |
|*  4 |   INDEX RANGE SCAN           | SYS_IOT_TOP_184800 |     4 |   188 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("EMP"."EMPNO"=100)
   4 - access("IOT_ADDRESSES"."EMPNO"=100)


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
          8  consistent gets
          0  physical reads
          0  redo size
       1351  bytes sent via SQL*Net to client
        523  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          4  rows processed

SQL> 

6 Summary: analyzing the execution plans of the two "almost" identical SQL statements above, we can see:

        For the first SQL (heap child table), Oracle first accesses the parent table EMP through the primary key index EMP_PK to get the parent row, then uses the EMPNO value to probe the child table HEAP_ADDRESSES via its index SYS_C00138141 and fetch the child rows from the table blocks. The statement took 11 consistent gets at a cost of 8.

        For the second SQL (IOT child table), Oracle again accesses EMP through EMP_PK, then goes straight into the child table IOT_ADDRESSES by EMPNO, which is also the leading column of the IOT's primary key, so the child rows are read directly from the index itself. The statement took 8 consistent gets at a cost of 4, half that of the first SQL.

Scenarios where an IOT fits:

① Tables made up entirely of primary key columns;

② Tables that are only ever accessed through their primary key;

③ Tables whose data should be stored in a specific order;

An IOT is not a good fit when data is queried by non-primary-key columns.
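As an illustration of scenario ①, a pure association table can be declared as an IOT so that it stores nothing beyond its primary key index; a sketch, with made-up table and column names:

SQL> create table emp_project
  2  ( empno    number,
  3    proj_id  number,
  4    constraint emp_project_pk primary key (empno, proj_id)
  5  )
  6  organization index;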

Oracle Index-Organized Tables, Part 1

1 Overview of index-organized tables (IOT)

       In Oracle there is a class of tables called index-organized tables (IOTs). As the name suggests, an IOT looks like a table on the surface, but its data is actually stored as an index: an IOT occupies no table segment, only an index segment.

       Compared with a traditional heap-organized table (the ordinary table), an IOT's rows are stored in a B-tree index already sorted by the primary key columns, whereas a heap table's rows are stored unordered in the data blocks of its table segment. In addition, the IOT's index leaf blocks hold both the primary key columns and the values of the non-key columns.

        A diagram comparing an IOT with a heap table:

2 Characteristics of index-organized tables (IOT)

 

       Access through an IOT's primary key is fast, because all of the table's data sits in the B-tree index; once the index entry is located the row is right there, with no extra hop through a ROWID into a table segment as with a heap table. DML against an IOT only touches the B-tree index;

      Range access by primary key is also quick, because the rows are already stored in primary key order;

      An IOT can reduce storage overhead, because the primary key values are kept only in the B-tree index, rather than once in the table segment and again in the index as with a heap table;

      Besides supporting constraints, triggers, LOB and object columns, partitioning, parallel operations, online redefinition, and replication just like a heap table, an IOT also supports primary key compression, an overflow storage area, and secondary indexes (including bitmap indexes).

 

3 Creating an index-organized table (IOT)

 

   Two clauses are mandatory when creating an IOT:

        1 ORGANIZATION INDEX, which marks the table as an IOT;

        2 A primary key constraint declared at table creation time, either a single-column or a composite key.

    Three further clauses may optionally be specified as well:

        1 The OVERFLOW clause, which places non-key columns in a separate overflow storage segment;

        2 PCTTHRESHOLD value: when an overflow segment is specified, this limits the percentage of an index block that a single row may occupy. If a row in the IOT exceeds the threshold, the remaining columns are moved to the overflow area. In other words, an IOT row may be split into two pieces, a head piece and a tail piece: the primary key columns plus whatever non-key columns fit within PCTTHRESHOLD go into the head piece, and the rest of the non-key columns go into the tail piece. The index entry then consists of the key columns, the non-key columns that fit the threshold, and a pointer to the address where the remaining columns are stored. PCTTHRESHOLD ranges from 1 to 50 and defaults to 50;

        3 The INCLUDING clause, which explicitly declares which non-key columns are stored in the index block together with the key; the remaining non-key columns are then placed in the separate overflow storage area (a small sketch follows this list).
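A minimal sketch of how INCLUDING and OVERFLOW combine (the table and column names are made up; compare with the full example below): the column named in INCLUDING and everything before it stay in the index block, and the columns after it go to the overflow segment.

SQL> create table iot_including_demo
  2  ( id      number,
  3    code    varchar2(10),
  4    remarks varchar2(2000),
  5    constraint iot_including_demo_pk primary key (id)
  6  )
  7  organization index
  8  including code
  9  overflow;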

     A full example of creating an IOT:

OS version:

[oracle@oracle11g ~]$ uname -rm
2.6.18-194.el5 x86_64
[oracle@oracle11g ~]$

Oracle version:

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
PL/SQL Release 11.2.0.1.0 - Production
CORE    11.2.0.1.0      Production
TNS for Linux: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production

SQL> 

Create the table:

SQL> conn hr/hr
Connected.
SQL> CREATE TABLE admin_docindex(
  2  token char(20), 
  3  doc_id NUMBER,
  4  token_frequency NUMBER,
  5  token_offsets VARCHAR2(2000),
  6  CONSTRAINT pk_admin_docindex PRIMARY KEY (token, doc_id))
  7  ORGANIZATION INDEX 
  8  TABLESPACE admin_tbs
  9  PCTTHRESHOLD 20
 10  OVERFLOW TABLESPACE admin_tbs2;

Table created.

SQL> insert into ADMIN_DOCINDEX values('oracleonlinux.cn',1,1,'test iot table');

1 row created.

SQL> commit;

Commit complete.

SQL> 

Looking at the table's metadata, it belongs to no tablespace, because it has no data segment:

SQL> select table_name,tablespace_name,iot_name,iot_type from user_tables where table_name='ADMIN_DOCINDEX';

TABLE_NAME                     TABLESPACE_NAME                IOT_NAME                       IOT_TYPE
------------------------------ ------------------------------ ------------------------------ ------------
ADMIN_DOCINDEX                                                                               IOT

SQL> select segment_name,segment_type,tablespace_name from user_segments where SEGMENT_NAME='ADMIN_DOCINDEX';

no rows selected

SQL> 

Looking at the index, it is the index that actually occupies the tablespace:

SQL> select index_name,index_type,table_name,tablespace_name,table_type ,PCT_THRESHOLD from user_indexes where table_name='ADMIN_DOCINDEX';

INDEX_NAME                     INDEX_TYPE                  TABLE_NAME                     TABLESPACE_NAME                TABLE_TYPE  PCT_THRESHOLD
------------------------------ --------------------------- ------------------------------ ------------------------------ ----------- -------------
PK_ADMIN_DOCINDEX              IOT - TOP                   ADMIN_DOCINDEX                 ADMIN_TBS                      TABLE                  20

SQL> select segment_name,segment_type,tablespace_name from user_segments where SEGMENT_NAME='PK_ADMIN_DOCINDEX';

SEGMENT_NAME                                                                      SEGMENT_TYPE       TABLESPACE_NAME
--------------------------------------------------------------------------------- ------------------ ------------------------------
PK_ADMIN_DOCINDEX                                                                 INDEX              ADMIN_TBS

SQL> 

       Finally, the study of IOTs continues in the next part.

A Case of Recovering Data with Log Miner

        Last Friday (around 11 a.m. on September 21), I received an urgent email from the project team:

In the production database, the data dated September 1 through September 19 in the FIN_CASH_MOVEMENT and FIN_CASH_DETAIL tables has been deleted.

Please provide technical support to recover the data in these two tables. Awaiting your reply. Thank you!

After some communication, the basic picture was: this is a 64-bit 11gR2 single-instance database running on an IBM P750 server, the same environment described in an earlier post [Dynamically extending storage for Oracle on an IBM P750].

Further checking confirmed that the data in FIN_CASH_MOVEMENT and FIN_CASH_DETAIL had been deleted by mistake at around 3 p.m. on September 20, and that the two tables are in a parent-child relationship.

My first thought was a flashback query to see whether the data could be brought back. Unfortunately, this is a busy production database and the mistake was roughly 20 hours old, so the undo in the UNDO tablespace had already been overwritten and the attempt failed with ORA-01555 (snapshot too old). That road was closed.

Next, was there a usable logical backup? If so, the data could have been restored logically. Again no luck: the system relies on RMAN backups only and has no logical backups, so that approach did not work either.

That left either a point-in-time incomplete recovery of the whole database or Oracle's own Log Miner tool. The production database is large, and a time-based incomplete recovery carries obvious drawbacks and risks: it would have to be run on a separate test system, and the RMAN backup files are huge, so it was clearly the last resort.

In the end, Log Miner was chosen to try to get the data back. The main steps are recorded below:

1  First, find the archived logs covering the time of the mistake; here, the archives generated between 3 p.m. and 5 p.m. on September 20:

select name,FIRST_TIME from v$archived_log where first_time between to_date('2012/09/20 14:50:00','yyyy/mm/dd hh24:mi:ss') and to_date('2012/09/20 17:00:00','yyyy/mm/dd hh24:mi:ss');
NAME                                                                   FIRST_TIME
---------------------------------------------------------------------- -------------------
/oraflash/SITCDB/archivelog/2012_09_20/o1_mf_1_123956_85ohgybj_.arc    2012/09/20 14:52:41
/oraflash/SITCDB/archivelog/2012_09_20/o1_mf_1_123957_85ohn6qh_.arc    2012/09/20 14:55:26
...
...
/oraflash/SITCDB/archivelog/2012_09_20/o1_mf_1_123984_85olkh0t_.arc    2012/09/20 15:45:49
/oraflash/SITCDB/archivelog/2012_09_20/o1_mf_1_123985_85olo3vb_.arc    2012/09/20 15:47:59
/oraflash/SITCDB/archivelog/2012_09_20/o1_mf_1_123986_85olqg4r_.arc    2012/09/20 15:49:55
...
/oraflash/SITCDB/archivelog/2012_09_20/o1_mf_1_123992_85om9wo7_.arc    2012/09/20 15:59:07

2  Call the dbms_logmnr package to add an archived log:

SQL>exec dbms_logmnr.add_logfile(logfilename=>'/oraflash/SITCDB/archivelog/2012_09_20/o1_mf_1_123956_85ohgybj_.arc',options=>dbms_logmnr.new);
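If several archives need to be mined in one pass, further files can be appended with the addfile option; a sketch using the next archive from the list above:

SQL>exec dbms_logmnr.add_logfile(logfilename=>'/oraflash/SITCDB/archivelog/2012_09_20/o1_mf_1_123957_85ohn6qh_.arc',options=>dbms_logmnr.addfile);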

3  Call the dbms_logmnr package to start Log Miner and begin mining:

SQL>exec dbms_logmnr.start_logmnr(options=>dbms_logmnr.dict_from_online_catalog);

4  Query the v$logmnr_contents view to see whether it contains SQL against the FIN_CASH_DETAIL table:

SQL>select timestamp,table_name,sql_redo,sql_undo,operation from v$logmnr_contents where table_name='FIN_CASH_DETAIL';

5  Call the dbms_logmnr package to stop Log Miner:

exec dbms_logmnr.end_logmnr;

6  Repeat steps 2 through 5, each time adding a different archived log. Eventually, operations on both FIN_CASH_MOVEMENT and FIN_CASH_DETAIL were found in the archive /oraflash/SITCDB/archivelog/2012_09_20/o1_mf_1_123985_85olo3vb_.arc, which places the mistaken delete between 2012/09/20 15:47:59 and 2012/09/20 15:49:55.

select sql_redo from v$logmnr_contents where table_name='FIN_CASH_MOVEMENT' and operation='DELETE';
SQL_REDO
---------------------------------------------------------------------------------------------------
delete from "SITCLINE"."FIN_CASH_MOVEMENT" where "CASH_MOVEMENT_ID" = '2c2881d63987424d01398b77fa2f6345' and "RP_ID" = 'R' and "OFFICE_ID" = 'SITTP' and "STATE_IND" = '0' 
and "MOVEMENT_TYPE" = 'CHECK' and "MOVEMENT_NO" = 'CR201209030138' and "BANK_MOVEMENT_NO" = '010060936' and "MOVEMENT_DATE" 
= TO_DATE('03-9月 -12', 'DD-MON-RR') and "LEDGER_PARTNER_CODE" = '80273312' and "LEDGER_PARTNER_NAME" IS NULL 
and "EXTERNAL_BANK_NAME" IS NULL and "EXTERNAL_BANK_ACCOUNT" = '056637' and "EXTERAL_BANK_ACCOUN
T_NAME" IS NULL and "INTERNAL_BANK_ID" = '2c2881d63978294201397a864fe30c1e' and "INTERNAL_BANK_NAME" = '花旗台灣' and "INTERNAL_BACNT_ID" = '2c2881d63978294201397a8c72be0c6c' and "INTERNAL_BANK_ACCOUNT_CODE" = '5049328003' 
and "INTERNAL_BANK_ACCOUNT_NAME" = 'SITC STEAMSHIPS CO LTD TAIWAN BRANCH' and "PRIME_CURRENCY_CODE" = 'NTD' and "PRIME_CURRENCY_VALUE" = '10799' and "BASE_CURRENCY_CODE" IS NULL and "BASE_CURRENCY_VALUE" IS NULL and "REMARK" IS NULL and "REALRP" = '1' and "REALRP_DATE" 
IS NULL and
 "REALRP_PERSON" IS NULL and "REALRP_PERSON_NAME" IS NULL and "DISCOUNT_VALUE" IS NULL and "DISCOUNT_REMARK" 
IS NULL and "RATE_BASE" IS NULL and "ALLOCATION_EVENT_ID" IS NULL and "DEPOSIT_DATE" = TO_DATE('03-9月 -12', 'DD-MON-RR') and "INVOICE_INFO" IS NULL and "CREATED_BY_USER" = 'FIN_TWPEI05' and "CREATED_OFFICE" = 'TP_FIN_DP' and "CREATED_DTM_LOC" = TO_DATE('03-9月 -12', 'DD-MON-RR') and "CREATED_TIME_ZONE" 
IS NULL and "UPDATED_BY_USER" = 'FIN_TWPEI05' and "UPDATED_OFFICE" = 'TP_FIN_DP' and "UP
DATED_DTM_LOC" = TO_DATE('03-9月 -12', 'DD-MON-RR') and "UPDATED_TIME_ZONE" IS NULL and "RECORD_VERSION" = '0' and "PRINCIPAL_GROUP_CODE" = 'SIT' 
and "CHECK_NO" = '6822983' and "PRINTED_PERSON" IS NULL and "IS_PRINTED" = '0' and "PRINTED_PERSON_NAME" IS NULL and "PRINTED_DATE" IS NULL and "BANK_EXCHANGE_NO" 
IS NULL and "INVOICE_AMOUNT" IS NULL and "SHORT_OVER_AMOUNT" = '0' and "SAP_STATUS" = '0' and "ARP_ID" IS NULL and ROWID = 'AAATyIAAUAAAMQ7AAH';

...

7  The mistaken operation turned out to have deleted 1390 rows from the parent table FIN_CASH_MOVEMENT and 1911 rows from the child table FIN_CASH_DETAIL. The reverse SQL below was generated and handed to the project team as a script to confirm whether the data was correct (a spooling sketch follows the listing below).

select sql_undo from v$logmnr_contents where table_name='FIN_CASH_MOVEMENT' and operation='DELETE';
SQL_UNDO
---------------------------------------------------------------------------------------------------
insert into "SITCLINE"."FIN_CASH_MOVEMENT"("CASH_MOVEMENT_ID","RP_ID","OFFICE_ID","
STATE_IND","MOVEMENT_TYPE","MOVEMENT_NO","
BANK_MOVEMENT_NO","MOVEMENT_DATE","LEDGER_PARTNER_CODE","
LEDGER_PARTNER_NAME","EXTERNAL_BANK_NAME","
EXTERNAL_BANK_ACCOUNT","EXTERAL_BANK_ACCOUNT_NAME","
INTERNAL_BANK_ID","INTERNAL_BANK_NAME","INTERNAL_BACNT_ID","
INTERNAL_BANK_ACCOUNT_CODE","INTERNAL_BANK_ACCOUNT_NAME","
PRIME_CURRENCY_CODE","PRIME_CURRENCY_VALUE","BASE_CURRENCY_CODE","
BASE_CURRENCY_VALUE","REMARK","REALRP","REALRP_DATE","
REALRP_PERSON","REALRP_PERSON_NAME","DISCOUNT_VALUE","
DISCOUNT_REMARK","RATE_BASE","A
LLOCATION_EVENT_ID","DEPOSIT_DATE","INVOICE_INFO","
CREATED_BY_USER","CREATED_OFFICE","CREATED_DTM_LOC","
CREATED_TIME_ZONE","UPDATED_BY_USER","UPDATED_OFFICE","
UPDATED_DTM_LOC","UPDATED_TIME_ZONE","RECORD_VERSION","
PRINCIPAL_GROUP_CODE","CHECK_NO","PRINTED_PERSON","
IS_PRINTED","PRINTED_PERSON_NAME","PRINTED_DATE","
BANK_EXCHANGE_NO","INVOICE_AMOUNT","SHORT_OVER_AMOUNT","
SAP_STATUS","ARP_ID") values ('2c2881d63987424d01398b77fa2f6345','R','SITTP','0','CHECK','CR201209030138','010060936',TO_DATE('03-9月 -12', 'DD-MON-RR'),'80273312',NULL,NULL,'056637',NULL,'2c2881d63978294201397a864fe30c1e','花旗台灣',
'2c2881d63978294201397a8c72be0c6c','5049328003','SITC STEAMSHIPS CO LTD TAIWAN BRANCH','NTD','10799',NULL,NULL,NULL,'1',
NULL,NULL,NULL,NULL,NULL,NULL,NULL,TO_DATE('03-9月 -12', 'DD-MON-RR'),NULL,'FIN_TWPEI05','TP_FIN_DP',TO_DATE('03-9月 -12', 'DD-MON-RR'),NULL,'FIN_TWPEI05','TP_FIN_DP',TO_DATE('03-9月 -12', 'DD-MON-RR'),NULL,'0','SIT','6822983',
NULL,'0',NULL,NULL,NULL,NULL,'0','0',NULL);

...
select sql_undo from v$logmnr_contents where table_name='FIN_CASH_DETAIL' and operation='DELETE';
SQL_UNDO
---------------------------------------------------------------------------------------------------
insert into "SITCLINE"."FIN_CASH_DETAIL"("CASH_DETAIL_ID","CASH_MOVEMENT_ID","INVOICE_NO","
VESSEL_CODE","VOYAGE_NO","VOYAGE_LEG","BL_NO","
AMOUNT","CURRENCY","RATE","INVOICE_DOC_ID","
FREIGHT_ITEM_ID","CREATED_BY_USER","CREATED_OFFICE","
CREATED_DTM_LOC","CREATED_TIME_ZONE","UPDATED_BY_USER","
UPDATED_OFFICE","UPDATED_DTM_LOC","UPDATED_TIME_ZONE","
RECORD_VERSION","PRINCIPAL_GROUP_CODE") values ('2c2881d63987424d01398b77fa2f6346','2c2881d63987424d01398b77fa2f6345',
'EZ03404580','STKE','1236','N','SITGKESH002049','
10799',NULL,NULL,'2c2881d63987424d01398a8999e22caa',NULL,'FIN_TWPEI05','
TP_FIN_DP',TO_D
ATE('03-9月 -12', 'DD-MON-RR'),NULL,'FIN_TWPEI05','TP_FIN_DP',TO_DATE('03-9月 -12', 'DD-MON-RR'),NULL,'0','SIT');
...
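To hand the statements over as a script, the sql_undo text can simply be spooled from SQL*Plus; a sketch, with a made-up output file name:

SQL> set heading off pagesize 0 linesize 32767 trimspool on feedback off
SQL> spool /tmp/fin_cash_undo.sql
SQL> select sql_undo from v$logmnr_contents where table_name in ('FIN_CASH_MOVEMENT','FIN_CASH_DETAIL') and operation='DELETE';
SQL> spool off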

8  Finally, after the project team's confirmation, the reverse SQL script was executed, and the team emailed back to confirm that all of the data had been recovered.

Postscript: the project team later reported that the root cause was an application bug, which has since been fixed. The lesson from this recovery is that application code and manual operations on a production database both demand great care, and that database backups must never be neglected!

 

A Brief Summary of Oracle Splitting Partitions

            This post records a test of splitting partitions on a range-partitioned table in an Oracle 10g database, together with the conclusions; it does not attempt a detailed treatment of Oracle partitioning.

1 Test environment and platform:

OS:

[root@localhost ~]# uname -rm
2.6.18-164.el5 x86_64
[root@localhost ~]#

Oracle:

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bi
PL/SQL Release 10.2.0.5.0 - Production
CORE    10.2.0.5.0      Production
TNS for Linux: Version 10.2.0.5.0 - Production
NLSRTL Version 10.2.0.5.0 - Production

SQL> 

2 Create a range-partitioned table:

SQL> show user;
USER is "SYS"
SQL> create table part_range(id number,name varchar2(30))
  2  partition by range(id)
  3  (partition partmax values less than (maxvalue))
  4  tablespace users;

Table created.

SQL>

3 Insert test data:

SQL> insert into part_range select object_id,object_name from dba_objects
  2  where object_id<2000;

1953 rows created.

SQL>

4 Create two kinds of indexes on the partitioned table part_range:

Locally partitioned index

SQL> create index part_range_id_idx on part_range(id) local;

Index created.

SQL>

Global index (non-partitioned, as the PAR = NO column shows below)

SQL> create index part_range_name_idx on part_range(name) tablespace users;

Index created.

SQL>

5 View the partitioned table information:

SQL> select table_name,partition_name,tablespace_name from user_tab_partitions
  2  where table_name='PART_RANGE';

TABLE_NAME                     PARTITION_NAME                 TABLESPACE_NAME
------------------------------ ------------------------------ ------------------------------
PART_RANGE                     PARTMAX                        USERS

SQL> 

6 View the index information:

SQL> select index_name,partition_name,tablespace_name,status from user_ind_partitions
  2  where index_name='PART_RANGE_ID_IDX';

INDEX_NAME                     PARTITION_NAME                 TABLESPACE_NAME                STATUS
------------------------------ ------------------------------ ------------------------------ --------
PART_RANGE_ID_IDX              PARTMAX                        USERS                          USABLE

SQL> select index_name,table_name,tablespace_name,status,partitioned from user_indexes
  2  where table_name='PART_RANGE';

INDEX_NAME                     TABLE_NAME                     TABLESPACE_NAME                STATUS   PAR
------------------------------ ------------------------------ ------------------------------ -------- ---
PART_RANGE_ID_IDX              PART_RANGE                                                    N/A      YES
PART_RANGE_NAME_IDX            PART_RANGE                     USERS                          VALID    NO

SQL> 

7 Perform a splitting-partitions operation on the table part_range:

SQL> alter table part_range split partition partmax at (2000) 
  2  into (partition p1,partition partmax);

Table altered.

SQL> 

Note that the split point here is 2000, so all rows with id < 2000 are rebuilt into partition p1, while partition partmax ends up empty (0 rows):

SQL> select count(*) from part_range;

  COUNT(*)
----------
      1953

SQL> select count(*) from part_range partition(p1);

  COUNT(*)
----------
      1953

SQL> select count(*) from part_range partition(partmax);

  COUNT(*)
----------
         0

SQL> 

8 Check the partitioned table and index information again:

Partitioned table:

SQL>  select table_name,partition_name,tablespace_name from user_tab_partitions
  2   where table_name='PART_RANGE';

TABLE_NAME                     PARTITION_NAME                 TABLESPACE_NAME
------------------------------ ------------------------------ ------------------------------
PART_RANGE                     PARTMAX                        USERS
PART_RANGE                     P1                             USERS

SQL> 

Index information:

SQL> select index_name,partition_name,tablespace_name,status from user_ind_partitions
  2  where index_name='PART_RANGE_ID_IDX';

INDEX_NAME                     PARTITION_NAME                 TABLESPACE_NAME                STATUS
------------------------------ ------------------------------ ------------------------------ --------
PART_RANGE_ID_IDX              P1                             USERS                          USABLE
PART_RANGE_ID_IDX              PARTMAX                        USERS                          USABLE

SQL> select index_name,table_name,tablespace_name,status,partitioned from user_indexes
  2  where table_name='PART_RANGE';

INDEX_NAME                     TABLE_NAME                     TABLESPACE_NAME                STATUS   PAR
------------------------------ ------------------------------ ------------------------------ -------- ---
PART_RANGE_ID_IDX              PART_RANGE                                                    N/A      YES
PART_RANGE_NAME_IDX            PART_RANGE                     USERS                          VALID    NO

SQL> 

Summary: if the split leaves one of the resulting partitions empty, both the local partitioned index and the global index remain usable. This kind of split is commonly called a fast split, and no index rebuild is needed.

9 If, instead, the split in step 7 above is performed like this:

SQL> alter table part_range split partition partmax at (1000)
  2  into (partition p1,partition partmax);

Table altered.

SQL> 

That is, the split point is 1000: rows with id < 1000 are rebuilt into partition p1 and rows with id >= 1000 into partition partmax, so neither of the resulting partitions is empty.

SQL> select count(*) from part_range;

  COUNT(*)
----------
      1953

SQL> select count(*) from part_range partition(p1);

  COUNT(*)
----------
       953

SQL> select count(*) from part_range partition(partmax);

  COUNT(*)
----------
      1000

SQL> 

The partitioned table and index information then look like this:

Partitioned table:

SQL> select table_name,partition_name,tablespace_name from user_tab_partitions
  2  where table_name='PART_RANGE';

TABLE_NAME                     PARTITION_NAME                 TABLESPACE_NAME
------------------------------ ------------------------------ ------------------------------
PART_RANGE                     PARTMAX                        USERS
PART_RANGE                     P1                             USERS

SQL> 

Index information:

SQL> select index_name,partition_name,tablespace_name,status from user_ind_partitions
  2  where index_name='PART_RANGE_ID_IDX';

INDEX_NAME                     PARTITION_NAME                 TABLESPACE_NAME                STATUS
------------------------------ ------------------------------ ------------------------------ --------
PART_RANGE_ID_IDX              P1                             USERS                          UNUSABLE
PART_RANGE_ID_IDX              PARTMAX                        USERS                          UNUSABLE

SQL> select index_name,table_name,tablespace_name,status,partitioned from user_indexes
  2  where table_name='PART_RANGE';

INDEX_NAME                     TABLE_NAME                     TABLESPACE_NAME                STATUS   PAR
------------------------------ ------------------------------ ------------------------------ -------- ---
PART_RANGE_ID_IDX              PART_RANGE                                                    N/A      YES
PART_RANGE_NAME_IDX            PART_RANGE                     USERS                          UNUSABLE NO

SQL> 

Summary: if the split leaves neither resulting partition empty, both the local partitioned index and the global index become UNUSABLE and must be rebuilt:

SQL> alter index PART_RANGE_ID_IDX rebuild partition p1;

Index altered.

SQL> alter index PART_RANGE_ID_IDX rebuild partition partmax;

Index altered.

SQL> alter index PART_RANGE_NAME_IDX rebuild;

Index altered.

SQL> 

After the rebuild, the local index partitions and the global index have gone from UNUSABLE back to USABLE/VALID:

SQL> select index_name,partition_name,tablespace_name,status from user_ind_partitions
  2  where index_name='PART_RANGE_ID_IDX';

INDEX_NAME                     PARTITION_NAME                 TABLESPACE_NAME                STATUS
------------------------------ ------------------------------ ------------------------------ --------
PART_RANGE_ID_IDX              P1                             USERS                          USABLE
PART_RANGE_ID_IDX              PARTMAX                        USERS                          USABLE

SQL> select index_name,table_name,tablespace_name,status,partitioned from user_indexes
  2  where table_name='PART_RANGE';

INDEX_NAME                     TABLE_NAME                     TABLESPACE_NAME                STATUS   PAR
------------------------------ ------------------------------ ------------------------------ -------- ---
PART_RANGE_ID_IDX              PART_RANGE                                                    N/A      YES
PART_RANGE_NAME_IDX            PART_RANGE                     USERS                          VALID    NO

SQL> 

Of course, if the split statement carries UPDATE INDEXES, the indexes are maintained as part of the split, and both the local index partitions and the global index stay USABLE/VALID:

SQL> alter table part_range split partition partmax at (1000) into (partition p1,partition partmax) update indexes;

Table altered.

SQL>

Table information after the split:

SQL> select table_name,partition_name,tablespace_name from user_tab_partitions
  2  where table_name='PART_RANGE';

TABLE_NAME                     PARTITION_NAME                 TABLESPACE_NAME
------------------------------ ------------------------------ ------------------------------
PART_RANGE                     PARTMAX                        USERS
PART_RANGE                     P1                             USERS

SQL> 

Index information:

SQL> select index_name,partition_name,tablespace_name,status from user_ind_partitions where index_name='PART_RANGE_ID_IDX';

INDEX_NAME                     PARTITION_NAME                 TABLESPACE_NAME                STATUS
------------------------------ ------------------------------ ------------------------------ --------
PART_RANGE_ID_IDX              P1                             USERS                          USABLE
PART_RANGE_ID_IDX              PARTMAX                        USERS                          USABLE

SQL> select index_name,table_name,tablespace_name,status,partitioned from user_indexes where table_name='PART_RANGE';

INDEX_NAME                     TABLE_NAME                     TABLESPACE_NAME                STATUS   PAR
------------------------------ ------------------------------ ------------------------------ -------- ---
PART_RANGE_ID_IDX              PART_RANGE                                                    N/A      YES
PART_RANGE_NAME_IDX            PART_RANGE                     USERS                          VALID    NO

SQL> 

If the split carries UPDATE GLOBAL INDEXES instead, only the global index is maintained during the split (its status remains VALID); the local index partitions are not maintained and must still be rebuilt:

SQL> alter table part_range split partition partmax at (1000) into (partition p1,partition partmax) update global indexes;

Table altered.

SQL>

Table information after the split:

SQL> select table_name,partition_name,tablespace_name from user_tab_partitions
  2  where table_name='PART_RANGE';

TABLE_NAME                     PARTITION_NAME                 TABLESPACE_NAME
------------------------------ ------------------------------ ------------------------------
PART_RANGE                     PARTMAX                        USERS
PART_RANGE                     P1                             USERS

SQL> 

Index information:

SQL> select index_name,partition_name,tablespace_name,status from user_ind_partitions where index_name='PART_RANGE_ID_IDX';

INDEX_NAME                     PARTITION_NAME                 TABLESPACE_NAME                STATUS
------------------------------ ------------------------------ ------------------------------ --------
PART_RANGE_ID_IDX              P1                             USERS                          UNUSABLE
PART_RANGE_ID_IDX              PARTMAX                        USERS                          UNUSABLE

SQL> select index_name,table_name,tablespace_name,status,partitioned from user_indexes where table_name='PART_RANGE';

INDEX_NAME                     TABLE_NAME                     TABLESPACE_NAME                STATUS   PAR
------------------------------ ------------------------------ ------------------------------ -------- ---
PART_RANGE_ID_IDX              PART_RANGE                                                    N/A      YES
PART_RANGE_NAME_IDX            PART_RANGE                     USERS                          VALID    NO

SQL> 

Note that maintaining the indexes during the split makes the operation take longer and consume more system resources. If the system has enough headroom, the UPDATE INDEXES option is worth considering.

10 Finally, one more special case. If the split places the new partitions in a different tablespace (here moving from USERS to EXAMPLE) and one of the resulting partitions is empty, the split statement looks like this:

SQL> alter table part_range split partition partmax at (2000)
  2  into (partition p1 tablespace example,partition partmax tablespace example);

Table altered.

SQL> select count(*) from part_range;

  COUNT(*)
----------
      1953

SQL> select count(*) from part_range partition(p1);

  COUNT(*)
----------
      1953

SQL> select count(*) from part_range partition(partmax);

  COUNT(*)
----------
         0

SQL> 

The partitioned table then shows:

SQL> select table_name,partition_name,tablespace_name from user_tab_partitions where table_name='PART_RANGE';

TABLE_NAME                     PARTITION_NAME                 TABLESPACE_NAME
------------------------------ ------------------------------ ------------------------------
PART_RANGE                     PARTMAX                        EXAMPLE
PART_RANGE                     P1                             EXAMPLE

SQL> 

Index partition information:

SQL>  select index_name,partition_name,tablespace_name,status from user_ind_partitions where index_name='PART_RANGE_ID_IDX';

INDEX_NAME                     PARTITION_NAME                 TABLESPACE_NAME                STATUS
------------------------------ ------------------------------ ------------------------------ --------
PART_RANGE_ID_IDX              P1                             EXAMPLE                        UNUSABLE
PART_RANGE_ID_IDX              PARTMAX                        EXAMPLE                        USABLE

SQL> select index_name,table_name,tablespace_name,status,partitioned from user_indexes where table_name='PART_RANGE';

INDEX_NAME                     TABLE_NAME                     TABLESPACE_NAME                STATUS   PAR
------------------------------ ------------------------------ ------------------------------ -------- ---
PART_RANGE_ID_IDX              PART_RANGE                                                    N/A      YES
PART_RANGE_NAME_IDX            PART_RANGE                     USERS                          UNUSABLE NO

SQL> 

Summary: when the split moves the new partitions to a different tablespace, then even when one of the resulting partitions is empty, only the local index partition of the new empty partition stays usable; the local index partition of the new partition that holds data, as well as the global index, must be rebuilt. This differs from a fast split, or rather is a special case of it.
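The rebuilds for this last case would look something like the following (a sketch; the tablespace clause is optional and only keeps the rebuilt index partition alongside the relocated data):

SQL> alter index PART_RANGE_ID_IDX rebuild partition p1 tablespace example;

SQL> alter index PART_RANGE_NAME_IDX rebuild;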

Parts of this post draw on material from 范大师.

A Few Notes from Work in August

    Last month was a fairly busy one. Among other things:

    1 A fellow DBA left the company, so all of the databases now rest on me alone; with more databases to look after and a heavier workload, some of them can only be managed in a fairly coarse-grained way;

    2 I traveled twice, to Shanghai at the beginning of the month and to Shanghai again at the end. Both trips were on-site Oracle deployments, and neither went smoothly. The reasons: ① the early-stage communication with the customer and the preparation work were poor; for instance, before the first trip I asked the project team whether we were deploying RAC or a single-instance database, was told single instance, and only found out on arrival at the customer site that it was RAC; ② my own lack of experience, especially around storage, was plainly exposed;

    3 A quick record of how the problems on these two Shanghai trips were solved:

        ① On the first trip, the customer ran a single Dell R910 server with a Dell MD3200 array. When mapping the storage to the host in the array management console, the host could not be found. The cause was an elementary mistake: the Dell MD3200 supports daisy-chained expansion enclosures, and the host's SAS HBA had been cabled to the array's SAS OUT expansion port instead of the SAS IN port. Then, after the array was configured and the host and array were rebooted, the host still could not see the LUNs; the fix was to comment out the /etc/udev/rules.d/90-dm.rules configuration file.

        ② On the second trip, deploying RAC for another customer, we hit storage problems again. This time the hosts were two IBM x3650 M3 servers and the storage was an IBM System Storage DS3500 array. After the LUNs were created and mapped to the hosts, the storage devices still could not be seen; the fix was to install the RDAC driver package provided by IBM. The RDAC installation is recorded below; the package matching the operating system version and array model has to be downloaded from IBM's website.


[root@node1 ~]# cd linuxrdac-09.03.0C05.0638/
[root@node1 linuxrdac-09.03.0C05.0638]# ll
total 1780
-r-xr-xr-x 1 root root     73 Apr 18 04:31 bootMpp26p.sh
drwxr-xr-x 2 root root   4096 Apr 18 04:31 dracutsetup
-r-xr-xr-x 1 root root   3808 Apr 18 04:31 genfileattributes
-r--r--r-- 1 root root   2603 Apr 18 04:31 genuniqueid.c
-r--r--r-- 1 root root   2603 Apr 18 04:31 hbaCheck
-r--r--r-- 1 root root  15136 Apr 18 04:31 License.txt
-r-xr-xr-x 1 root root   6592 Apr 18 04:31 lsvdev
-r--r--r-- 1 root root  22991 Apr 18 04:31 Makefile
-r--r--r-- 1 root root 150075 Apr 18 04:31 mppCmn_s2tos3.c
-r--r--r-- 1 root root  21321 Apr 18 04:31 mppCmn_SysInterface.c
-r--r--r-- 1 root root    814 Apr 18 04:31 mpp.conf
-r--r--r-- 1 root root 520094 Apr 18 04:31 MPP_hba.c
-r-xr-xr-x 1 root root   5155 Apr 18 04:31 mppiscsi_umountall
drwxr-xr-x 2 root root   4096 Apr 18 04:31 mpp_linux_headers
drwxr-xr-x 2 root root   4096 Apr 18 04:31 mpp_linux_sys_headers
-r--r--r-- 1 root root  57111 Apr 18 04:31 mppLnx26p_sysdep.c
-r--r--r-- 1 root root 175028 Apr 18 04:31 mppLnx26p_upper.c
-r--r--r-- 1 root root 206555 Apr 18 04:31 mppLnx26p_vhba.c
-r--r--r-- 1 root root 185100 Apr 18 04:31 mppLnx26p_vhbaio.c
-r--r--r-- 1 root root  95675 Apr 18 04:31 mppLnx26p_vhbalib.c
-r--r--r-- 1 root root  21610 Apr 18 04:31 mppLnx26p_vhbamisc.c
-r--r--r-- 1 root root  33709 Apr 18 04:31 mppLnx26p_vhbaproc.c
-r--r--r-- 1 root root 120597 Apr 18 04:31 mppLnx26p_vhbatask.c
-r--r--r-- 1 root root   1805 Apr 18 04:31 mpp_rcscript.REDHAT
-r--r--r-- 1 root root   1903 Apr 18 04:31 mpp_rcscript.SUSE
-r-xr-xr-x 1 root root  28522 Apr 18 04:31 mppSupport
-r--r--r-- 1 root root  24245 Apr 18 04:31 Readme.txt
-rw-r--r-- 1 root root   5559 Apr 18 04:31 setupDriver.REDHAT
-r--r--r-- 1 root root   8984 Apr 18 04:31 setupDriver.SUSE
-r--r--r-- 1 root root    261 Apr 18 04:31 setupMpp26p.sh
drwxr-xr-x 2 root root   4096 Apr 18 04:31 utility
[root@node1 linuxrdac-09.03.0C05.0638]# make
make[1]: Entering directory `/usr/src/kernels/2.6.18-194.el5-x86_64'
  CC [M]  /root/linuxrdac-09.03.0C05.0638/MPP_hba.o
  CC [M]  /root/linuxrdac-09.03.0C05.0638/mppLnx26p_upper.o
  CC [M]  /root/linuxrdac-09.03.0C05.0638/mppLnx26p_sysdep.o
  CC [M]  /root/linuxrdac-09.03.0C05.0638/mppCmn_s2tos3.o
  CC [M]  /root/linuxrdac-09.03.0C05.0638/mppCmn_SysInterface.o
  CC [M]  /root/linuxrdac-09.03.0C05.0638/mppLnx26p_vhbamisc.o
  CC [M]  /root/linuxrdac-09.03.0C05.0638/mppLnx26p_vhbatask.o
  CC [M]  /root/linuxrdac-09.03.0C05.0638/mppLnx26p_vhba.o
  CC [M]  /root/linuxrdac-09.03.0C05.0638/mppLnx26p_vhbaproc.o
  CC [M]  /root/linuxrdac-09.03.0C05.0638/mppLnx26p_vhbalib.o
  CC [M]  /root/linuxrdac-09.03.0C05.0638/mppLnx26p_vhbaio.o
/root/linuxrdac-09.03.0C05.0638/mppLnx26p_vhbaio.c: In function '__mppLnx_scsi_done':
/root/linuxrdac-09.03.0C05.0638/mppLnx26p_vhbaio.c:3691: warning: label 'done' defined but not used
  LD [M]  /root/linuxrdac-09.03.0C05.0638/mppUpper.o
  LD [M]  /root/linuxrdac-09.03.0C05.0638/mppVhba.o
  Building modules, stage 2.
  MODPOST
  CC      /root/linuxrdac-09.03.0C05.0638/mppUpper.mod.o
  LD [M]  /root/linuxrdac-09.03.0C05.0638/mppUpper.ko
  CC      /root/linuxrdac-09.03.0C05.0638/mppVhba.mod.o
  LD [M]  /root/linuxrdac-09.03.0C05.0638/mppVhba.ko
make[1]: Leaving directory `/usr/src/kernels/2.6.18-194.el5-x86_64'
make[1]: Entering directory `/usr/src/kernels/2.6.18-194.el5-x86_64'
  Building modules, stage 2.
  MODPOST
make[1]: Leaving directory `/usr/src/kernels/2.6.18-194.el5-x86_64'
gcc  -I/root/linuxrdac-09.03.0C05.0638 -I/root/linuxrdac-09.03.0C05.0638/mpp_linux_headers/ -I/root/linuxrdac-09.03.0C05.0638/mpp_linux_sys_headers/ -c ./utility/mppUtil.c  -o mppUtil.o
/bin/bash ./genfileattributes bld
gcc  -I/root/linuxrdac-09.03.0C05.0638 -I/root/linuxrdac-09.03.0C05.0638/mpp_linux_headers/ -I/root/linuxrdac-09.03.0C05.0638/mpp_linux_sys_headers/ -c ./utility/mppUtil26p_sysdep.c -o mppUtilSysdep.o
gcc mppUtil.o mppUtilSysdep.o -o mppUtil
gcc -o genuniqueid genuniqueid.c
[root@node1 linuxrdac-09.03.0C05.0638]# make install
make[1]: Entering directory `/root/linuxrdac-09.03.0C05.0638'
make[2]: Entering directory `/usr/src/kernels/2.6.18-194.el5-x86_64'
  Building modules, stage 2.
  MODPOST
make[2]: Leaving directory `/usr/src/kernels/2.6.18-194.el5-x86_64'
make[2]: Entering directory `/usr/src/kernels/2.6.18-194.el5-x86_64'
  Building modules, stage 2.
  MODPOST
make[2]: Leaving directory `/usr/src/kernels/2.6.18-194.el5-x86_64'
/bin/bash ./genfileattributes bld
gcc  -I/root/linuxrdac-09.03.0C05.0638 -I/root/linuxrdac-09.03.0C05.0638/mpp_linux_headers/ -I/root/linuxrdac-09.03.0C05.0638/mpp_linux_sys_headers/ -c ./utility/mppUtil26p_sysdep.c -o mppUtilSysdep.o
gcc mppUtil.o mppUtilSysdep.o -o mppUtil
Checking Host Adapter Configuration...
Detected 1 Emulex Host Adapter Port(s) on the system
Please wait while we modify the system configuration files.
Your kernel version is 2.6.18-194.el5
Preparing to install MPP driver against this kernel version...
Generating module dependencies...
Creating new MPP initrd image...
        You must now edit your boot loader configuration file, /boot/grub/menu.lst, to 
        add a new boot menu, which uses mpp-2.6.18-194.el5.img as the initrd image.
        Now Reboot the system for MPP to take effect.
        The new boot menu entry should look something like this (note that it may 
        vary with different system configuration):

        ...

                title Red Hat Linux (2.6.18-194.el5) with MPP support
                root (hd0,5)
                kernel /vmlinuz-2.6.18-194.el5 ro root=LABEL=RH9
                initrd /mpp-2.6.18-194.el5.img
        ...
MPP driver package has been successfully installed on your system.
make[1]: Leaving directory `/root/linuxrdac-09.03.0C05.0638'
[root@node1 linuxrdac-09.03.0C05.0638]#

 

     4 The problems were solved, but I am also keenly aware of my own gaps. A line to spur myself on: the revolution is not yet complete; comrades, we must keep striving!

Hitting ORA-600 [kmgs_parameter_update_timeout_1] [1565], Continued

Right after publishing the previous post, I found some helpful information in an article by Laoyang, "http://blog.itpub.net/post/468/450451?SelectActiveLayout=a". In this case, the alert log entry just before the ORA-600 was thrown reads:

Wed Jul 25 09:56:46  2012
Thread 1 advanced to log sequence 580 (LGWR switch)
  Current log# 1 seq# 580 mem# 0: +DATA/zhfr8db/onlinelog/group_1.271.783424599
  Current log# 1 seq# 580 mem# 1: +FLASH/zhfr8db/onlinelog/group_1.256.783424601
Wed Jul 25 10:49:39  2012
Unexpected communication failure with ASM instance:
 error 21561 (ORA-21561: 生成 OID 失败
)

That is, an unexpected communication failure with the ASM instance, reporting ORA-21561 (failed to generate OID).

Later in the alert log, similar errors also appear among those raised while attempting to shut down the database server:

Wed Jul 25 10:55:03  2012
Trace dumping is performing id=[cdmp_20120725105503]
Wed Jul 25 10:56:06  2012
Restarting dead background process MMON
MMON started with pid=33, OS id=6756
Wed Jul 25 12:46:20  2012
Unexpected communication failure with ASM instance:
 error 21561 (ORA-21561: 生成 OID 失败
)
NOTE: ASMB process state dumped to trace file c:\oracle\product\10.2.0\admin\zhfr8db\udump\zhfr8db1_ora_5552.trc
Wed Jul 25 12:47:03  2012
Unexpected communication failure with ASM instance:
 error 21561 (ORA-21561: 生成 OID 失败
)
NOTE: ASMB process state dumped to trace file c:\oracle\product\10.2.0\admin\zhfr8db\udump\zhfr8db1_ora_2008.trc
Wed Jul 25 12:50:01  2012
Unexpected communication failure with ASM instance:
 error 21561 (ORA-21561: 生成 OID 失败
)

The trace file c:\oracle\product\10.2.0\admin\zhfr8db\udump\zhfr8db1_ora_5552.trc then shows:

*** 2012-07-25 12:46:20.268
*** CLIENT ID:() 2012-07-25 12:46:20.268
      ----------------------------------------
      SO: 000000047111DEF0, type: 2, owner: 0000000000000000, flag: INIT/-/-/0x00
      (process) Oracle pid=31, calls cur/top: 000000047E15DC20/000000047E15DC20, flag: (6) SYSTEM
                int error: 0, call error: 0, sess error: 0, txn error 0
  (post info) last post received: 0 0 33
              last post received-location: ksrpublish
              last process to post me: 7e11e6f8 1 6
              last post sent: 849 0 4
              last post sent-location: kslpsr
              last process posted by me: 7312aef8 1 6
        (latch info) wait_event=0 bits=0
        Process Group: DEFAULT, pseudo proc: 00000004731384B8
        O/S info: user: SYSTEM, term: DATACENTER01, ospid: 4828 
        OSD pid info: Windows thread id: 4828, image: ORACLE.EXE (ASMB)
        Short stack dump: 
ksdxfstk+42<-ksdxcb+1630<-ssthreadsrgruncallback+589<-OracleOradebugThreadStart+975<-0000000077D6B71A
<-0000000077EF047A<-0000000077DA79F3<-0000000008653328<-000000000865190C<-0000000005F564A9
<-0000000005F0CF64<-0000000005EE5D88<-0000000005EE57F9<-0000000005EA5ECB<-ttcdrv+14881
<-0000000005EAAA6D<-xupirtrc+1335<-xupirtr+216<-upirtr+23<-kpurcs+45
<-OCIKDispatch+32<-kfnOpExecute+146<-kfnbRun+1062<-ksbrdp+988<-opirip+700
<-opidrv+856<-sou2o+52<-opimai_real+268<-opimai+96<-BackgroundThreadStart+637<-0000000077D6B71A
        ----------------------------------------
        SO: 000000047114E1A0, type: 4, owner: 000000047111DEF0, flag: INIT/-/-/0x00
        (session) sid: 189 trans: 0000000000000000, creator: 000000047111DEF0, flag: (51) USR/- BSY/-/-/-/-/-
                  DID: 0001-001F-00000003, short-term DID: 0000-0000-00000000
                  txn branch: 0000000000000000
                  oct: 0, prv: 0, sql: 0000000000000000, psql: 0000000000000000, user: 0/SYS
        service name: SYS$BACKGROUND
        waiting for 'ASM background timer' wait_time=0, seconds since wait started=420165
                    =0, =0, =0
                    blocking sess=0x0000000000000000 seq=31
        Dumping Session Wait History
         for 'ASM background timer' count=1 wait_time=4.999949 sec
                    =0, =0, =0
         for 'ASM background timer' count=1 wait_time=4.999893 sec
                    =0, =0, =0
         for 'ASM background timer' count=1 wait_time=5.000022 sec
                    =0, =0, =0
         for 'ASM background timer' count=1 wait_time=4.999948 sec
                    =0, =0, =0
         for 'ASM background timer' count=1 wait_time=4.999924 sec
                    =0, =0, =0
         for 'ASM background timer' count=1 wait_time=5.000012 sec
                    =0, =0, =0
         for 'ASM background timer' count=1 wait_time=4.999948 sec
                    =0, =0, =0
         for 'ASM background timer' count=1 wait_time=4.999858 sec
                    =0, =0, =0
         for 'ASM background timer' count=1 wait_time=4.999991 sec
                    =0, =0, =0
         for 'ASM background timer' count=1 wait_time=5.000000 sec
                    =0, =0, =0
        Sampled Session History of session 189 serial 1
        ---------------------------------------------------
        The sampled session history is constructed by sampling
        the target session every 1 second. The sampling process
        captures at each sample if the session is in a non-idle wait,
        an idle wait, or not in a wait. If the session is in a
        non-idle wait then one interval is shown for all the samples
        the session was in the same non-idle wait. If the
        session is in an idle wait or not in a wait for
        consecutive samples then one interval is shown for all
        the consecutive samples. Though we display these consecutive
        samples  in a single interval the session may NOT be continuously
        idle or not in a wait (the sampling process does not know).
 
        The history is displayed in reverse chronological order.
 
        sample interval: 1 sec, max history 120 sec
        ---------------------------------------------------
          [121 samples,                                    12:44:20 - 12:46:20]
            idle wait at each sample
        temporary object counter: 0
          KTU Session Commit Cache Dump for IDLs: 
          KTU Session Commit Cache Dump for Non-IDLs: 
          ----------------------------------------
          UOL used : 0 locks(used=0, free=0)
          KGX Atomic Operation Log 000000047AECC840
           Mutex 0000000000000000(0, 0) idn 0 oper NONE
           Cursor Pin uid 189 efd 3 whr 11 slp 0
          KGX Atomic Operation Log 000000047AECC888
           Mutex 0000000000000000(0, 0) idn 0 oper NONE
           Library Cache uid 189 efd 0 whr 0 slp 0
          KGX Atomic Operation Log 000000047AECC8D0
           Mutex 0000000000000000(0, 0) idn 0 oper NONE
           Library Cache uid 189 efd 0 whr 0 slp 0
          ----------------------------------------
          SO: 000000045A233D80, type: 41, owner: 000000047114E1A0, flag: INIT/-/-/0x00
          (dummy) nxc=0, nlb=0   
        ----------------------------------------
        SO: 0000000472172A40, type: 11, owner: 000000047111DEF0, flag: INIT/-/-/0x00
        (broadcast handle) flag: (2) ACTIVE SUBSCRIBER, owner: 000000047111DEF0,
                           event: 31, last message event: 31,
                           last message waited event: 31,                            next message: 0000000476225BC8(0), messages read: 0
                           channel: (00000004711640E0) system events broadcast channel
                                    scope: 2, event: 30690, last mesage event: 7413,
                                    publishers/subscribers: 1/45,
                                    messages published: 3
                                    oldest msg (?): 0000000476225BB8 id: 1 pub: 000000047E11F768
                                    heuristic msg queue length: 3
        ----------------------------------------
        SO: 0000000472270FA0, type: 19, owner: 000000047111DEF0, flag: INIT/-/-/0x00
         GES MSG BUFFERS: st=emp chunk=0x0000000000000000 hdr=0x0000000000000000 lnk=0x0000000000000000 flags=0x0 inc=0
          outq=0 sndq=0 opid=0 prmb=0x0 
          mbg[i]=(0 0) mbg[b]=(0 0) mbg[r]=(0 0)
          fmq[i]=(0 0) fmq[b]=(0 0) fmq[r]=(0 0)
          mop[s]=0 mop[q]=0 pendq=0 zmbq=0
          nonksxp_recvs=0
        ------------process 0x0000000472270FA0--------------------
        proc version      : 0
        Local node        : 0
        pid               : 4828
        lkp_node          : 0
        svr_mode          : 0
        proc state        : KJP_FROZEN
        Last drm hb acked : 0
        Total accesses    : 3
        Imm.  accesses    : 0
        Locks on ASTQ     : 0
        Locks Pending AST : 0
        Granted locks     : 0
        AST_Q: 
        PENDING_Q: 
        GRANTED_Q: 
        ----------------------------------------
        SO: 000000047E15DC20, type: 3, owner: 000000047111DEF0, flag: INIT/-/-/0x00
        (call) sess: cur 7114e1a0, rec 0, usr 7114e1a0; depth: 0
          ----------------------------------------
          SO: 000000045ED23770, type: 84, owner: 000000047E15DC20, flag: INIT/-/-/0x00
          (kfgso) flags: 00000000 clt: 3 err: 0 hint: 0
          (kfgpn) rpi: 1 itrn:0000000000000000 gst:0000000000000000 usrp:0000000000000000
          busy: 0 rep: 0 grp: 5d60b840 check: 0/0 glink: 5d60b888 5d60b888
            kfgrp:  number: 0/0 type: 0 compat: 0.0.0.0.0 dbcompat:0.0.0.0.0
            timestamp: 0 state: 0 flags: 2 gpnlist: 5ed237f0 5ed237f0
            KFGPN at 5ed23770 in dependent chain
        ----------------------------------------
        SO: 000000045AEEDD48, type: 16, owner: 000000047111DEF0, flag: INIT/-/-/0x00
        (osp req holder)
PSO child state object changes :
Dump of memory from 0x0000000474167DC0 to 0x0000000474167FC8
474167DC0 00000005 00000000 5AEEDD48 00000004  [........H..Z....]
474167DD0 00000010 000313F4 7E15DC20 00000004  [........ ..~....]
474167DE0 00000003 000313F4 72270FA0 00000004  [..........'r....]
474167DF0 00000013 000312CB 72172A40 00000004  [........@*.r....]
474167E00 0000000B 000313F4 7114E1A0 00000004  [...........q....]
474167E10 00000004 000312CB 00000000 00000000  [................]
474167E20 00000000 00000000 00000000 00000000  [................]
        Repeat 25 times
474167FC0 00000000 00000000                    [........]        
*** 2012-07-25 12:46:37.393
*** CLIENT ID:() 2012-07-25 12:46:37.393
WARNING:Could not lower the asynch I/O limit to 256 for SQL direct I/O. It is set to -1 
WARNING:Could not lower the asynch I/O limit to 320 for SQL direct I/O. It is set to -1 
WARNING:Could not lower the asynch I/O limit to 256 for SQL direct I/O. It is set to -1 
WARNING:Could not lower the asynch I/O limit to 256 for SQL direct I/O. It is set to -1 
WARNING:Could not lower the asynch I/O limit to 288 for SQL direct I/O. It is set to -1 
WARNING:Could not lower the asynch I/O limit to 224 for SQL direct I/O. It is set to -1 
WARNING:Could not lower the asynch I/O limit to 256 for SQL direct I/O. It is set to -1 
WARNING:Could not lower the asynch I/O limit to 192 for SQL direct I/O. It is set to -1 
WARNING:Could not lower the asynch I/O limit to 160 for SQL direct I/O. It is set to -1 
*** 2012-07-25 12:47:20.314
*** CLIENT ID:() 2012-07-25 12:47:20.314
WARNING:Could not lower the asynch I/O limit to 256 for SQL direct I/O. It is set to -1

Could the problem have been caused by these warnings:

WARNING:Could not lower the asynch I/O limit to 256 for SQL direct I/O. It is set to -1 
WARNING:Could not lower the asynch I/O limit to 320 for SQL direct I/O. It is set to -1 
WARNING:Could not lower the asynch I/O limit to 256 for SQL direct I/O. It is set to -1 
WARNING:Could not lower the asynch I/O limit to 256 for SQL direct I/O. It is set to -1 
WARNING:Could not lower the asynch I/O limit to 288 for SQL direct I/O. It is set to -1 
WARNING:Could not lower the asynch I/O limit to 224 for SQL direct I/O. It is set to -1 
WARNING:Could not lower the asynch I/O limit to 256 for SQL direct I/O. It is set to -1 
WARNING:Could not lower the asynch I/O limit to 192 for SQL direct I/O. It is set to -1 
WARNING:Could not lower the asynch I/O limit to 160 for SQL direct I/O. It is set to -1

And if so, what exactly do these WARNING messages indicate?

Hitting ORA-600 [kmgs_parameter_update_timeout_1] [1565]

This morning, on a two-node 10.2.0.5.0 64-bit RAC database running on Windows 2003 x64, I hit an ORA-600 error: ORA-00600: internal error code, arguments: [kmgs_parameter_update_timeout_1], [1565], [], [], [], [], [], []. The details from node 1's alert log are as follows:

Wed Jul 25 10:49:39  2012
Unexpected communication failure with ASM instance:
 error 21561 (ORA-21561: 生成 OID 失败
)
NOTE: ASMB process state dumped to trace file c:\oracle\product\10.2.0\admin\zhfr8db\bdump\zhfr8db1_mmon_4624.trc
System State dumped to trace file c:\oracle\product\10.2.0\admin\zhfr8db\bdump\zhfr8db1_mmon_4624.trc
Wed Jul 25 10:55:02  2012
Errors in file c:\oracle\product\10.2.0\admin\zhfr8db\bdump\zhfr8db1_mmon_4624.trc:
ORA-00600: 内部错误代码, 参数: [kmgs_parameter_update_timeout_1], [1565], [], [], [], [], [], []
ORA-01565: 标识文件 '+DATA/zhfr8db/spfilezhfr8db.ora' 时出错
ORA-17503: ksfdopn: 2 未能打开文件 +DATA/zhfr8db/spfilezhfr8db.ora
ORA-21561: 生成 OID 失败

Wed Jul 25 10:55:03  2012
Trace dumping is performing id=[cdmp_20120725105503]
Wed Jul 25 10:56:06  2012
Restarting dead background process MMON
MMON started with pid=33, OS id=6756

The symptoms at the time: clients could not reach the database through the application, and tnsping service_name from the client side behaved the same way, sometimes getting through in about 10 ms and sometimes hanging.

Logging on to one of the node servers (node 1) and running lsnrctl status, the listener check also hung, while on the other node (node 2) lsnrctl status was perfectly normal. Connecting to the database through SQL*Plus worked fine on both nodes, and crs_stat -t returned normal results on both as well.
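
For reference, the checks described above amount to no more than the following on each node (a sketch in transcript form; the v$instance query is just an illustrative sanity check of the SQL*Plus connection):

C:\> lsnrctl status
C:\> crs_stat -t
C:\> sqlplus / as sysdba
SQL> select instance_name, status from v$instance;
SQL> exit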

Next, I generated an AWR report on each node; analysis of the AWR data showed nothing abnormal in the database.

Since the alert log showed SPFILE-related errors, I tried create pfile from spfile in SQL*Plus and backup spfile with RMAN; both failed. In the rush I did not write down the exact error numbers.
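
The attempts looked roughly like this (the c:\temp file names are only illustrative; as noted, the exact error numbers were not recorded):

SQL> create pfile='c:\temp\initzhfr8db1.ora' from spfile;

RMAN> backup spfile format 'c:\temp\spfile_%d_%T.bkp';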

With no other options, on node 1 I tried to stop the database with srvctl stop database -d db_unique_name, which hung with no response at all; a manual shutdown immediate of the instance from SQL*Plus on node 1 did not succeed either.
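
In command form, the two attempts that hung (the database name is taken from the trace file paths above):

C:\> srvctl stop database -d zhfr8db
SQL> shutdown immediate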

At the time I found articles online by Lao Yang and Xifenfei, but neither seemed to match the situation I was seeing.

Then, lacking any better option, I simply rebooted the Windows server for node 1; after all, the application was already unavailable, and with a RAC architecture at least one instance would remain. After the reboot, the database returned to normal.

Finally, I found the following note on MetaLink: ORA-600 [kmgs_parameter_update_timeout_1], [1565] While Accessing Spfile Managed By ASM [ID 553915.1]. It states that the error affects databases on 10.2 and later, and gives the cause as:

This is due to unpublished bug 5399699 where ORA-600 [kmgs_parameter_update_timeout_1] or similar errors can occur in MMON when ASM is being used.

In 10g MMON manages all memory re-size activity by modifying related parameters. If MMON is not running DBW0 will handle this task. The parameter update activity is triggered by a timeout. Basically this error indicates that the MMON process is not able to write to the SPFILE to store some settings required for dynamic SGA parameter adjustments.

In other words, this is an unpublished Oracle bug, number 5399699. In 10g, memory resizing is managed dynamically by the MMON process; MMON (Memory Monitor) is a background process introduced in 10g. Starting with 10g, the database can tune the SGA automatically: when a component needs to grow or shrink, MMON carries out the change and records the new settings in the SPFILE.

Putting this together with the alert log shown at the beginning: MMON could not write the resize information to the SPFILE, the MMON background process then hung, and at Wed Jul 25 10:56:06 2012 it was restarted. From then on the database stayed in a half-hung, "playing dead" state.
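
This automatic resize activity can be observed directly: 10.2 exposes it through the v$sga_resize_ops view, and a query along these lines lists the recent grow/shrink operations and their status:

SQL> select component, oper_type, parameter, status,
            initial_size/1024/1024 as initial_mb,
            final_size/1024/1024   as final_mb,
            start_time
       from v$sga_resize_ops
      order by start_time desc;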

The solution given in that MetaLink note:

Solution

1.  Upgrade to the 10.2.0.4.4 PSU Patch:9352164 or higher where this bug is fixed.

OR

2. Check if Patch:5399699 is available for your RDBMS release and platform.

OR

3.  Use one of the following workarounds:

  • Relocate the spfile either to some other diskgroup on which the archive logs are not being managed.
  • Move the spfile to the file system

Clearly, option 1 does not match the database version in this environment; for option 2, no patch is available for the Windows 2003 x64 platform; and for option 3, since this is a RAC database, moving the SPFILE to a file system is not really appropriate unless each instance is given its own PFILE.
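
For completeness, a rough sketch of workaround 3 as it would apply here, assuming a diskgroup that does not hold the archived logs (the name +DG2 and the c:\temp path are purely illustrative):

SQL> create pfile='c:\temp\initzhfr8db.ora' from spfile;
SQL> create spfile='+DG2/zhfr8db/spfilezhfr8db.ora' from pfile='c:\temp\initzhfr8db.ora';

Each instance's local init<SID>.ora under %ORACLE_HOME%\database would then contain a single SPFILE='+DG2/zhfr8db/spfilezhfr8db.ora' line, so both RAC instances keep sharing one server parameter file.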

In the end this problem remains a headache for an Oracle novice like me, and I have no better solution; if you have run into a similar case, please do not hesitate to share your experience!

 

A record of dynamically extending Oracle storage on an IBM P750

This post describes in detail an online storage extension for an Oracle database on IBM P750 servers.

Background: two IBM P750 servers form an HA pair through HACMP and run a single-instance Oracle 11gR2 database. Besides the two internal 300 GB disks in each server, shared storage is an IBM DS5100 array, which provides 2.1 TB of usable space after RAID 10. The host currently has two volume groups: rootvg and datavg. rootvg holds the AIX operating system and is built on the two internal disks; datavg is dedicated to the Oracle database and its physical volumes come from the array. Within datavg, the logical volume mounted at /oradata stores the database's data files, online redo logs, control files, and so on, while /oraflash mainly holds archived logs and RMAN backups.

1 Before adding anything, check the current file system usage:

$ df -g
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/hd4           1.00      0.78   23%    10542     6% /
/dev/hd2          10.00      7.48   26%    52002     3% /usr
/dev/hd9var        5.00      4.45   12%     8742     1% /var
/dev/hd3          10.00      7.22   28%      729     1% /tmp
/dev/hd1           0.50      0.49    2%      135     1% /home
/dev/hd11admin      0.50      0.50    1%        5     1% /admin
/proc                 -         -    -         -     -  /proc
/dev/hd10opt       0.50      0.23   55%    10267    16% /opt
/dev/livedump      0.50      0.50    1%        4     1% /var/adm/ras/livedump
/dev/oracle      100.00     80.71   20%   357020     2% /u01
/dev/oradata     500.00    135.74   73%       39     1% /oradata
/dev/oraflash    500.00     58.62   89%     1186     1% /oraflash
$

From the output above, the /dev/oraflash file system mounted at /oraflash is 500 GB with 58 GB free, and the /dev/oradata file system mounted at /oradata is 500 GB with 135 GB free. With business volume growing rapidly, more space is needed, so the plan is to grow /oradata and /oraflash by 300 GB each.
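
Since datavg uses a 128 MB PP size (see step 5 below), 300 GB per file system works out to 2400 physical partitions each, or 4800 PPs in total; a quick check of the arithmetic:

$ echo $(( 300 * 1024 / 128 ))
2400
$ echo $(( 2 * 2400 ))
4800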

2 Before extending, check the volume group information:

$ lsvg -o
datavg
rootvg
$ 

This shows that the currently varied-on volume groups are rootvg and datavg.

3 Check the physical volumes in datavg:

$ lsvg -p datavg
datavg:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk2            active            1599        0           00..00..00..00..00
hdisk3            active            1599        0           00..00..00..00..00
hdisk4            active            1199        397         00..00..00..157..240
hdisk5            active            1599        1598        320..319..319..320..320
hdisk6            active            1599        1599        320..320..319..320..320
hdisk7            active            1599        0           00..00..00..00..00
hdisk8            active            1599        0           00..00..00..00..00
hdisk9            active            1599        797         157..00..00..320..320
hdisk10           active            1599        1599        320..320..319..320..320
hdisk11           active            1599        1599        320..320..319..320..320
$ 

4 Check the logical volumes in datavg:

$ lsvg -l datavg
datavg:
LV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT
oradata             jfs2       4000    4000    3    open/syncd    /oradata
loglv01             jfs2log    1       1       1    open/syncd    N/A
oraflash            jfs2       4000    4000    3    open/syncd    /oraflash
$

As shown above, both the oradata and oraflash logical volumes live in the datavg volume group.

5 Next, look at datavg in detail:

$ lsvg datavg
VOLUME GROUP:       datavg                   VG IDENTIFIER:  00f64a5100004c000000012d5e49cf72
VG STATE:           active                   PP SIZE:        128 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      15590 (1995520 megabytes)
MAX LVs:            256                      FREE PPs:       7589 (971392 megabytes)
LVs:                3                        USED PPs:       8001 (1024128 megabytes)
OPEN LVs:           3                        QUORUM:         6 (Enabled)
TOTAL PVs:          10                       VG DESCRIPTORS: 10
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         10                       AUTO ON:        no
MAX PPs per VG:     32768                    MAX PVs:        1024
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
PV RESTRICTION:     none
$

The output shows that datavg has a PP size of 128 MB and 15590 total PPs, which means the volume group holds 128 MB x 15590, roughly 1948 GB, in total; 8001 PPs (about 1000 GB) are used and 7589 PPs (about 948 GB) are free. So the volume group still has room to grow into.

6 Check the oraflash logical volume:

$ lslv oraflash
LOGICAL VOLUME:     oraflash               VOLUME GROUP:   datavg
LV IDENTIFIER:      00f64a5100004c000000012d5e49cf72.3 PERMISSION:     read/write
VG STATE:           active/complete        LV STATE:       opened/syncd
TYPE:               jfs2                   WRITE VERIFY:   off
MAX LPs:            4000                   PP SIZE:        128 megabyte(s)
COPIES:             1                      SCHED POLICY:   parallel
LPs:                4000                   PPs:            4000
STALE PPs:          0                      BB POLICY:      relocatable
INTER-POLICY:       minimum                RELOCATABLE:    yes
INTRA-POLICY:       middle                 UPPER BOUND:    1024
MOUNT POINT:        /oraflash              LABEL:          /oraflash
MIRROR WRITE CONSISTENCY: on/ACTIVE
EACH LP COPY ON A SEPARATE PV ?: yes
Serialize IO ?:     NO
DEVICESUBTYPE : DS_LVZ
COPY 1 MIRROR POOL: None
COPY 2 MIRROR POOL: None
COPY 3 MIRROR POOL: None
$

Here the oraflash logical volume has MAX LPs, LPs, and PPs all equal to 4000. That means the file system can only be grown directly as long as the resulting size stays within 4000 LPs; beyond that, the oraflash logical volume itself must be changed first. The /oraflash file system type is jfs2.
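
Put differently, with a 128 MB PP size the current ceiling of /oraflash is 4000 x 128 MB = 500 GB, which is why the 600 GB attempt in the next step is bound to fail until MAX LPs is raised:

$ lslv oraflash | egrep 'MAX LPs|PP SIZE'
MAX LPs:            4000                   PP SIZE:        128 megabyte(s)
$ echo $(( 4000 * 128 / 1024 ))
500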

7 While we are at it, check which physical volumes oraflash occupies:

$ lslv -l oraflash
oraflash:/oraflash
PV                COPIES        IN BAND       DISTRIBUTION
hdisk7            1599:000:000  20%           320:320:319:320:320
hdisk8            1599:000:000  20%           320:320:319:320:320
hdisk9            802:000:000   39%           163:320:319:000:000
$

8 Try extending the /oraflash file system directly with smitty, adding 100 GB, i.e. growing it from the current 500 GB to 600 GB:

Since /oraflash is a jfs2 file system, run smitty chjfs2 directly as root to enter the smitty screen:

[screenshot: smitty chjfs2 entry screen]

Then select /oraflash and press Enter to go to the next screen:

[screenshot: smitty chjfs2 attributes screen for /oraflash]

Set Unit Size to G and enter 600 for Number of units, i.e. a target size of 600 GB for the file system. Press Enter:

[screenshot: smitty result screen showing the LP limit error]

This fails with an error saying the oraflash logical volume would exceed its maximum of 4000 LPs. The extension is rejected, so the logical volume's attributes have to be changed first.

9 Next, raise the maximum LPs of the oraflash logical volume. The plan is to add 2400 LPs, going from the current 4000 to 6400; 2400 x 128 MB = 300 GB, which needs to be worked out in advance.

As root, run smitty chlv to reach the following screen:

Choose the first item, Change a Logical Volume, and then select the oraflash logical volume:

[screenshot: smitty chlv logical volume selection]

Press Enter to reach the next screen:

[screenshot: smitty chlv attributes screen for oraflash]

Then change MAXIMUM NUMBER of LOGICAL PARTITIONS to 6400 and press Enter.

The command reports success with a status of OK. Press Esc+0 to exit.

10 Extend the /oraflash file system again. As root, run smitty chjfs2, select the /oraflash file system, and again set Unit Size to G, this time entering 800 for Number of units, i.e. a target size of 800 GB. Because the oraflash logical volume's maximum LPs was raised by 2400 in the previous step (2400 LPs x 128 MB per LP = 300 GB), the target can now be 800 GB:

[screenshot: smitty chjfs2 attributes screen, Number of units = 800]

Press Enter to apply the change; this time the extension succeeds.

Do the same for /oradata: use smit chlv to raise the oradata logical volume's MAX LPs to 6400, then run smit chjfs2 against the /oradata file system and grow it to 800 GB (the equivalent non-interactive commands are sketched below).
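
For reference, the same change can also be made without the smitty menus; a sketch of the equivalent commands run as root (smitty chlv and smitty chjfs2 drive these underneath):

# chlv -x 6400 oradata
# chfs -a size=800G /oradata

The same pair of commands against oraflash and /oraflash corresponds to what the smitty sessions above did.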

After the changes, the file system usage looks like this:

# df -g
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/hd4           1.00      0.78   23%    10542     6% /
/dev/hd2          10.00      7.48   26%    52002     3% /usr
/dev/hd9var        5.00      4.45   12%     8742     1% /var
/dev/hd3          10.00      7.22   28%      729     1% /tmp
/dev/hd1           0.50      0.49    2%      135     1% /home
/dev/hd11admin      0.50      0.50    1%        5     1% /admin
/proc                 -         -    -         -     -  /proc
/dev/hd10opt       0.50      0.23   55%    10267    16% /opt
/dev/livedump      0.50      0.50    1%        4     1% /var/adm/ras/livedump
/dev/oracle      100.00     80.71   20%   357030     2% /u01
/dev/oradata     800.00    435.70   46%       39     1% /oradata
/dev/oraflash    800.00    358.54   56%     1187     1% /oraflash
#

/oradata and /oraflash have both grown from 500 GB to 800 GB. At the same time, datavg's FREE PPs have dropped from 7589 to 2789, a decrease of 7589 - 2789 = 4800, which is exactly the 2400 PPs added to each of oradata and oraflash:

# lsvg datavg
VOLUME GROUP:       datavg                   VG IDENTIFIER:  00f64a5100004c000000012d5e49cf72
VG STATE:           active                   PP SIZE:        128 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      15590 (1995520 megabytes)
MAX LVs:            256                      FREE PPs:       2789 (356992 megabytes)
LVs:                3                        USED PPs:       12801 (1638528 megabytes)
OPEN LVs:           3                        QUORUM:         6 (Enabled)
TOTAL PVs:          10                       VG DESCRIPTORS: 10
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         10                       AUTO ON:        no
MAX PPs per VG:     32768                    MAX PVs:        1024
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
PV RESTRICTION:     none
#

Also, the PPs of both oraflash and oradata have grown from 4000 to 6400:

# lsvg -l datavg
datavg:
LV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT
oradata             jfs2       6400    6400    5    open/syncd    /oradata
loglv01             jfs2log    1       1       1    open/syncd    N/A
oraflash            jfs2       6400    6400    5    open/syncd    /oraflash
#

And the physical volume distribution of oraflash and oradata has changed as well:

# lslv -l oradata
oradata:/oradata
PV                COPIES        IN BAND       DISTRIBUTION
hdisk2            1599:000:000  20%           320:320:319:320:320
hdisk3            1599:000:000  20%           320:320:319:320:320
hdisk4            1199:000:000  20%           240:240:239:240:240
hdisk11           1599:000:000  20%           320:320:319:320:320
hdisk5            404:000:000   78%           000:319:085:000:000
# lslv -l oraflash
oraflash:/oraflash
PV                COPIES        IN BAND       DISTRIBUTION
hdisk7            1599:000:000  20%           320:320:319:320:320
hdisk8            1599:000:000  20%           320:320:319:320:320
hdisk9            1599:000:000  20%           320:320:319:320:320
hdisk6            1599:000:000  20%           320:320:319:320:320
hdisk10           004:000:000   100%          000:004:000:000:000
#

This completes the online storage extension for Oracle on the IBM P750 servers, carried out while the database kept running normally.