How to use a partial index in PostgreSQL to optimize a SQL query?

I Case background

The following SQL statement was spotted on the production console:

After formatting, the statement reads:

select '01' status, count(1) from t_ai_prd_item where status = '01' and deleted = false	
union all 
select '02' status, count(1) from t_ai_prd_item where status = '02' and deleted = false

It is a UNION ALL query that takes about 1 second per execution. Is there room for optimization?

II Optimization analysis

For the analysis below, the original table t_ai_prd_item was copied to a new table t1.
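
A minimal sketch of how such a copy can be made (the exact method used originally is not shown):

-- Hedged sketch: copy the production table into a scratch table for analysis.
create table t1 as select * from t_ai_prd_item;
-- Refresh planner statistics on the copy.
analyze t1;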

1 Look at the execution plan and the data distribution:

ai=> explain analyze select '01' status, count(1) from t1 where status = '01' and deleted = false 
ai-> union all 
ai-> select '02' status, count(1) from t1 where status = '02' and deleted = false;
                                                         QUERY PLAN                                                          
-----------------------------------------------------------------------------------------------------------------------------
 Append  (cost=160483.13..320974.20 rows=2 width=40) (actual time=5211.374..6250.940 rows=2 loops=1)
   ->  Aggregate  (cost=160483.13..160483.14 rows=1 width=0) (actual time=5211.374..5211.374 rows=1 loops=1)
         ->  Seq Scan on t1  (cost=0.00..160483.12 rows=1 width=0) (actual time=5211.369..5211.369 rows=0 loops=1)
               Filter: ((NOT deleted) AND ((status)::text = '01'::text))
               Rows Removed by Filter: 1395189
   ->  Aggregate  (cost=160491.03..160491.04 rows=1 width=0) (actual time=1039.563..1039.563 rows=1 loops=1)
         ->  Seq Scan on t1 t1_1  (cost=0.00..160483.12 rows=3163 width=0) (actual time=901.577..1039.307 rows=2835 loops=1)
               Filter: ((NOT deleted) AND ((status)::text = '02'::text))
               Rows Removed by Filter: 1392354
 Planning time: 0.417 ms
 Execution time: 6251.024 ms
(11 rows)

ai=> select count(*) from t_ai_prd_item;                       
  count  
---------
 1395189
(1 row)

ai=> select count(*),status from t_ai_prd_item group by status;
  count  | status 
---------+--------
     364 | 04
      25 | 05
    2835 | 02
 1391965 | 06
(4 rows)

ai=>

The analysis shows that the table contains 1,395,189 rows: 0 rows with status = '01' and 2,835 rows with status = '02'.

2 What happens if we create a B-tree index on the status column?

ai=> create index idx_status on t1(status);
CREATE INDEX
ai=> explain analyze select '01' status, count(1) from t1 where status = '01' and deleted = false 
ai-> union all 
ai-> select '02' status, count(1) from t1 where status = '02' and deleted = false;
                                                               QUERY PLAN                                                               
----------------------------------------------------------------------------------------------------------------------------------------
 Append  (cost=8.45..730.24 rows=2 width=40) (actual time=0.037..3.999 rows=2 loops=1)
   ->  Aggregate  (cost=8.45..8.46 rows=1 width=0) (actual time=0.036..0.036 rows=1 loops=1)
         ->  Index Scan using idx_status on t1  (cost=0.43..8.45 rows=1 width=0) (actual time=0.033..0.033 rows=0 loops=1)
               Index Cond: ((status)::text = '01'::text)
               Filter: (NOT deleted)
   ->  Aggregate  (cost=721.76..721.77 rows=1 width=0) (actual time=3.962..3.962 rows=1 loops=1)
         ->  Index Scan using idx_status on t1 t1_1  (cost=0.43..714.10 rows=3063 width=0) (actual time=0.029..3.673 rows=2835 loops=1)
               Index Cond: ((status)::text = '02'::text)
               Filter: (NOT deleted)
 Planning time: 0.291 ms
 Execution time: 4.067 ms
(11 rows)

ai=>

The improvement is good. But in the actual business scenario, statistics are only ever gathered for status = '01' and status = '02'. Is there an even better way to handle this case?

3 Create a partial index and test it

ai=> create index idx_partial on t1(status) where (status='01' or status='02');
CREATE INDEX
ai=> explain analyze select '01' status, count(1) from t1 where status = '01' and deleted = false 
ai-> union all 
ai-> select '02' status, count(1) from t1 where status = '02' and deleted = false;
                                                               QUERY PLAN                                                               
----------------------------------------------------------------------------------------------------------------------------------------
 Append  (cost=8.30..732.19 rows=2 width=40) (actual time=0.019..3.916 rows=2 loops=1)
   ->  Aggregate  (cost=8.30..8.31 rows=1 width=0) (actual time=0.018..0.018 rows=1 loops=1)
         ->  Index Scan using idx_partial on t1  (cost=0.28..8.30 rows=1 width=0) (actual time=0.015..0.015 rows=0 loops=1)
               Index Cond: ((status)::text = '01'::text)
               Filter: (NOT deleted)
   ->  Aggregate  (cost=723.85..723.86 rows=1 width=0) (actual time=3.897..3.897 rows=1 loops=1)
         ->  Index Scan using idx_partial on t1 t1_1  (cost=0.28..716.20 rows=3063 width=0) (actual time=0.030..3.599 rows=2835 loops=1)
               Index Cond: ((status)::text = '02'::text)
               Filter: (NOT deleted)
 Planning time: 0.234 ms
 Execution time: 3.992 ms
(11 rows)

ai=>

Comparing the two, the execution times are almost identical. However:

ai=> \di+ idx_status 
                         List of relations
 Schema |    Name    | Type  | Owner | Table | Size  | Description 
--------+------------+-------+-------+-------+-------+-------------
 ai     | idx_status | index | ai    | t1    | 30 MB | 
(1 row)

ai=> \di+ idx_partial 
                         List of relations
 Schema |    Name     | Type  | Owner | Table | Size  | Description 
--------+-------------+-------+-------+-------+-------+-------------
 ai     | idx_partial | index | ai    | t1    | 80 kB | 
(1 row)

ai=>

4 Summary:

While delivering the same query performance, the partial index takes only about 1/384 of the storage of the full B-tree index (80 kB vs. 30 MB), greatly reducing the storage overhead.
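
The size difference can also be read straight from the catalog; a minimal sketch, assuming the two indexes created above still exist on t1:

-- Hedged sketch: compare the on-disk size of the full and the partial index.
select indexrelname,
       pg_size_pretty(pg_relation_size(indexrelid)) as index_size
from   pg_stat_user_indexes
where  relname = 't1'
and    indexrelname in ('idx_status', 'idx_partial');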

III About partial indexes

1 What is a partial index?

A partial index is built over a subset of a table's rows; the indexed subset is defined by a predicate that can be computed for each row.

2 When is a partial index appropriate?

It suits tables whose data distribution is "skewed": a few values account for the vast majority of rows, the rest only a small fraction, and the queries of interest target only that small fraction. This case fits exactly: the table holds about 1.4 million rows, rows with status '01' or '02' are only a tiny proportion, and the queries filter on exactly those two statuses.

A partial index excludes the large volume of ordinary rows, or rows the business is simply not interested in. For example, in an online order system you could create a partial index on client IP addresses that fall outside the frequently accessed ranges, or on orders that have been placed but not yet paid. Queries for orders from those unusual IPs, or for placed-but-unpaid orders, then stay efficient, as sketched below.
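
A minimal sketch of the unpaid-order case, with hypothetical table and column names (orders, billed, order_nr) modeled on the official documentation:

-- Hedged sketch: index only the orders that are not yet billed.
create index idx_orders_unbilled_nr
    on orders (order_nr)
    where billed is not true;

-- A query carrying the same predicate can use this partial index:
select * from orders where billed is not true and order_nr = 3501;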

3 What are the advantages of a partial index?

Because it indexes only part of the data, it occupies far less disk space than a conventional B-tree index over the whole table, and rebuilding or maintaining it is also very fast.
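
For example, rebuilding the small partial index from the test above takes only a moment; a sketch:

-- Hedged sketch: rebuilding the partial index created earlier.
reindex index idx_partial;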

4 Reference

Official documentation on partial indexes:
https://www.postgresql.org/docs/9.3/static/indexes-partial.html

A quick summary of hash indexes in PostgreSQL

I Key points about using hash indexes in PostgreSQL (a short sketch follows the list):

  1. They support only equality lookups;
  2. They are not WAL-logged (before PostgreSQL 10), so they must be rebuilt if the database crashes;
  3. Changes to a hash index are not propagated to standbys via streaming or file-based replication;
  4. Multi-column (composite) hash indexes are not supported;
  5. In general, using hash indexes is not recommended.
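
A minimal sketch of creating and using one, with a hypothetical table t_demo and column item_code:

-- Hedged sketch: a hash index only helps equality lookups.
create index idx_demo_hash on t_demo using hash (item_code);

-- This predicate can use the hash index:
select * from t_demo where item_code = 'A001';
-- A range predicate such as item_code > 'A001' cannot.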

II An error caused by a hash index in PostgreSQL

 
### Error updating database.  Cause: org.postgresql.util.PSQLException: ERROR: could not read block 0 in file "base/82560/1801225": read only 0 of 8192 bytes
### The error may involve com.onlyou.platform.form.entity.FormDetailEntityMapper.updateByPrimaryKeySelective-Inline
### The error occurred while setting parameters
### SQL: UPDATE t_onlyou_form_detail SET form_id=?,label=?,base_type=?,is_show=?,sort=?,create_ts=?,create_user_id=?,update_ts=?,update_user_id=? WHERE (id=?)
### Cause: org.postgresql.util.PSQLException: ERROR: could not read block 0 in file "base/82560/1801225": read only 0 of 8192 bytes
; uncategorized SQLException for SQL []; SQL state [XX001]; error code [0]; ERROR: could not read block 0 in file "base/82560/1801225": read only 0 of 8192 bytes; nested exception is org.postgresql.util.PSQLException: ERROR: could not read block 0 in file "base/82560/1801225": read only 0 of 8192 bytes

The cause: the database runs in a primary/standby configuration and a switchover had taken place. The error above appeared on the new primary (the former standby). Investigation found a hash index on the table involved; the fix is to rebuild that hash index, or to replace it with another index type, as sketched below.
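
A sketch of the repair, assuming the damaged index is a hash index named idx_form_hash on t_onlyou_form_detail (the real index name and column are not shown above):

-- Option 1 (hedged sketch): rebuild the damaged hash index in place.
reindex index idx_form_hash;

-- Option 2 (hedged sketch): replace it with a B-tree index; form_id is only a guess at the indexed column.
drop index idx_form_hash;
create index idx_form_detail_form_id on t_onlyou_form_detail (form_id);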

III Summary

This error was encountered some time ago by a project running on an Alibaba Cloud production RDS instance.

The message "ERROR: could not read block 0 in file "base/82560/1801225": read only 0 of 8192 bytes" can easily send us in the wrong direction, making us suspect a storage problem or a corrupted data block, when the actual culprit is the hash index.

Index access paths, part 3: Index Skip Scan


The previous two posts covered Index Unique Scan and Index Range Scan; this one covers Index Skip Scan.

An Index Skip Scan means that the WHERE clause "skips" (does not reference) the leading column of a composite index and filters only on its other columns; the CBO may then choose an Index Skip Scan access path.

The fewer distinct values the leading column has (low cardinality) and the more distinct values the non-leading columns have (high cardinality), the better an Index Skip Scan performs.

Internally, Oracle treats the composite index as a set of logical subindexes; the number of logical subindexes equals the number of distinct values in the leading column.

Scenarios where an Index Skip Scan applies:

1 The index must be a composite index (it may be UNIQUE or NONUNIQUE);
2 The leading column of the composite index has many duplicate values (few distinct values), while the non-leading columns have few duplicates (many distinct values);
3 The WHERE clause does not reference the leading column of the composite index and filters only on the non-leading columns.

Test and verification:

1 Create the test table:

SQL> conn hr/hr
Connected.
SQL> create table skip_t as select object_id,object_name,object_type from all_objects;

Table created.

SQL> select count(*) from skip_t;

  COUNT(*)
----------
     55683

SQL> select count(distinct object_type) from skip_t;

COUNT(DISTINCTOBJECT_TYPE)
--------------------------
                        25

SQL> select count(distinct object_id) from skip_t;

COUNT(DISTINCTOBJECT_ID)
------------------------
                   55683

SQL> select count(distinct object_name) from skip_t;

COUNT(DISTINCTOBJECT_NAME)
--------------------------
                     30902

SQL>

From the above, the skip_t table has 55,683 rows; OBJECT_ID is unique with no duplicates, while OBJECT_TYPE has only 25 distinct values.
2 Next, create a composite index on OBJECT_TYPE and OBJECT_ID and gather statistics:

SQL> create index idx_skip_t on skip_t(object_type,object_id);

Index created.

SQL> exec dbms_stats.gather_table_stats(ownname=>user,tabname=>'skip_t',cascade=>true);

PL/SQL procedure successfully completed.

SQL>

3 "Skip" the leading column of the composite index, so that the CBO chooses the expected Index Skip Scan path:

SQL> set autotrace trace
SQL> select object_type,object_id from skip_t where object_id=30000;

Execution Plan
----------------------------------------------------------
Plan hash value: 2119292092

-------------------------------------------------------------------------------
| Id  | Operation        | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT |            |     1 |    15 |    26   (0)| 00:00:01 |
|*  1 |  INDEX SKIP SCAN | IDX_SKIP_T |     1 |    15 |    26   (0)| 00:00:01 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("OBJECT_ID"=30000)
       filter("OBJECT_ID"=30000)

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
         14  consistent gets
          0  physical reads
          0  redo size
        611  bytes sent via SQL*Net to client
        524  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL> select object_type,object_name from skip_t where object_id=30000;

Execution Plan
----------------------------------------------------------
Plan hash value: 1237951313

------------------------------------------------------------------------------------------
| Id  | Operation                   | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |            |     1 |    40 |    27   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| SKIP_T     |     1 |    40 |    27   (0)| 00:00:01 |
|*  2 |   INDEX SKIP SCAN           | IDX_SKIP_T |     1 |       |    26   (0)| 00:00:01 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("OBJECT_ID"=30000)
       filter("OBJECT_ID"=30000)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
         15  consistent gets
          0  physical reads
          0  redo size
        640  bytes sent via SQL*Net to client
        524  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL>

4 The case where the leading column is used in the predicate:

SQL> select object_type,object_id from skip_t where object_type='TABLE';

136 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 3840590361

-------------------------------------------------------------------------------
| Id  | Operation        | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT |            |  2227 | 33405 |     9   (0)| 00:00:01 |
|*  1 |  INDEX RANGE SCAN| IDX_SKIP_T |  2227 | 33405 |     9   (0)| 00:00:01 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("OBJECT_TYPE"='TABLE')

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
         12  consistent gets
          0  physical reads
          0  redo size
       3592  bytes sent via SQL*Net to client
        623  bytes received via SQL*Net from client
         11  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
        136  rows processed

SQL> select object_type,object_name from skip_t where object_type='TABLE';

136 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 1483414757

------------------------------------------------------------------------------------------
| Id  | Operation                   | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |            |  2227 | 80172 |    43   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| SKIP_T     |  2227 | 80172 |    43   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | IDX_SKIP_T |  2227 |       |     9   (0)| 00:00:01 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("OBJECT_TYPE"='TABLE')

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
         40  consistent gets
          0  physical reads
          0  redo size
       6671  bytes sent via SQL*Net to client
        623  bytes received via SQL*Net from client
         11  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
        136  rows processed

SQL>

Index access paths, part 2: Index Range Scan


The previous post, Index access paths, part 1: Index Unique Scan, described in detail when Oracle's CBO chooses an Index Unique Scan to access data.

This post looks at when the CBO chooses an Index Range Scan instead.

An Index Range Scan is typically used to fetch selective data through an index. By default, Oracle stores index entries in ascending order of the indexed column; entries with duplicate key values are stored in ascending ROWID order.

Common WHERE conditions that lead to an Index Range Scan include:

Col1=:b1

Col1>:b1

Col1>=:b1

Col1<:b1

Col1<=:b1

or several of the above conditions on the leading column(s) combined with AND, or a BETWEEN ... AND condition in the WHERE clause.

As before, here is the verification:

The OS, database version, platform, and test table are the same as in the earlier posts: an 11.2.0.1.0 database on Linux x86_64, using the departments table in the HR schema.

1 A unique index where the CBO still chooses an Index Range Scan

SQL> set autotrace trace
SQL> select * from departments where department_id>260;

Execution Plan
----------------------------------------------------------
Plan hash value: 3346631158

-------------------------------------------------------------------------------------------
| Id  | Operation                   | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |             |     1 |    20 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| DEPARTMENTS |     1 |    20 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | DEPT_ID_PK  |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("DEPARTMENT_ID">260)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          2  consistent gets
          0  physical reads
          0  redo size
        743  bytes sent via SQL*Net to client
        492  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL>

Even though only one row is fetched through the unique index (there is indeed exactly one row with department_id > 260), the optimizer chose an INDEX RANGE SCAN: the SQL does not satisfy the Index Unique Scan conditions, but it does satisfy the INDEX RANGE SCAN conditions.

2 Next, a typical INDEX RANGE SCAN scenario:

SQL> select * from departments where location_id=1700;

21 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 735461860

------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                  |    21 |   420 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| DEPARTMENTS      |    21 |   420 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | DEPT_LOCATION_IX |    21 |       |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("LOCATION_ID"=1700)

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
          6  consistent gets
          1  physical reads
          0  redo size
       1468  bytes sent via SQL*Net to client
        503  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         21  rows processed

SQL>

Or:

SQL> select * from departments where location_id<=1700;

23 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 735461860

------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                  |    24 |   480 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| DEPARTMENTS      |    24 |   480 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | DEPT_LOCATION_IX |    24 |       |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("LOCATION_ID"<=1700)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          6  consistent gets
          0  physical reads
          0  redo size
       1502  bytes sent via SQL*Net to client
        503  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         23  rows processed

SQL>

Or again:

SQL> select * from departments where location_id>=1700;

25 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 735461860

------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                  |    25 |   500 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| DEPARTMENTS      |    25 |   500 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | DEPT_LOCATION_IX |    25 |       |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("LOCATION_ID">=1700)

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
          6  consistent gets
          0  physical reads
          0  redo size
       1561  bytes sent via SQL*Net to client
        503  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         25  rows processed

SQL>

Whether the condition is location_id >= 1700, location_id = 1700, or location_id <= 1700, the CBO picks the DEPT_LOCATION_IX index on location_id and fetches the data with an INDEX RANGE SCAN.

3 Next, the BETWEEN ... AND scenario:

SQL> select department_id,location_id,rowid from departments where location_id between 1500 and 2600;

25 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 735461860

------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                  |    26 |   494 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| DEPARTMENTS      |    26 |   494 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | DEPT_LOCATION_IX |    26 |       |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

     2 - access("LOCATION_ID">=1500 AND "LOCATION_ID"<=2600)

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
          6  consistent gets
          0  physical reads
          0  redo size
       1556  bytes sent via SQL*Net to client
        503  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         25  rows processed

SQL>

What if the BETWEEN ... AND condition is on department_id instead, i.e. a BETWEEN ... AND on a unique index?

SQL> select department_id,location_id,rowid from departments where department_id between 120 and 130;

Execution Plan
----------------------------------------------------------
Plan hash value: 3346631158

-------------------------------------------------------------------------------------------
| Id  | Operation                   | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |             |     3 |    57 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| DEPARTMENTS |     3 |    57 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | DEPT_ID_PK  |     3 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("DEPARTMENT_ID">=120 AND "DEPARTMENT_ID"<=130)

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
          4  consistent gets
          0  physical reads
          0  redo size
        748  bytes sent via SQL*Net to client
        492  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          2  rows processed

SQL>

The test shows that even with a unique index, the CBO still chooses an INDEX RANGE SCAN.
Conversely, with the condition department_id between 120 and 120, the CBO, predictably, chooses an Index Unique Scan.

SQL> select department_id,location_id,rowid from departments where department_id between 120 and 120;

Execution Plan
----------------------------------------------------------
Plan hash value: 4024094692

-------------------------------------------------------------------------------------------
| Id  | Operation                   | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |             |     1 |    19 |     1   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| DEPARTMENTS |     1 |    19 |     1   (0)| 00:00:01 |
|*  2 |   INDEX UNIQUE SCAN         | DEPT_ID_PK  |     1 |       |     0   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("DEPARTMENT_ID"=120)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          2  consistent gets
          0  physical reads
          0  redo size
        673  bytes sent via SQL*Net to client
        492  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL>

4 Finally, a scenario with INDEX RANGE SCAN DESCENDING:

The CBO chooses INDEX RANGE SCAN DESCENDING for queries with an ORDER BY column_name DESC clause. By default, index entries are stored in ascending key order.

Normally, with an ORDER BY column_name DESC clause, Oracle would have to

① first read the data through the index;

② then sort it in descending order of column_name.

With INDEX RANGE SCAN DESCENDING, the index is walked directly in descending key order, which avoids the extra descending sort after the index lookup.

SQL> set autotrace on
SQL> select department_id,location_id,rowid from departments where location_id between 1500 and 2600 order by location_id desc;

DEPARTMENT_ID LOCATION_ID ROWID
------------- ----------- ------------------
           80        2500 AAAz+PABRAAAACbAAH
           40        2400 AAAz+PABRAAAACbAAD
           20        1800 AAAz+PABRAAAACbAAB
          270        1700 AAAz+PABRAAAACbAAa
          260        1700 AAAz+PABRAAAACbAAZ
          250        1700 AAAz+PABRAAAACbAAY
          240        1700 AAAz+PABRAAAACbAAX
          230        1700 AAAz+PABRAAAACbAAW
          220        1700 AAAz+PABRAAAACbAAV
          210        1700 AAAz+PABRAAAACbAAU
          200        1700 AAAz+PABRAAAACbAAT

DEPARTMENT_ID LOCATION_ID ROWID
------------- ----------- ------------------
          190        1700 AAAz+PABRAAAACbAAS
          180        1700 AAAz+PABRAAAACbAAR
          170        1700 AAAz+PABRAAAACbAAQ
          160        1700 AAAz+PABRAAAACbAAP
          150        1700 AAAz+PABRAAAACbAAO
          140        1700 AAAz+PABRAAAACbAAN
          130        1700 AAAz+PABRAAAACbAAM
          120        1700 AAAz+PABRAAAACbAAL
          110        1700 AAAz+PABRAAAACbAAK
          100        1700 AAAz+PABRAAAACbAAJ
           90        1700 AAAz+PABRAAAACbAAI

DEPARTMENT_ID LOCATION_ID ROWID
------------- ----------- ------------------
           30        1700 AAAz+PABRAAAACbAAC
           10        1700 AAAz+PABRAAAACbAAA
           50        1500 AAAz+PABRAAAACbAAE

25 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 2689299823

-------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                  |    26 |   494 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID | DEPARTMENTS      |    26 |   494 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN DESCENDING| DEPT_LOCATION_IX |    26 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("LOCATION_ID">=1500 AND "LOCATION_ID"<=2600)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          6  consistent gets
          0  physical reads
          0  redo size
       1575  bytes sent via SQL*Net to client
        534  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         25  rows processed

SQL>

As shown above, the rows with LOCATION_ID = 1700 are returned in descending ROWID order.

Index access paths, part 1: Index Unique Scan


I have recently been reading Expert Indexing in Oracle Database 11g, which I find very well written and have learned a lot from. Here I summarize some notes on how the CBO accesses data through indexes; consider them reading notes.

In Chapter 7, Tuning Index Usage, the book lists the following common ways of accessing data through an index:

Index Unique Scan;

Index Range Scan;

Index Skip Scan;

Index Full Scan;

Index Fast Full Scan;

Let's start with the Index Unique Scan.

Under what circumstances does the CBO (Cost Based Optimizer) choose an Index Unique Scan to access data?

1 The data must be accessed through a UNIQUE index;

2 The access through the unique index must return exactly one row;

3 The WHERE clause must filter with an equality (=) condition.

My current understanding is that all three conditions must hold simultaneously for the CBO to choose an Index Unique Scan.

Verification follows:

1 OS and database versions:
Operating system:

[oracle@oracle11g ~]$ uname -rm
2.6.18-194.el5 x86_64
[oracle@oracle11g ~]$

Database:

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
PL/SQL Release 11.2.0.1.0 - Production
CORE    11.2.0.1.0      Production
TNS for Linux: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production

SQL>

2 The test table, its structure, and its data:

SQL> conn hr/hr
Connected.
SQL> desc departments;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 DEPARTMENT_ID                             NOT NULL NUMBER(4)
 DEPARTMENT_NAME                           NOT NULL VARCHAR2(30)
 MANAGER_ID                                         NUMBER(6)
 LOCATION_ID                                        NUMBER(4)

SQL> select * from departments;

DEPARTMENT_ID DEPARTMENT_NAME                MANAGER_ID LOCATION_ID
------------- ------------------------------ ---------- -----------
           10 Administration                        200        1700
           20 Marketing                             201        1800
           30 Purchasing                            114        1700
           40 Human Resources                       203        2400
           50 Shipping                              121        1500
           60 IT                                    103        1400
           70 Public Relations                      204        2700
           80 Sales                                 145        2500
           90 Executive                             100        1700
          100 Finance                               108        1700
          110 Accounting                            205        1700

DEPARTMENT_ID DEPARTMENT_NAME                MANAGER_ID LOCATION_ID
------------- ------------------------------ ---------- -----------
          120 Treasury                                         1700
          130 Corporate Tax                                    1700
          140 Control And Credit                               1700
          150 Shareholder Services                             1700
          160 Benefits                                         1700
          170 Manufacturing                                    1700
          180 Construction                                     1700
          190 Contracting                                      1700
          200 Operations                                       1700
          210 IT Support                                       1700
          220 NOC                                              1700

DEPARTMENT_ID DEPARTMENT_NAME                MANAGER_ID LOCATION_ID
------------- ------------------------------ ---------- -----------
          230 IT Helpdesk                                      1700
          240 Government Sales                                 1700
          250 Retail Sales                                     1700
          260 Recruiting                                       1700
          270 Payroll                                          1700

27 rows selected.

SQL>

3 Index information for the table:

SQL> select index_name,index_type,table_name,uniqueness from user_indexes where table_name='DEPARTMENTS';

INDEX_NAME                     INDEX_TYPE                  TABLE_NAME                     UNIQUENES
------------------------------ --------------------------- ------------------------------ ---------
DEPT_ID_PK                     NORMAL                      DEPARTMENTS                    UNIQUE
DEPT_LOCATION_IX               NORMAL                      DEPARTMENTS                    NONUNIQUE

SQL> select * from user_ind_columns where table_name='DEPARTMENTS';

INDEX_NAME                     TABLE_NAME                     COLUMN_NAME     COLUMN_POSITION COLUMN_LENGTH CHAR_LENGTH DESC
------------------------------ ------------------------------ --------------- --------------- ------------- ----------- ----
DEPT_ID_PK                     DEPARTMENTS                    DEPARTMENT_ID                 1            22           0 ASC
DEPT_LOCATION_IX               DEPARTMENTS                    LOCATION_ID                   1            22           0 ASC

SQL>

The departments table has two indexes: DEPT_ID_PK on DEPARTMENT_ID is the primary key index and is UNIQUE, while DEPT_LOCATION_IX on LOCATION_ID is NONUNIQUE.

4 Locate data through the primary key column department_id:

SQL> set autotrace trace
SQL> select department_id from departments where department_id=270;

Execution Plan
----------------------------------------------------------
Plan hash value: 1856623209

--------------------------------------------------------------------------------
| Id  | Operation         | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |            |     1 |     4 |     0   (0)| 00:00:01 |
|*  1 |  INDEX UNIQUE SCAN| DEPT_ID_PK |     1 |     4 |     0   (0)| 00:00:01 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("DEPARTMENT_ID"=270)

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
          1  consistent gets
          0  physical reads
          0  redo size
        532  bytes sent via SQL*Net to client
        524  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL>

The execution plan shows that the CBO chose an Index Unique Scan, because department_id carries the UNIQUE index DEPT_ID_PK and the SQL satisfies the Index Unique Scan conditions.

If the WHERE condition is changed to department_id >= 270, the CBO chooses an Index Range Scan instead, because the condition is >= rather than an equality match; the CBO assumes more than one row may be returned, even though only one row actually qualifies.

SQL> select department_id from departments where department_id>=270;

Execution Plan
----------------------------------------------------------
Plan hash value: 4271874676

-------------------------------------------------------------------------------
| Id  | Operation        | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT |            |     1 |     4 |     1   (0)| 00:00:01 |
|*  1 |  INDEX RANGE SCAN| DEPT_ID_PK |     1 |     4 |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("DEPARTMENT_ID">=270)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          1  consistent gets
          0  physical reads
          0  redo size
        532  bytes sent via SQL*Net to client
        524  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL>

5 Locate data through the location_id column:

SQL> select location_id from departments where location_id=2700;

Execution Plan
----------------------------------------------------------
Plan hash value: 2790119751

-------------------------------------------------------------------------------------
| Id  | Operation        | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT |                  |     1 |     3 |     1   (0)| 00:00:01 |
|*  1 |  INDEX RANGE SCAN| DEPT_LOCATION_IX |     1 |     3 |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("LOCATION_ID"=2700)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          1  consistent gets
          0  physical reads
          0  redo size
        529  bytes sent via SQL*Net to client
        524  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL> select location_id from departments where location_id=1700;

21 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 2790119751

-------------------------------------------------------------------------------------
| Id  | Operation        | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT |                  |    21 |    63 |     1   (0)| 00:00:01 |
|*  1 |  INDEX RANGE SCAN| DEPT_LOCATION_IX |    21 |    63 |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("LOCATION_ID"=1700)

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
          3  consistent gets
          0  physical reads
          0  redo size
        901  bytes sent via SQL*Net to client
        535  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         21  rows processed

SQL>

The plans show that when data is located through DEPT_LOCATION_IX on LOCATION_ID, whether the condition is location_id = 2700, location_id = 1400, or location_id = 1700, the access path is always an INDEX RANGE SCAN. Because DEPT_LOCATION_IX is not unique, the CBO will not choose an Index Unique Scan even when a condition such as location_id = 2700 or location_id = 1400 is guaranteed to return a single row.

6 Locating data through a unique index versus a non-unique index:

The non-unique index case:

SQL> create table all_t as select object_id,object_name from all_objects;

Table created.

SQL> desc all_t;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 OBJECT_ID                                 NOT NULL NUMBER
 OBJECT_NAME                               NOT NULL VARCHAR2(30)

SQL> select table_name,index_name,index_type,uniqueness from user_indexes where table_name='ALL_T';

no rows selected

SQL> create index idx_all_t on all_t(object_id);

Index created.

SQL> select table_name,index_name,index_type,uniqueness from user_indexes where table_name='ALL_T';

TABLE_NAME                     INDEX_NAME                     INDEX_TYPE                  UNIQUENES
------------------------------ ------------------------------ --------------------------- ---------
ALL_T                          IDX_ALL_T                      NORMAL                      NONUNIQUE

SQL> exec dbms_stats.gather_table_stats(user,'all_t',cascade=>true);

PL/SQL procedure successfully completed.

SQL> set autotrace trace
SQL> select * from all_t where object_id=30000;

Execution Plan
----------------------------------------------------------
Plan hash value: 2459308467

-----------------------------------------------------------------------------------------
| Id  | Operation                   | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |           |     1 |    31 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| ALL_T     |     1 |    31 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | IDX_ALL_T |     1 |       |     1   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("OBJECT_ID"=30000)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          4  consistent gets
          0  physical reads
          0  redo size
        627  bytes sent via SQL*Net to client
        523  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL>

The unique index case:

SQL> drop index idx_all_t ;

Index dropped.

SQL> create unique index idx_all_t on all_t(object_id);

Index created.

SQL> select table_name,index_name,index_type,uniqueness from user_indexes where table_name='ALL_T';

TABLE_NAME                     INDEX_NAME                     INDEX_TYPE                  UNIQUENES
------------------------------ ------------------------------ --------------------------- ---------
ALL_T                          IDX_ALL_T                      NORMAL                      UNIQUE

SQL> set autotrace trace
SQL> exec dbms_stats.gather_table_stats(user,'all_t',cascade=>true);

PL/SQL procedure successfully completed.

SQL> select * from all_t where object_id=30000;

Execution Plan
----------------------------------------------------------
Plan hash value: 3932529197

-----------------------------------------------------------------------------------------
| Id  | Operation                   | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |           |     1 |    31 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| ALL_T     |     1 |    31 |     2   (0)| 00:00:01 |
|*  2 |   INDEX UNIQUE SCAN         | IDX_ALL_T |     1 |       |     1   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("OBJECT_ID"=30000)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          3  consistent gets
          0  physical reads
          0  redo size
        491  bytes sent via SQL*Net to client
        512  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL>

Although the object_id column of the test table all_t is in fact unique, in the first case, where IDX_ALL_T is a non-unique index, the CBO chose an INDEX RANGE SCAN; in the second case, with the unique index, it chose an Index Unique Scan.

In other words, even if a column's values are guaranteed to be unique, the CBO will still choose an INDEX RANGE SCAN unless the index on that column is a unique index. This really does happen in production: some systems enforce uniqueness only in application code and never create a unique constraint or unique index on the column, which can lead the CBO to pick a suboptimal plan. In such cases, even if the application guarantees uniqueness, it is safer to also create a unique index on the table; a minimal sketch follows.
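
A minimal sketch of what that looks like on the test table above (either form is sufficient; both are shown only for illustration):

-- Hedged sketch: let the database enforce the uniqueness as well.
-- Either a unique index ...
create unique index idx_all_t on all_t(object_id);
-- ... or a unique constraint, which Oracle backs with an index:
alter table all_t add constraint uk_all_t_object_id unique (object_id);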

Finally, a small question for every reader of this post: in case 6, is the Index Unique Scan of the second scenario really more efficient than the INDEX RANGE SCAN of the first? Judging from the execution statistics, the consistent gets are 3 and 4 respectively.

Resolving an enq: TX - row lock contention performance problem

Last Tuesday morning I received an email from the project team:

"This morning users reported that the EDI import appeared to 'hang' at the times below. We traced the EDI import service and it was running, just slowly, which is probably why users felt it had hung. Please check the database for any 'abnormal' activity at these times, thanks!

2012-11-20 9:10:10 to 9:55:13. Users reported it three times in this window, roughly one apparent hang every 10 to 20 minutes. Please help track it down, thanks!"

This is a Windows RAC environment, the same one as in the earlier post on resolving a performance problem caused by row cache lock. Here is a record of how it was handled:

1 I collected and analyzed an AWR report for that hour. The report header shows a DB Time of nearly 500 minutes, and (DB Time)/Elapsed = 8, which is on the high side:

            Snap Id   Snap Time            Sessions   Cursors/Session
Begin Snap: 15142     20-11月-12 09:00:05   62         5.8
End Snap:   15143     20-11月-12 10:00:56   74         8.3
Elapsed:              60.85 (mins)
DB Time:              492.88 (mins)

2 The Top 5 timed events: the top event is enq: TX - row lock contention, meaning the system suffered fairly severe row-level lock waits during that hour.

Top 5 Timed Events

Event                           Waits       Time(s)  Avg Wait(ms)  % Total Call Time  Wait Class
CPU time                                    21,215                 71.7
enq: TX - row lock contention   12,232      6,013    492           20.3               Application
gc cr multi block request       14,696,067  1,675    0             5.7                Cluster
gc buffer busy                  441,472     719      2             2.4                Cluster
db file sequential read         4,191       25       6             .1                 User I/O

 
Typically, enq: TX - row lock contention can be caused by:
  • different sessions updating or deleting the same row;
  • duplicate values being inserted into a unique index;
  • a bitmap index being updated concurrently, or the same key value being inserted concurrently into a bitmap-indexed column;
  • concurrent updates to rows in the same data block;
  • waits for an index block split to complete.

Meanwhile, the Segments by Row Lock Waits section shows the following:

Segments by Row Lock Waits
  • % of Capture shows % of row lock waits for each top segment compared
    with total row lock waits for all segments captured by the Snapshot

Owner    Tablespace Name  Object Name                 Subobject Name  Obj. Type  Row Lock Waits  % of Capture
SUNISCO  SUNISCO_DATA1    BIND_PROCESS_LOG_REFNO                      INDEX      159             67.66
SUNISCO  FDN_EDI_I01      IDX_EDI_WORK_QUEUE_1                        INDEX      29              12.34
SUNISCO  SUNISCO_DATA1    IND_EDI_CUSTOMER_TYPE_CODE                  INDEX      15              6.38
SUNISCO  SUNISCO_DATA1    IDX_EDI_MESSAGE_1                           INDEX      14              5.96
SUNISCO  FDN_BASE_T01     BSE_NUM_LIST                                TABLE      6               2.55

The row lock waits are concentrated on an index.

3 So what operations were causing the enq: TX - row lock contention waits? Check which sessions in the system were currently waiting on this event:

SQL> select event,sid,p1,p2,p3 from v$session_wait where event='enq: TX - row lock contention';
 
EVENT                                                                   SID         P1         P2         P3
---------------------------------------------------------------- ---------- ---------- ---------- ----------
enq: TX - row lock contention                                           224 1415053316    1441815     144197
enq: TX - row lock contention                                           238 1415053316    1441815     144197
enq: TX - row lock contention                                           247 1415053316    1441815     144197
enq: TX - row lock contention                                           248 1415053316    1441815     144197
enq: TX - row lock contention                                           253 1415053316    1441815     144197
SQL> 

Sessions with SIDs 224, 238, 247, 248, and 253 were waiting on enq: TX - row lock contention.
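
As a complementary check (a hedged sketch, not part of the original troubleshooting; it assumes a release where v$session exposes blocking_session, i.e. 10gR2 or later):

-- Hedged sketch: on RAC, also show who is blocking each waiting session.
select inst_id, sid, serial#, sql_id, blocking_session, event
from   gv$session
where  event = 'enq: TX - row lock contention';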

4 Check which object these sessions were waiting on:

SQL> select ROW_WAIT_OBJ#,ROW_WAIT_FILE#,ROW_WAIT_BLOCK#,ROW_WAIT_ROW# from v$session where event='enq: TX - row lock contention';
 
ROW_WAIT_OBJ# ROW_WAIT_FILE# ROW_WAIT_BLOCK# ROW_WAIT_ROW#
------------- -------------- --------------- -------------
       369195              0               0             0
       369195              0               0             0
       369195              0               0             0
       369195              0               0             0
       369195              0               0             0
       369195              0               0             0
 
6 rows selected
SQL>


5 So what exactly is the object with object id 369195?

SQL> select object_name,object_id from dba_objects where object_id=369195;
 
OBJECT_NAME                          OBJECT_ID
----------------------------------- ----------
BIND_PROCESS_LOG_REFNO                  369195
SQL> select OWNER,OBJECT_NAME,OBJECT_ID,DATA_OBJECT_ID, OBJECT_TYPE from dba_objects where object_name='BIND_PROCESS_LOG_REFNO';
 
OWNER                          OBJECT_NAME                    OBJECT_ID DATA_OBJECT_ID OBJECT_TYPE
------------------------------ ----------------------------- ---------- -------------- -------------------
SUNISCO                        BIND_PROCESS_LOG_REFNO            369195         369195 INDEX
SQL>

The result matches the segment statistics in the AWR report: it is an index owned by the SUNISCO user.

6 Next, what were sessions 224, 238, 247, 248, and 253 executing that led to the enq: TX - row lock contention waits?

SQL> select sid,sql_text from v$session a,v$sql b where sid in(224,238,247,248,253) and (b.sql_id=a.sql_id or b.sql_id=a.prev_sql_id);
 
       SID SQL_TEXT
---------- --------------------------------------------------------------------------------
       224 select count(1)  from EDI_MESSAGE_PROCESS_LOG where   (LOG_ID =  :P_0_0  )
       224 INSERT INTO EDI_MESSAGE_PROCESS_LOG(LOG_ID, SERVICE_TYPE, SERVICE_STATUS, INFO_C
       238 select count(1)  from EDI_MESSAGE_PROCESS_LOG where   (LOG_ID =  :P_0_0  )
       238 INSERT INTO EDI_MESSAGE_PROCESS_LOG(LOG_ID, SERVICE_TYPE, SERVICE_STATUS, INFO_C
       247 INSERT INTO EDI_MESSAGE_PROCESS_LOG(LOG_ID, SERVICE_TYPE, SERVICE_STATUS, REFNO,
       247 INSERT INTO EDI_MESSAGE_PROCESS_LOG(LOG_ID, SERVICE_TYPE, SERVICE_STATUS, REFNO,
       248 INSERT INTO EDI_MESSAGE_PROCESS_LOG (LOG_ID, SERVICE_TYPE, SERVICE_STATUS, REFNO
       248 INSERT INTO EDI_MESSAGE_PROCESS_LOG (LOG_ID, SERVICE_TYPE, SERVICE_STATUS, REFNO
       248 SELECT SEQ_NEWID.NEXTVAL FROM DUAL
       253 SELECT SEQ_NEWID.NEXTVAL FROM DUAL
       253 INSERT INTO EDI_MESSAGE_PROCESS_LOG (LOG_ID, SERVICE_TYPE, SERVICE_STATUS, REFNO
 
11 rows selected
 
SQL>

Several SQL statements with different SQL_IDs were concurrently INSERTing into the EDI_MESSAGE_PROCESS_LOG table.

7 Is there any relationship between the EDI_MESSAGE_PROCESS_LOG table and the BIND_PROCESS_LOG_REFNO index?

SQL> select index_name,table_name,index_type from user_indexes where table_name='EDI_MESSAGE_PROCESS_LOG';

INDEX_NAME                     TABLE_NAME                     INDEX_TYPE
------------------------------ ------------------------------ ----------
PK_EDI_MESSAGE_PROCESS_LOG     EDI_MESSAGE_PROCESS_LOG        NORMAL
ID_EDI_LOG_INPUT_DATE          EDI_MESSAGE_PROCESS_LOG        NORMAL
BIND_PROCESS_LOG_REFNO         EDI_MESSAGE_PROCESS_LOG        BITMAP

SQL> select index_name,table_name,column_name from user_ind_columns where table_name='EDI_MESSAGE_PROCESS_LOG';

INDEX_NAME                     TABLE_NAME                     COLUMN_NAM
------------------------------ ------------------------------ ----------
PK_EDI_MESSAGE_PROCESS_LOG     EDI_MESSAGE_PROCESS_LOG        LOG_ID
ID_EDI_LOG_INPUT_DATE          EDI_MESSAGE_PROCESS_LOG        INPUT_DATE
BIND_PROCESS_LOG_REFNO         EDI_MESSAGE_PROCESS_LOG        REFNO

SQL> select object_name,object_id,object_type,created from user_objects where object_name='BIND_PROCESS_LOG_REFNO';

OBJECT_NAME                     OBJECT_ID OBJECT_TYPE     CREATED
------------------------------ ---------- --------------- -------------------
BIND_PROCESS_LOG_REFNO             369195 INDEX           2012/11/05 10:18:28

SQL> select index_name,index_type from user_indexes where index_name='BIND_PROCESS_LOG_REFNO';

INDEX_NAME                      INDEX_TYPE
------------------------------- -----------
BIND_PROCESS_LOG_REFNO          BITMAP

SQL>

So BIND_PROCESS_LOG_REFNO is a bitmap index on the REFNO column of the EDI_MESSAGE_PROCESS_LOG table, created on 2012/11/05 10:18:28, i.e. only recently.

At this point the problem is essentially clear: the enq: TX - row lock contention waits are caused by the bitmap-index scenario listed above: a bitmap index being updated concurrently, or the same key value being inserted concurrently into a bitmap-indexed column.

8 The fix is therefore simple: drop the bitmap index, which is clearly unsuitable for this workload (a sketch follows). Afterwards, the customer confirmed that one of their DBAs had added this bitmap index earlier because the system seemed slow.
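
The change itself is a one-liner; a sketch (the replacement B-tree index is optional and its name is hypothetical, only needed if queries really filter on REFNO):

-- Hedged sketch: drop the bitmap index that serializes the concurrent INSERTs.
drop index SUNISCO.BIND_PROCESS_LOG_REFNO;

-- Optional: if queries filter on REFNO, a normal B-tree index is the safer choice.
create index SUNISCO.IDX_EDI_LOG_REFNO on SUNISCO.EDI_MESSAGE_PROCESS_LOG (REFNO);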

9 In addition, the ADDM report from that period already gave the same advice:

FINDING 4: 20% impact (6013 seconds)
------------------------------------
发现 SQL 语句正处于行锁定等待。

   RECOMMENDATION 1: Application Analysis, 17% benefit (5131 seconds)
      ACTION: 在 INDEX "SUNISCO.BIND_PROCESS_LOG_REFNO" (对象 ID 为 369195)
         中检测到了严重的行争用。使用指定的阻塞 SQL 语句在应用程序逻辑中跟踪行争
用的起因。
         RELEVANT OBJECT: database object with id 369195
      RATIONALE: SQL_ID 为 "dr4uxu769tmmb" 的 SQL 语句在行锁上被阻塞。
         RELEVANT OBJECT: SQL statement with SQL_ID dr4uxu769tmmb
         INSERT INTO EDI_MESSAGE_PROCESS_LOG(LOG_ID, SERVICE_TYPE,
         SERVICE_STATUS, LOG_DATETIME, REFNO, REF_TYPE, MSG_ID, BL_NO, BL_ID,
         VOYAGE_ID, VESSEL_NAME, IMO_NO, VOYAGE_NO, FUNCTION_TYPE, INPUT_DATE,
         IN_STATUS, SYSTEM_TYPE, ERROR_LOG, FILE_NAME) VALUES ( :B1 , :B2 ,
         :B3 , :B4 , :B5 , :B6 , :B7 , :B8 , :B9 , :B10 , :B11 , :B12 , :B13 ,
         :B14 , :B15 , :B16 , :B17 , :B18 , :B19 )
      RATIONALE: SQL_ID 为 "dxsbgubsb6r4n" 的 SQL 语句在行锁上被阻塞。
         RELEVANT OBJECT: SQL statement with SQL_ID dxsbgubsb6r4n
         INSERT INTO EDI_MESSAGE_PROCESS_LOG(LOG_ID, SERVICE_TYPE,
         SERVICE_STATUS, INFO_CODE, INFORMATION, INFO_LEVEL, LOG_DATETIME,
         REFNO, REF_TYPE, MSG_ID, BL_NO, VOYAGE_ID, VESSEL_NAME, IMO_NO,
         VOYAGE_NO, FUNCTION_TYPE, INPUT_DATE, IN_STATUS, SYSTEM_TYPE,
         ERROR_LOG, FILE_NAME) VALUES ( :B1 , :B2 , :B3 , :B4 , :B5 , :B6 ,
         :B7 , :B8 , :B9 , :B10 , :B11 , :B12 , :B13 , :B14 , :B15 , :B16 ,
         :B17 , :B18 , :B19 , :B20 , :B21 )
      RATIONALE: SQL_ID 为 "b38qhyzvn5bdd" 的 SQL 语句在行锁上被阻塞。
         RELEVANT OBJECT: SQL statement with SQL_ID b38qhyzvn5bdd
         INSERT INTO EDI_MESSAGE_PROCESS_LOG(LOG_ID, SERVICE_TYPE,
         SERVICE_STATUS, LOG_DATETIME, REFNO, REF_TYPE, MSG_ID, BL_NO,
         VOYAGE_ID, VESSEL_NAME, IMO_NO, VOYAGE_NO, FUNCTION_TYPE, INPUT_DATE,
         IN_STATUS, SYSTEM_TYPE, ERROR_LOG, FILE_NAME) VALUES ( :B1 , :B2 ,
         :B3 , :B4 , :B5 , :B6 , :B7 , :B8 , :B9 , :B10 , :B11 , :B12 , :B13 ,
         :B14 , :B15 , :B16 , :B17 , :B18 )
      RATIONALE: SQL_ID 为 "36k2xpx3c6wr5" 的 SQL 语句在行锁上被阻塞。
         RELEVANT OBJECT: SQL statement with SQL_ID 36k2xpx3c6wr5
         INSERT INTO EDI_MESSAGE_PROCESS_LOG(LOG_ID, SERVICE_TYPE,
         SERVICE_STATUS, REFNO, REF_TYPE, MSG_ID, BL_NO, VOYAGE_ID,
         VESSEL_NAME, IMO_NO, VOYAGE_NO, FUNCTION_TYPE, INPUT_DATE, IN_STATUS,
         SYSTEM_TYPE, ERROR_LOG, FILE_NAME) VALUES ( :B1 , :B2 , :B3 , :B4 ,
         :B5 , :B6 , :B7 , :B8 , :B9 , :B10 , :B11 , :B12 , :B13 , :B14 , :B15
         , :B16 , :B17 )
      RATIONALE: 具有 ID "268", 用户 ID "31", 程序 "FC.EdiService.Import.exe" 和模块
         "FC.EdiService.Import.exe" 的会话是构成此建议案中的优化建议的 51% 的阻塞会话。
      RATIONALE: 具有 ID "307", 用户 ID "31", 程序 "FC.EdiService.Import.exe" 和模块
         "FC.EdiService.Import.exe" 的会话是构成此建议案中的优化建议的 11% 的阻塞会话。
      RATIONALE: 具有 ID "227", 用户 ID "31", 程序 "FC.EdiService.Import.exe" 和模块
         "FC.EdiService.Import.exe" 的会话是构成此建议案中的优化建议的 11% 的阻塞会话。
      RATIONALE: 具有 ID "273", 用户 ID "31", 程序 "FC.EdiService.Import.exe" 和模块
         "FC.EdiService.Import.exe" 的会话是构成此建议案中的优化建议的 9% 的阻塞会话。


10 Finally, this case shows that when adding or changing database objects during routine maintenance, rigorous testing is essential, all the more so for changes on production systems. It also shows that a solid grasp of database fundamentals still matters a great deal for a DBA.

 

Oracle index-organized tables, part 3 (final)

5 Key compression for index-organized tables (IOT)

When creating an IOT there is a storage option, NOCOMPRESS, meaning the IOT's index entries are not compressed. The option applies not only to IOTs but also to ordinary B-tree indexes.

Its counterpart is COMPRESS N, where N is an integer giving the number of leading key columns to compress: at the block level Oracle factors out the common prefix so that repeated values are not stored again. For example, if the IOT's primary key is the composite key (col1, col2, col3), then COMPRESS 2 means that when many rows share the same (col1, col2) prefix, Oracle stores that prefix only once instead of repeating it for every row, which achieves the compression.
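
For reference, the compression level can also be specified directly when the IOT is created, instead of via ALTER TABLE ... MOVE as in the example below; a hedged sketch using the same structure:

-- Hedged sketch: create the IOT with key compression from the start.
create table iot_c2
( owner, object_type, object_name,
  constraint iot_c2_pk primary key(owner,object_type,object_name)
)
organization index
compress 2
as
select distinct owner, object_type, object_name
from all_objects;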

Here is an example:

1 Create a NOCOMPRESS IOT:

[oracle@oracle11g ~]$ sqlplus 

SQL*Plus: Release 11.2.0.1.0 Production on Fri Nov 23 14:21:43 2012

Copyright (c) 1982, 2009, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning and Automatic Storage Management options

SQL> conn hr/hr
Connected.
SQL> create table iot
  2  ( owner, object_type, object_name,
  3  constraint iot_pk primary key(owner,object_type,object_name)
  4  )
  5  organization index
  6  NOCOMPRESS
  7  as
  8  select distinct owner, object_type, object_name
  9  from all_objects;

Table created.

SQL> select count(*) from iot;

  COUNT(*)
----------
     55646

SQL> select lf_blks, br_blks, used_space, opt_cmpr_count, opt_cmpr_pctsave from index_stats;

no rows selected

SQL> 

 

The table has 55,646 rows.

2 Analyze the IOT's primary key with ANALYZE INDEX ... VALIDATE STRUCTURE:

SQL> analyze index iot_pk validate structure;

Index analyzed.

SQL> select lf_blks, br_blks, used_space, opt_cmpr_count, opt_cmpr_pctsave from index_stats;

   LF_BLKS    BR_BLKS USED_SPACE OPT_CMPR_COUNT OPT_CMPR_PCTSAVE
---------- ---------- ---------- -------------- ----------------
       347          3    2493786              2               33

SQL> 

 

      The IOT's index uses 347 leaf blocks and 3 branch blocks, occupying 2,493,786 bytes, roughly 2.4 MB.

      3 Rebuild the IOT with the COMPRESS 1 option:

SQL> alter table iot move compress 1;

Table altered.

SQL> select lf_blks, br_blks, used_space, opt_cmpr_count, opt_cmpr_pctsave from index_stats;

no rows selected

SQL> analyze index iot_pk validate structure;

Index analyzed.

SQL> select lf_blks, br_blks, used_space, opt_cmpr_count, opt_cmpr_pctsave from index_stats;

   LF_BLKS    BR_BLKS USED_SPACE OPT_CMPR_COUNT OPT_CMPR_PCTSAVE
---------- ---------- ---------- -------------- ----------------
       304          3    2178229              2               23

SQL> 

 

         The index now uses 304 leaf blocks and 3 branch blocks, occupying 2,178,229 bytes, roughly 2.1 MB.

      4 Compress the IOT further with COMPRESS 2:

SQL> alter table iot move compress 2;

Table altered.

SQL> select lf_blks, br_blks, used_space, opt_cmpr_count, opt_cmpr_pctsave from index_stats;

no rows selected

SQL> analyze index iot_pk validate structure;

Index analyzed.

SQL> select lf_blks, br_blks, used_space, opt_cmpr_count, opt_cmpr_pctsave from index_stats;

   LF_BLKS    BR_BLKS USED_SPACE OPT_CMPR_COUNT OPT_CMPR_PCTSAVE
---------- ---------- ---------- -------------- ----------------
       233          1    1670153              2                0

SQL> 

 

      Finally, the index uses 233 leaf blocks and just 1 branch block, occupying 1,670,153 bytes, roughly 1.6 MB.

      Notes:

      As the compression level increases, the space occupied by the index shrinks step by step, from the initial 2.4 MB down to 1.6 MB. The leaf blocks drop from 347 to 233, and only a single branch block remains.

      In the ANALYZE output above, the OPT_CMPR_COUNT column is the optimal compression prefix length: for this index, COMPRESS 2 gives the best compression.

       The OPT_CMPR_PCTSAVE column is the percentage of space that the optimal compression would save. In the first ANALYZE result, OPT_CMPR_PCTSAVE is 33, i.e. about 33% of the space can be reclaimed. Indeed, the compressed size of 1,670,153 bytes is roughly 2/3 of the original 2,493,786 bytes, a saving of about 1/3:

SQL> select 1670153/2493786 from dual;

1670153/2493786
---------------
     .669725871

SQL>

 

6 Choosing between the INCLUDING and PCTTHRESHOLD parameters of an index-organized table (IOT)

     For an IOT, if the CREATE TABLE statement specifies both the INCLUDING and PCTTHRESHOLD parameters, PCTTHRESHOLD takes precedence.

       PCTTHRESHOLD: when the data in a row exceeds this percentage of the data block, the remaining columns of the row are stored in the OVERFLOW segment;

       INCLUDING: the columns from the first column up to and including the column named in the INCLUDING clause are stored in the index block, and the remaining columns are stored in the OVERFLOW segment.

       For example, with an 8 KB data block size, consider the following IOT:

SQL> create table iot1
  2  (id number,
  3  name char(2000),
  4  addr char(2000),
  5  salary number,
  6  constraint iot1_pk primary key(id,name)
  7  )
  8  organization index
  9  pctthreshold 50
 10  overflow
 11  including addr;

Table created.

SQL> 

 

       With PCTTHRESHOLD set to 50, at most 4 KB of an index block is used to hold the primary-key and non-key columns of a row, and the remaining columns are pushed to the OVERFLOW segment. The INCLUDING option is ignored in this case: the addr column is not kept alongside the primary-key columns, but is stored in the overflow segment instead.
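       As a hedged sketch (output not shown here; the SYS_IOT_OVER_n overflow table name is generated by Oracle and will differ on each system), one way to confirm that an overflow segment exists for IOT1 is to query USER_TABLES:

-- The IOT itself shows IOT_TYPE = 'IOT' and no tablespace (it has no data segment);
-- its overflow table appears as SYS_IOT_OVER_<object_id> with IOT_TYPE = 'IOT_OVERFLOW'.
select table_name, iot_name, iot_type, tablespace_name
from user_tables
where table_name = 'IOT1' or iot_name = 'IOT1';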

       That wraps up this look at IOTs for now; new material on IOTs will be added here later.

Oracle Index-Organized Table Learning Series, Part 2

4 Which scenarios suit an index-organized table (IOT)

       The previous article covered the basic concepts of IOTs and some points to note when creating them. This article compares an IOT against a heap table in a small test, to show which scenarios an IOT suits.

1 Build a parent table emp:

[oracle@oracle11g arch_new]$ sqlplus 

SQL*Plus: Release 11.2.0.1.0 Production on Thu Nov 15 10:15:19 2012

Copyright (c) 1982, 2009, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning and Automatic Storage Management options

SQL> conn hr/hr
Connected.
SQL> create table emp
  2  as
  3  select object_id empno,
  4  object_name ename,
  5  created hiredate,
  6  owner job
  7  from all_objects
  8  ;

Table created.

SQL> select count(*) from emp;

  COUNT(*)
----------
     55636

SQL> alter table emp add constraint emp_pk primary key(empno);

Table altered.

SQL> exec dbms_stats.gather_table_stats(user,'EMP',cascade=>true);

PL/SQL procedure successfully completed.

SQL> 

 

2 Build a child table, implemented in two ways: as a heap table and as an IOT:

Build the heap child table

SQL> create table heap_addresses
  2  ( empno references emp(empno) on delete cascade,
  3  addr_type varchar2(10),
  4  street varchar2(20),
  5  city varchar2(20),
  6  state varchar2(2),
  7  zip number,
  8  primary key (empno,addr_type)
  9  );

Table created.

SQL> 

Build the IOT child table

SQL> create table iot_addresses
  2  ( empno references emp(empno) on delete cascade,
  3  addr_type varchar2(10),
  4  street varchar2(20),
  5  city varchar2(20),
  6  state varchar2(2),
  7  zip number,
  8  primary key (empno,addr_type)
  9  )
 10  ORGANIZATION INDEX
 11  ;

Table created.

SQL> 

3 Populate both child tables, inserting WORK, HOME, ORIGINAL, and SCHOOL addresses:

Inserts into the heap child table

SQL> insert into heap_addresses
  2  select empno, 'WORK', '123 main street', 'Washington', 'DC', 20123
  3  from emp;

55636 rows created.

SQL> insert into heap_addresses
  2  select empno, 'HOME', '123 main street', 'Washington', 'DC', 20123
  3  from emp;

55636 rows created.

SQL> insert into heap_addresses
  2  select empno, 'ORIGINAL', '123 main street', 'Washington', 'DC', 20123
  3  from emp;

55636 rows created.

SQL> insert into heap_addresses
  2  select empno, 'SCHOOL', '123 main street', 'Washington', 'DC', 20123
  3  from emp;

55636 rows created.

SQL> select count(*) from heap_addresses;

  COUNT(*)
----------
    222544

SQL> 

Inserts into the IOT child table

SQL> insert into iot_addresses
  2  select empno, 'WORK', '123 main street', 'Washington', 'DC', 20123
  3  from emp;

55636 rows created.

SQL> insert into iot_addresses
  2  select empno, 'HOME', '123 main street', 'Washington', 'DC', 20123
  3  from emp;

55636 rows created.

SQL> insert into iot_addresses
  2  select empno, 'ORIGINAL', '123 main street', 'Washington', 'DC', 20123
  3  from emp;

55636 rows created.

SQL> insert into iot_addresses
  2  select empno, 'SCHOOL', '123 main street', 'Washington', 'DC', 20123
  3  from emp;

55636 rows created.

SQL> select count(*) from iot_addresses;

  COUNT(*)
----------
    222544

SQL> 

4 Gather statistics on both child tables

SQL> exec dbms_stats.gather_table_stats(user,'HEAP_ADDRESSES');

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.gather_table_stats(user,'IOT_ADDRESSES');

PL/SQL procedure successfully completed.

SQL> 

5 Compare the SQL execution plans against the two kinds of child table

SQL> set line 200
SQL> set autot trace
SQL> select *
  2  from emp, heap_addresses
  3  where emp.empno = heap_addresses.empno
  4  and emp.empno = 100;


Execution Plan
----------------------------------------------------------
Plan hash value: 541875893

-----------------------------------------------------------------------------------------------
| Id  | Operation                    | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                |     4 |   368 |     8   (0)| 00:00:01 |
|   1 |  NESTED LOOPS                |                |     4 |   368 |     8   (0)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID| EMP            |     1 |    45 |     2   (0)| 00:00:01 |
|*  3 |    INDEX UNIQUE SCAN         | EMP_PK         |     1 |       |     1   (0)| 00:00:01 |
|   4 |   TABLE ACCESS BY INDEX ROWID| HEAP_ADDRESSES |     4 |   188 |     6   (0)| 00:00:01 |
|*  5 |    INDEX RANGE SCAN          | SYS_C00138141  |     4 |       |     2   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("EMP"."EMPNO"=100)
   5 - access("HEAP_ADDRESSES"."EMPNO"=100)


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
         11  consistent gets
          0  physical reads
          0  redo size
       1351  bytes sent via SQL*Net to client
        523  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          4  rows processed

SQL> 
SQL> select *
  2  from emp, iot_addresses
  3  WHERE emp.empno = iot_addresses.empno
  4  and emp.empno = 100;


Execution Plan
----------------------------------------------------------
Plan hash value: 1475200359

---------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                    |     4 |   368 |     4   (0)| 00:00:01 |
|   1 |  NESTED LOOPS                |                    |     4 |   368 |     4   (0)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID| EMP                |     1 |    45 |     2   (0)| 00:00:01 |
|*  3 |    INDEX UNIQUE SCAN         | EMP_PK             |     1 |       |     1   (0)| 00:00:01 |
|*  4 |   INDEX RANGE SCAN           | SYS_IOT_TOP_184800 |     4 |   188 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("EMP"."EMPNO"=100)
   4 - access("IOT_ADDRESSES"."EMPNO"=100)


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
          8  consistent gets
          0  physical reads
          0  redo size
       1351  bytes sent via SQL*Net to client
        523  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          4  rows processed

SQL> 

6 Summary: comparing the execution plans of these two "almost" identical SQL statements shows the following:

        For the first SQL (heap child table), Oracle first accesses the parent table EMP through the primary key index EMP_PK to fetch the parent row, then uses the EMPNO value to probe the child table HEAP_ADDRESSES: it range-scans the child table's index SYS_C00138141 and then fetches the child rows from the table by ROWID. The statement costs 11 consistent gets, with a COST of 8.

        For the second SQL (IOT child table), Oracle likewise accesses the parent table EMP through EMP_PK to fetch the parent row, then accesses the child table IOT_ADDRESSES directly by EMPNO. Because EMPNO is the leading column of IOT_ADDRESSES' primary key and the data lives in the primary-key index itself, the child rows come straight from the index with no extra table access. The statement costs 8 consistent gets, with a COST of 4, half that of the first SQL.

Scenarios where an IOT is a good fit:

① tables that consist entirely of primary-key columns;

② tables that are only ever accessed through their primary key;

③ tables whose data should be stored physically in a particular order.

An IOT is not well suited to queries on non-primary-key columns; if such access is occasionally needed, a secondary index can help, as sketched below.
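As a hedged sketch (the index name is illustrative; secondary indexes on an IOT store logical rowids rather than physical ones, so they behave somewhat differently from indexes on a heap table), a secondary index on the IOT child table built above could look like this:

-- Illustrative only: support lookups on iot_addresses by a non-primary-key column
create index iot_addresses_city_idx on iot_addresses(city);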

Oracle Index-Organized Table Learning Series, Part 1

1 Overview of index-organized tables (IOT, Index-Organized Table)
 

       In an Oracle database there is a class of tables called index-organized tables, or IOTs (Index-Organized Table). As the name suggests, an IOT looks like a table on the surface, but its data is actually stored in the form of an index: an IOT consumes no table segment, only an index segment.

       Compared with a traditional heap-organized table (the common, ordinary table), an IOT stores its data in a B-tree index already sorted by the primary-key columns, whereas a heap table stores its data unordered in the data blocks of a table segment. In addition, the leaf blocks of an IOT's index hold both the primary-key column values and the non-key column values.

        A side-by-side diagram of an IOT versus a heap table is omitted here.

2 Characteristics of index-organized tables (IOT)

 

       Access by the IOT's primary-key columns is fast, because all of the IOT's data lives in the B-Tree index: locating the index entry is enough, with no further hop from index to table segment as with a heap table. DML against an IOT only touches the B-Tree index;

      Range access by the primary-key columns is also fast, because the rows are already stored sorted by the primary key;

      An IOT can reduce storage overhead, because the primary-key values are stored only in the B-Tree index, unlike a heap table where they are stored both in the table segment and in the index;

      Besides supporting, like heap tables, constraints, triggers, LOB columns, object columns, partitioning, parallel operations, online redefinition, replication and so on, an IOT also supports primary-key (prefix) compression, an overflow storage area, and secondary indexes (including bitmap indexes).

 

3 Creating an index-organized table (IOT)

 

   When creating an IOT, the following two clauses are mandatory:

        1 ORGANIZATION INDEX, which marks the table as an IOT;

        2 a primary-key constraint defined in the CREATE TABLE statement, either on a single column or as a composite key.

    When creating an IOT, the following three clauses may also be specified:

        1 the OVERFLOW clause, which indicates that non-key columns are stored in a separate overflow storage area;

        2 PCTTHRESHOLD value: when an overflow segment is specified, this value caps the percentage of an index data block that a row may occupy; if a row exceeds this limit, the remaining columns are stored in the overflow area. In other words, a row of an IOT may be split into two pieces: a head piece and a tail piece. The primary-key columns, plus as many non-key columns as fit within the PCTTHRESHOLD limit, go into the head piece, while the remaining non-key columns go into the tail piece. The index entry then consists of the primary-key columns, the non-key columns that fit within PCTTHRESHOLD, and a pointer to where the remaining non-key columns are stored. PCTTHRESHOLD ranges from 1 to 50 and defaults to 50;

        3 the INCLUDING clause, which explicitly declares which non-key columns are stored in the index data block together with the primary-key columns; the remaining non-key columns then go to the separate overflow storage area.
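     As a hedged sketch (the table name admin_docindex2 and the choice of INCLUDING column are illustrative, not taken from the original case), an IOT definition combining all three optional clauses might look like this:

-- Illustrative only: columns up to and including TOKEN_FREQUENCY are candidates
-- for the index block, subject to the 20% threshold; TOKEN_OFFSETS goes to the
-- overflow segment.
CREATE TABLE admin_docindex2(
  token char(20),
  doc_id NUMBER,
  token_frequency NUMBER,
  token_offsets VARCHAR2(2000),
  CONSTRAINT pk_admin_docindex2 PRIMARY KEY (token, doc_id))
ORGANIZATION INDEX
PCTTHRESHOLD 20
INCLUDING token_frequency
OVERFLOW;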

     An example of creating an IOT:

OS version:

[oracle@oracle11g ~]$ uname -rm
2.6.18-194.el5 x86_64
[oracle@oracle11g ~]$

Oracle version:

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
PL/SQL Release 11.2.0.1.0 - Production
CORE    11.2.0.1.0      Production
TNS for Linux: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production

SQL> 

Create the table:

SQL> conn hr/hr
Connected.
SQL> CREATE TABLE admin_docindex(
  2  token char(20), 
  3  doc_id NUMBER,
  4  token_frequency NUMBER,
  5  token_offsets VARCHAR2(2000),
  6  CONSTRAINT pk_admin_docindex PRIMARY KEY (token, doc_id))
  7  ORGANIZATION INDEX 
  8  TABLESPACE admin_tbs
  9  PCTTHRESHOLD 20
 10  OVERFLOW TABLESPACE admin_tbs2;

Table created.

SQL> insert into ADMIN_DOCINDEX values('oracleonlinux.cn',1,1,'test iot table');

1 row created.

SQL> commit;

Commit complete.

SQL> 

Looking at the table's metadata, we can see that it belongs to no tablespace, because it has no data segment:

SQL> select table_name,tablespace_name,iot_name,iot_type from user_tables where table_name='ADMIN_DOCINDEX';

TABLE_NAME                     TABLESPACE_NAME                IOT_NAME                       IOT_TYPE
------------------------------ ------------------------------ ------------------------------ ------------
ADMIN_DOCINDEX                                                                               IOT

SQL> select segment_name,segment_type,tablespace_name from user_segments where SEGMENT_NAME='ADMIN_DOCINDEX';

no rows selected

SQL> 

Looking at the index's metadata, we can see that it is the index that occupies the tablespace:

SQL> select index_name,index_type,table_name,tablespace_name,table_type ,PCT_THRESHOLD from user_indexes where table_name='ADMIN_DOCINDEX';

INDEX_NAME                     INDEX_TYPE                  TABLE_NAME                     TABLESPACE_NAME                TABLE_TYPE  PCT_THRESHOLD
------------------------------ --------------------------- ------------------------------ ------------------------------ ----------- -------------
PK_ADMIN_DOCINDEX              IOT - TOP                   ADMIN_DOCINDEX                 ADMIN_TBS                      TABLE                  20

SQL> select segment_name,segment_type,tablespace_name from user_segments where SEGMENT_NAME='PK_ADMIN_DOCINDEX';

SEGMENT_NAME                                                                      SEGMENT_TYPE       TABLESPACE_NAME
--------------------------------------------------------------------------------- ------------------ ------------------------------
PK_ADMIN_DOCINDEX                                                                 INDEX              ADMIN_TBS

SQL> 

       Finally, the study of IOTs continues in the next part.