PostgreSQL Primary/Standby Replication: Setup and Switchover Procedure

0 Overview

This document records, step by step, how to build a highly available PostgreSQL primary/standby environment on two Linux servers, how to perform a manual primary/standby switchover, and what to watch out for along the way.

To keep the database transparent to the application layer, a floating IP is used: even after a primary/standby switchover, applications continue to reach the database through the floating IP. In other words, no application-side change is needed before or after a switchover.

The environment is CentOS 7 with PostgreSQL 9.4.10.

The basic approach is as follows:

1 The primary's real IP is 172.16.11.148, the floating IP is 172.16.11.121, and the hostname is pgprimary; PostgreSQL is installed and initialized on it.

2 The standby's real IP is 172.16.11.149, the floating IP is 172.16.11.121, and the hostname is pgstandby; only the PostgreSQL software is installed on it.

3 Normally the floating IP 172.16.11.121 runs on the primary. When the primary fails, a manual switchover is performed: the standby becomes the new primary, and the floating IP is then brought up on the new primary.

4 The original primary is repaired and reconfigured as a standby of the new primary.

1 Environment

The hostnames, file systems, OS levels, and network configuration of the two database servers are as follows:

1 Primary:

[root@pgprimary ~]# hostname
pgprimary
[root@pgprimary ~]# df -Th
文件系统                类型      容量  已用  可用 已用% 挂载点
/dev/mapper/centos-root xfs        47G  1.1G   46G    3% /
devtmpfs                devtmpfs  908M     0  908M    0% /dev
tmpfs                   tmpfs     920M     0  920M    0% /dev/shm
tmpfs                   tmpfs     920M  8.8M  911M    1% /run
tmpfs                   tmpfs     920M     0  920M    0% /sys/fs/cgroup
/dev/sda1               xfs      1014M  142M  873M   14% /boot
tmpfs                   tmpfs     184M     0  184M    0% /run/user/0
[root@pgprimary ~]# cat /etc/redhat-release 
CentOS Linux release 7.5.1804 (Core) 
[root@pgprimary ~]# uname -rm
3.10.0-862.el7.x86_64 x86_64
[root@pgprimary ~]# ifconfig 
ens160: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.16.11.148  netmask 255.255.255.0  broadcast 172.16.11.255
        inet6 fe80::58b2:50cd:9d7c:8b23  prefixlen 64  scopeid 0x20
        ether 00:50:56:80:64:27  txqueuelen 1000  (Ethernet)
        RX packets 21338  bytes 4481980 (4.2 MiB)
        RX errors 0  dropped 2378  overruns 0  frame 0
        TX packets 270  bytes 31662 (30.9 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@pgprimary ~]#

2 Standby:

[root@pgstandby ~]# hostname
pgstandby
[root@pgstandby ~]# df -Th
文件系统                类型      容量  已用  可用 已用% 挂载点
/dev/mapper/centos-root xfs        47G  1.1G   46G    3% /
devtmpfs                devtmpfs  908M     0  908M    0% /dev
tmpfs                   tmpfs     920M     0  920M    0% /dev/shm
tmpfs                   tmpfs     920M  8.8M  911M    1% /run
tmpfs                   tmpfs     920M     0  920M    0% /sys/fs/cgroup
/dev/sda1               xfs      1014M  142M  873M   14% /boot
tmpfs                   tmpfs     184M     0  184M    0% /run/user/0
[root@pgstandby ~]# cat /etc/redhat-release 
CentOS Linux release 7.5.1804 (Core) 
[root@pgstandby ~]# uname -rm
3.10.0-862.el7.x86_64 x86_64
[root@pgstandby ~]# ifconfig 
ens160: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.16.11.149  netmask 255.255.255.0  broadcast 172.16.11.255
        inet6 fe80::486a:f6d4:7976:4c9a  prefixlen 64  scopeid 0x20
        ether 00:50:56:80:5f:53  txqueuelen 1000  (Ethernet)
        RX packets 22927  bytes 4822730 (4.5 MiB)
        RX errors 0  dropped 2560  overruns 0  frame 0
        TX packets 262  bytes 28988 (28.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@pgstandby ~]#

2 Configuring the Floating IP

The monitoring scripts in step 3 below are optional and are not used for the switchover procedure in this document; for switchovers, starting and stopping the floating IP manually on the primary and standby is still recommended.

1 Configure the floating IP on the primary:

a Copy the network configuration file:

[root@pgprimary ~]# cp /etc/sysconfig/network-scripts/ifcfg-ens160 /etc/sysconfig/network-scripts/ifcfg-ens160:1
[root@pgprimary ~]#

b Change DEVICE to ens160:1, add NM_CONTROLLED=no, change IPADDR to the floating IP 172.16.11.121, and remove the gateway entry. The file after editing:

[root@pgprimary ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens160:1
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens160
UUID=be70afb7-a578-4dc0-b694-af530586a76e
DEVICE=ens160:1
ONBOOT=yes
IPADDR=172.16.11.121
NETMASK=255.255.255.0
NM_CONTROLLED=no
[root@pgprimary ~]#

c Bring up the ens160:1 device and verify:

[root@pgprimary ~]# ifup ens160:1
[root@pgprimary ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:80:64:27 brd ff:ff:ff:ff:ff:ff
    inet 172.16.11.148/24 brd 172.16.11.255 scope global noprefixroute ens160
       valid_lft forever preferred_lft forever
    inet 172.16.11.121/24 brd 172.16.11.255 scope global secondary ens160:1
       valid_lft forever preferred_lft forever
    inet6 fe80::58b2:50cd:9d7c:8b23/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
[root@pgprimary ~]# ifconfig 
ens160: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.16.11.148  netmask 255.255.255.0  broadcast 172.16.11.255
        inet6 fe80::58b2:50cd:9d7c:8b23  prefixlen 64  scopeid 0x20
        ether 00:50:56:80:64:27  txqueuelen 1000  (Ethernet)
        RX packets 172902  bytes 59418679 (56.6 MiB)
        RX errors 0  dropped 15626  overruns 0  frame 0
        TX packets 15516  bytes 1443358 (1.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens160:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.16.11.121  netmask 255.255.255.0  broadcast 172.16.11.255
        ether 00:50:56:80:64:27  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@pgprimary ~]#

d Ping the floating IP from a client:

$ ping 172.16.11.121
PING 172.16.11.121 (172.16.11.121): 56 data bytes
64 bytes from 172.16.11.121: icmp_seq=0 ttl=63 time=0.653 ms
64 bytes from 172.16.11.121: icmp_seq=1 ttl=63 time=0.766 ms
^C
--- 172.16.11.121 ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.653/0.710/0.766/0.057 ms

asher at MacBook-Air-3 in ~
$

2 Configure the floating IP on the standby

Follow steps a, b, and c from the primary configuration above.

When testing, first bring down ens160:1 on the primary, then bring it up on the standby.

Bring down ens160:1 on the primary:

[root@pgprimary ~]# ifdown ens160:1
[root@pgprimary ~]#

Bring up ens160:1 on the standby:

[root@pgstandby ~]# ifup ens160:1
[root@pgstandby ~]# ifconfig 
ens160: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.16.11.149  netmask 255.255.255.0  broadcast 172.16.11.255
        inet6 fe80::486a:f6d4:7976:4c9a  prefixlen 64  scopeid 0x20
        ether 00:50:56:80:5f:53  txqueuelen 1000  (Ethernet)
        RX packets 182284  bytes 61612908 (58.7 MiB)
        RX errors 0  dropped 16880  overruns 0  frame 0
        TX packets 15365  bytes 1381833 (1.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens160:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.16.11.121  netmask 255.255.255.0  broadcast 172.16.11.255
        ether 00:50:56:80:5f:53  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@pgstandby ~]#

This completes the floating IP configuration on the primary and the standby.

3 Monitoring and managing the floating IP (optional)

a Monitor and bring up the floating IP on the primary

The monitoring script:

[root@pgprimary ~]# cat monitor_floating_ip.sh 
#!/bin/bash
#This script is used for monitoring the floating IP(172.16.11.121),if it is unreachable,
#then ifup the ens160:1 device on this postgres PRIMARY server.
#Created by 黄伟, whuang@onlyou.com,20181010.

FLOATING_IP=172.16.11.121

c=$(/usr/bin/ping $FLOATING_IP -c 1|grep Unreachable|wc -l)
if [ $c -gt 0 ]
then 
   /usr/sbin/ifup ens160:1
fi
[root@pgprimary ~]#

The crontab entries:

[root@pgprimary ~]# crontab -l
* * * * * /root/monitor_floating_ip.sh>>/root/monitor_floating_ip.log
* * * * * sleep 10; /root/monitor_floating_ip.sh>>/root/monitor_floating_ip.log
* * * * * sleep 20; /root/monitor_floating_ip.sh>>/root/monitor_floating_ip.log
* * * * * sleep 30; /root/monitor_floating_ip.sh>>/root/monitor_floating_ip.log
* * * * * sleep 40; /root/monitor_floating_ip.sh>>/root/monitor_floating_ip.log
* * * * * sleep 50; /root/monitor_floating_ip.sh>>/root/monitor_floating_ip.log
[root@pgprimary ~]#

Note: the script pings the floating IP and, if it is unreachable, brings it up on the primary. The six crontab entries run at 10-second offsets within each minute, so the check effectively runs every 10 seconds.

b Monitor and bring up the floating IP on the standby

The monitoring script:

[root@pgstandby ~]# cat monitor_floating_ip.sh 
#!/bin/bash
#This script is used for monitoring the floating IP(172.16.11.121),if it is unreachable,and also the PRIMARY
#IP(172.16.11.148) unreachable,then ifup the ens160:1 device on this postgres STANDBY server.
#Created by 黄伟, whuang@onlyou.com,20181010.

PRIMARY_IP=172.16.11.148
FLOATING_IP=172.16.11.121
c1=$(/usr/bin/ping $PRIMARY_IP -c 1|grep Unreachable|wc -l)
c2=$(/usr/bin/ping $FLOATING_IP -c 1|grep Unreachable|wc -l)
c3=$(/usr/sbin/ip addr|grep ens160:1)
if [ $c1 -gt 0 -a $c2 -gt 0 ]
then
   /usr/sbin/ifup ens160:1
elif [ $c1 -eq 0 -a $c2 -eq 0 -a "$c3" ]
then
   /usr/sbin/ifdown ens160:1
fi
[root@pgstandby ~]#

The crontab entries:

[root@pgstandby ~]# crontab -l
* * * * * /root/monitor_floating_ip.sh>>/root/monitor_floating_ip.log
* * * * * sleep 10; /root/monitor_floating_ip.sh>>/root/monitor_floating_ip.log
* * * * * sleep 20; /root/monitor_floating_ip.sh>>/root/monitor_floating_ip.log
* * * * * sleep 30; /root/monitor_floating_ip.sh>>/root/monitor_floating_ip.log
* * * * * sleep 40; /root/monitor_floating_ip.sh>>/root/monitor_floating_ip.log
* * * * * sleep 50; /root/monitor_floating_ip.sh>>/root/monitor_floating_ip.log
[root@pgstandby ~]#

Note: the script pings the floating IP and the primary's real IP. If both are unreachable, the primary server is down or the network as a whole has failed, so the floating IP is brought up on the standby. If both the primary IP and the floating IP respond while the floating IP is also active on the standby, the primary (or the network) has recovered, so the floating IP is brought down on the standby and handed back to the primary. As on the primary, the six crontab entries run the check every 10 seconds.
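One weakness of the scripts above is that `grep Unreachable` only matches the "Destination Host Unreachable" case; when a ping simply times out (100% packet loss), ping prints no "Unreachable" line and the failure goes undetected. A more robust variant keys off ping's exit status. Below is a sketch of that approach with the decision logic factored into a standalone function; the function names and the 1-second timeout are assumptions, not part of the original scripts:

```shell
#!/bin/sh
# Sketch: standby-side floating-IP logic driven by ping's exit status
# rather than grepping for "Unreachable" (which misses plain timeouts).

# host_up IP -> exit 0 if a single ping succeeds within 1 second
host_up() {
    /usr/bin/ping -c 1 -W 1 "$1" >/dev/null 2>&1
}

# decide PRIMARY FLOATING HAVE_VIP -> prints the action to take.
#   PRIMARY/FLOATING are "up" or "down"; HAVE_VIP is "yes" if ens160:1
#   is currently configured on this host.
decide() {
    primary=$1; floating=$2; have_vip=$3
    if [ "$primary" = down ] && [ "$floating" = down ]; then
        echo takeover    # primary gone: run ifup ens160:1 here
    elif [ "$primary" = up ] && [ "$floating" = up ] && [ "$have_vip" = yes ]; then
        echo release     # primary is back: run ifdown ens160:1 here
    else
        echo none
    fi
}
```

The cron job would then translate each probe into an argument, e.g. `decide "$(host_up 172.16.11.148 && echo up || echo down)" ...`, and run ifup/ifdown according to the printed action.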

3 Installing and Configuring the Database on the Primary

1 Create the user and directories

[root@pgprimary ~]# groupadd postgres
[root@pgprimary ~]# useradd -g postgres postgres
[root@pgprimary ~]# passwd postgres
更改用户 postgres 的密码 。
新的 密码:
无效的密码: 密码包含用户名在某些地方
重新输入新的 密码:
passwd:所有的身份验证令牌已经成功更新。
[root@pgprimary ~]# mkdir -p /postgres/9.4.10
[root@pgprimary ~]# chown -R postgres:postgres /postgres/
[root@pgprimary ~]#

2 Install the required OS packages: gcc, zlib, readline, tar

[root@pgprimary ~]# yum install gcc*
(output omitted)
[root@pgprimary ~]# yum install -y zlib*
(output omitted)
[root@pgprimary ~]# yum install -y readline*
(output omitted)
[root@pgprimary ~]# yum install -y tar*
(output omitted)
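Wildcard installs such as `yum install zlib*` pull in far more packages than the build needs. For a default `./configure`, the likely-sufficient minimal set on CentOS 7 (package names assumed, not taken from the original transcript) is:

```shell
# Minimal build dependencies (assumed sufficient for a default configure)
yum install -y gcc make zlib-devel readline-devel tar
```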

3 Download the source, as the postgres user:

[postgres@pgprimary ~]$ wget https://ftp.postgresql.org/pub/source/v9.4.10/postgresql-9.4.10.tar.gz
--2018-09-30 17:19:51--  https://ftp.postgresql.org/pub/source/v9.4.10/postgresql-9.4.10.tar.gz
正在解析主机 ftp.postgresql.org (ftp.postgresql.org)... 204.145.124.244, 217.196.149.55, 174.143.35.246, ...
正在连接 ftp.postgresql.org (ftp.postgresql.org)|204.145.124.244|:443... 已连接。
已发出 HTTP 请求,正在等待回应... 200 OK
长度:23333040 (22M) [application/x-gzip]
正在保存至: “postgresql-9.4.10.tar.gz”

100%[====================================================================================================================================================================================>] 23,333,040  1.32MB/s 用时 20s    

2018-09-30 17:20:14 (1.10 MB/s) - 已保存 “postgresql-9.4.10.tar.gz” [23333040/23333040])

[postgres@pgprimary ~]$ ll
总用量 22788
-rw-rw-r--. 1 postgres postgres 23333040 10月 25 2016 postgresql-9.4.10.tar.gz
[postgres@pgprimary ~]$

4 Unpack the source and run configure, as the postgres user:

[postgres@pgprimary ~]$ tar -zxvf postgresql-9.4.10.tar.gz 
...
postgresql-9.4.10/configure.in
postgresql-9.4.10/INSTALL
[postgres@pgprimary ~]$ cd postgresql-9.4.10 && ll
总用量 620
-rw-r--r--.  1 postgres postgres    385 10月 25 2016 aclocal.m4
drwxrwxr-x.  2 postgres postgres   4096 10月 25 2016 config
-rwxr-xr-x.  1 postgres postgres 448579 10月 25 2016 configure
-rw-r--r--.  1 postgres postgres  69119 10月 25 2016 configure.in
drwxrwxr-x. 59 postgres postgres   4096 10月 25 2016 contrib
-rw-r--r--.  1 postgres postgres   1192 10月 25 2016 COPYRIGHT
drwxrwxr-x.  3 postgres postgres    107 10月 25 2016 doc
-rw-r--r--.  1 postgres postgres   3620 10月 25 2016 GNUmakefile.in
-rw-r--r--.  1 postgres postgres    283 10月 25 2016 HISTORY
-rw-r--r--.  1 postgres postgres  75420 10月 25 2016 INSTALL
-rw-r--r--.  1 postgres postgres   1489 10月 25 2016 Makefile
-rw-r--r--.  1 postgres postgres   1209 10月 25 2016 README
drwxrwxr-x. 15 postgres postgres   4096 10月 25 2016 src
[postgres@pgprimary postgresql-9.4.10]$ ./configure --prefix=/postgres/9.4.10/
...
config.status: linking src/makefiles/Makefile.linux to src/Makefile.port
[postgres@pgprimary postgresql-9.4.10]$

5 Run gmake world, as the postgres user:

[postgres@pgprimary postgresql-9.4.10]$ gmake world
..
gmake[1]: 离开目录“/home/postgres/postgresql-9.4.10/contrib”
PostgreSQL, contrib, and documentation successfully made. Ready to install.
[postgres@pgprimary postgresql-9.4.10]$

6 Run gmake install-world, as the postgres user:

[postgres@pgprimary postgresql-9.4.10]$ gmake install-world
...
PostgreSQL, contrib, and documentation installation complete.
[postgres@pgprimary postgresql-9.4.10]$

7 Initialize the database, as the postgres user:

[postgres@pgprimary ~]$ /postgres/9.4.10/bin/initdb -D /postgres/9.4.10/data
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale "zh_CN.UTF-8".
The default database encoding has accordingly been set to "UTF8".
initdb: could not find suitable text search configuration for locale "zh_CN.UTF-8"
The default text search configuration will be set to "simple".

Data page checksums are disabled.

creating directory /postgres/9.4.10/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
creating template1 database in /postgres/9.4.10/data/base/1 ... ok
initializing pg_authid ... ok
initializing dependencies ... ok
creating system views ... ok
loading system objects' descriptions ... ok
creating collations ... ok
creating conversions ... ok
creating dictionaries ... ok
setting privileges on built-in objects ... ok
creating information schema ... ok
loading PL/pgSQL server-side language ... ok
vacuuming database template1 ... ok
copying template1 to template0 ... ok
copying template1 to postgres ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    /postgres/9.4.10/bin/postgres -D /postgres/9.4.10/data
or
    /postgres/9.4.10/bin/pg_ctl -D /postgres/9.4.10/data -l logfile start

[postgres@pgprimary ~]$

8 Edit the postgres user's environment variables

Append /postgres/9.4.10/bin to PATH and set PGDATA in ~/.bash_profile. The file after editing:
[postgres@pgprimary ~]$ cat ~/.bash_profile 
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/.local/bin:$HOME/bin:/postgres/9.4.10/bin
PGDATA=/postgres/9.4.10/data
export PATH
export PGDATA
[postgres@pgprimary ~]$


9 Edit database parameters

In /postgres/9.4.10/data/postgresql.conf, set:

listen_addresses = '*'
port = 5432

Leave the other parameters unchanged for now.

In /postgres/9.4.10/data/pg_hba.conf, add the following line:

host    all             all             0.0.0.0/0               md5

Leave the rest unchanged.

10 Start the database

[postgres@pgprimary ~]$ pg_ctl start
server starting
[postgres@pgprimary ~]$ LOG:  database system was shut down at 2018-10-10 09:32:14 CST
LOG:  MultiXact member wraparound protections are now enabled
LOG:  database system is ready to accept connections
LOG:  autovacuum launcher started

[postgres@pgprimary ~]$
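Now that the server is running, later changes to pg_hba.conf (and to reloadable postgresql.conf settings) do not require a full restart; they can be applied with a reload. Two equivalent ways, shown here as an aside and run as the postgres user:

```shell
# Re-read pg_hba.conf and reloadable settings without a restart
pg_ctl reload
# equivalently, from any superuser session:
psql -c "SELECT pg_reload_conf();"
```

Note that listen_addresses and port themselves only change on a restart.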

11 Test the database

a Bring down the floating IP on the standby

[root@pgstandby ~]# ifdown ens160:1
[root@pgstandby ~]# ifconfig 
ens160: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.16.11.149  netmask 255.255.255.0  broadcast 172.16.11.255
        inet6 fe80::486a:f6d4:7976:4c9a  prefixlen 64  scopeid 0x20
        ether 00:50:56:80:5f:53  txqueuelen 1000  (Ethernet)
        RX packets 14738337  bytes 3468772296 (3.2 GiB)
        RX errors 0  dropped 1833331  overruns 0  frame 0
        TX packets 149019  bytes 13182295 (12.5 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 1  bytes 104 (104.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1  bytes 104 (104.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@pgstandby ~]#

b Bring up the floating IP on the primary

[root@pgprimary ~]# ifup  ens160:1
[root@pgprimary ~]# ifconfig 
ens160: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.16.11.148  netmask 255.255.255.0  broadcast 172.16.11.255
        inet6 fe80::58b2:50cd:9d7c:8b23  prefixlen 64  scopeid 0x20
        ether 00:50:56:80:64:27  txqueuelen 1000  (Ethernet)
        RX packets 14551224  bytes 3431186752 (3.1 GiB)
        RX errors 0  dropped 1817255  overruns 0  frame 0
        TX packets 108201  bytes 9678459 (9.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens160:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.16.11.121  netmask 255.255.255.0  broadcast 172.16.11.255
        ether 00:50:56:80:64:27  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 12  bytes 1089 (1.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 12  bytes 1089 (1.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@pgprimary ~]#

c Connect to the primary via its real IP

MacBook-Air-3:~ postgres$ psql -h 172.16.11.148 -p 5432 -d postgres -U postgres
Password for user postgres: 
psql (9.5.2, server 9.4.10)
Type "help" for help.

postgres=# \c
psql (9.5.2, server 9.4.10)
You are now connected to database "postgres" as user "postgres".
postgres=#

d Connect to the primary via the floating IP

MacBook-Air-3:~ postgres$ psql -h 172.16.11.121 -p 5432 -d postgres -U postgres
Password for user postgres: 
psql (9.5.2, server 9.4.10)
Type "help" for help.

postgres=# \c
psql (9.5.2, server 9.4.10)
You are now connected to database "postgres" as user "postgres".
postgres=#

As shown above, once the floating IP is up on the primary, clients can reach the primary through either the real IP or the floating IP.

4 Installing the Database Software on the Standby

Follow steps 1-6 and step 8 of section 3. No further configuration is needed. Note that the standby only needs the database software installed; the database must NOT be initialized on it.

5 Configuring Primary/Standby Replication

1 Create a replication user on the primary

[root@pgprimary ~]# su - postgres
上一次登录:三 10月 10 10:08:01 CST 2018pts/0 上
[postgres@pgprimary ~]$ psql
psql (9.4.10)
Type "help" for help.

postgres=# CREATE ROLE replica login replication encrypted password 'replica';
CREATE ROLE
postgres=# \q
[postgres@pgprimary ~]$

2 Edit pg_hba.conf and postgresql.conf on the primary

In /postgres/9.4.10/data/pg_hba.conf, add the following line:

host    replication     replica     172.16.11.149/32           md5

In /postgres/9.4.10/data/postgresql.conf, change the following lines:

wal_level = hot_standby  # WAL detail level required to feed a hot standby
max_wal_senders = 4 # max concurrent streaming replication connections; allow at least one per standby
wal_keep_segments = 256 # number of WAL segments kept so standbys can catch up
wal_sender_timeout = 60s # timeout for the WAL sender connection
max_connections = 100 # size to your workload; the standby's max_connections must be at least as large as the primary's
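wal_level, max_wal_senders, and max_connections only take effect after the restart in the next step. Once the server is back up, the running values can be double-checked from psql (an added sanity check, not part of the original procedure):

```shell
psql -c "SHOW wal_level;"         # expect hot_standby
psql -c "SHOW max_wal_senders;"   # expect 4
psql -c "SHOW wal_keep_segments;" # expect 256
```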

3 Restart the primary

[postgres@pgprimary ~]$ pg_ctl restart -m fast
waiting for server to shut down...LOG:  received fast shutdown request
.LOG:  aborting any active transactions
LOG:  autovacuum launcher shutting down
FATAL:  terminating connection due to administrator command
LOG:  shutting down
LOG:  database system is shut down
 done
server stopped
server starting
[postgres@pgprimary ~]$ LOG:  database system was shut down at 2018-10-10 16:35:19 CST
LOG:  MultiXact member wraparound protections are now enabled
LOG:  database system is ready to accept connections
LOG:  autovacuum launcher started

[postgres@pgprimary ~]$

4 On the standby, take a base backup of the primary

[root@pgstandby ~]# su - postgres
上一次登录:三 10月 10 09:43:15 CST 2018pts/0 上
[postgres@pgstandby ~]$ ll /postgres/9.4.10/
总用量 16
drwxrwxr-x. 2 postgres postgres 4096 9月  30 17:30 bin
drwxrwxr-x. 6 postgres postgres 4096 9月  30 17:30 include
drwxrwxr-x. 4 postgres postgres 4096 9月  30 17:30 lib
drwxrwxr-x. 8 postgres postgres 4096 9月  30 17:30 share
[postgres@pgstandby ~]$ which pg_basebackup 
/postgres/9.4.10/bin/pg_basebackup
[postgres@pgstandby ~]$ pg_basebackup -F p --progress -D /postgres/9.4.10/data -h 172.16.11.121 -p 5432 -U replica --password -X stream
Password: 
21122/21122 kB (100%), 1/1 tablespace
[postgres@pgstandby ~]$ ll /postgres/9.4.10/
总用量 20
drwxrwxr-x.  2 postgres postgres 4096 9月  30 17:30 bin
drwx------. 18 postgres postgres 4096 10月 10 16:38 data
drwxrwxr-x.  6 postgres postgres 4096 9月  30 17:30 include
drwxrwxr-x.  4 postgres postgres 4096 9月  30 17:30 lib
drwxrwxr-x.  8 postgres postgres 4096 9月  30 17:30 share
[postgres@pgstandby ~]$ du -sh /postgres/9.4.10/data/
37M     /postgres/9.4.10/data/
[postgres@pgstandby ~]$

As shown above, /postgres/9.4.10/data/ has been created on the standby.

5 On the standby, create the recovery.conf file

Copy the sample file from the source tree, then edit it, changing the three parameters below and leaving the rest unchanged.

[postgres@pgstandby ~]$ cp /home/postgres/postgresql-9.4.10/src/backend/access/transam/recovery.conf.sample /postgres/9.4.10/data/recovery.conf
[postgres@pgstandby ~]$ vi /postgres/9.4.10/data/recovery.conf 
...
standby_mode = on
primary_conninfo = 'host=172.16.11.148 port=5432 user=replica password=replica'         # e.g. 'host=localhost port=5432'
recovery_target_timeline = 'latest'
...
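Putting the replica password in recovery.conf keeps it in plain text inside the data directory. An alternative (not used in this setup, shown only as an option) is to omit `password=` from primary_conninfo and put the credentials in the postgres user's ~/.pgpass file, which libpq reads automatically; the special database name `replication` matches streaming replication connections:

```
# ~/.pgpass on the standby -- must be chmod 0600
# format: hostname:port:database:username:password
172.16.11.148:5432:replication:replica:replica
```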

6 On the standby, edit postgresql.conf

Change these four parameters:

[postgres@pgstandby ~]$ vi /postgres/9.4.10/data/postgresql.conf 
...
hot_standby = on  # allow read-only queries on the standby while it replays WAL
max_standby_streaming_delay = 30s # how long WAL replay may wait for conflicting standby queries
wal_receiver_status_interval = 1s  # maximum interval between status reports from the standby to the primary
hot_standby_feedback = on # report the standby's oldest snapshot to the primary so vacuum does not remove rows it still needs
...

7 Start the standby

[postgres@pgstandby ~]$ pg_ctl status
pg_ctl: no server running
[postgres@pgstandby ~]$ pg_ctl start
server starting
[postgres@pgstandby ~]$ LOG:  database system was interrupted; last known up at 2018-10-10 16:38:47 CST
LOG:  entering standby mode
LOG:  redo starts at 0/2000028
LOG:  consistent recovery state reached at 0/2000128
LOG:  database system is ready to accept read only connections
LOG:  started streaming WAL from primary at 0/3000000 on timeline 1

[postgres@pgstandby ~]$ pg_ctl status
pg_ctl: server is running (PID: 28027)
/postgres/9.4.10/bin/postgres
[postgres@pgstandby ~]$ ps -ef|grep postgres
root     27400 25585  0 16:36 pts/0    00:00:00 su - postgres
postgres 27401 27400  0 16:36 pts/0    00:00:00 -bash
postgres 28027     1  0 16:48 pts/0    00:00:00 /postgres/9.4.10/bin/postgres
postgres 28028 28027  0 16:48 ?        00:00:00 postgres: startup process   recovering 000000010000000000000003
postgres 28029 28027  0 16:48 ?        00:00:00 postgres: checkpointer process  
postgres 28030 28027  0 16:48 ?        00:00:00 postgres: writer process   
postgres 28031 28027  0 16:48 ?        00:00:00 postgres: stats collector process  
postgres 28032 28027  0 16:48 ?        00:00:00 postgres: wal receiver process   streaming 0/3000210
postgres 28073 27401  0 16:49 pts/0    00:00:00 ps -ef
postgres 28074 27401  0 16:49 pts/0    00:00:00 grep --color=auto postgres
[postgres@pgstandby ~]$

As shown above, the standby's WAL receiver process is streaming log data from the primary.

8 Verify replication

On the primary:

[postgres@pgprimary ~]$ ps -ef|grep postgres
root     20280 17302  0 16:30 pts/0    00:00:00 su - postgres
postgres 20281 20280  0 16:30 pts/0    00:00:00 -bash
postgres 20677     1  0 16:35 pts/0    00:00:00 /postgres/9.4.10/bin/postgres
postgres 20679 20677  0 16:35 ?        00:00:00 postgres: checkpointer process  
postgres 20680 20677  0 16:35 ?        00:00:00 postgres: writer process   
postgres 20681 20677  0 16:35 ?        00:00:00 postgres: wal writer process  
postgres 20682 20677  0 16:35 ?        00:00:00 postgres: autovacuum launcher process  
postgres 20683 20677  0 16:35 ?        00:00:00 postgres: stats collector process  
postgres 21656 20677  0 16:48 ?        00:00:00 postgres: wal sender process replica 172.16.11.149(51868) streaming 0/3000210
postgres 21939 20281  0 16:52 pts/0    00:00:00 ps -ef
postgres 21940 20281  0 16:52 pts/0    00:00:00 grep --color=auto postgres
[postgres@pgprimary ~]$ psql
psql (9.4.10)
Type "help" for help.

postgres=# \x
Expanded display is on.
postgres=# select * from pg_stat_replication;
-[ RECORD 1 ]----+------------------------------
pid              | 21656
usesysid         | 16385
usename          | replica
application_name | walreceiver
client_addr      | 172.16.11.149
client_hostname  | 
client_port      | 51868
backend_start    | 2018-10-10 16:48:44.010195+08
backend_xmin     | 1893
state            | streaming
sent_location    | 0/3000210
write_location   | 0/3000210
flush_location   | 0/3000210
replay_location  | 0/3000210
sync_priority    | 0
sync_state       | async

postgres=#

As shown above, the primary has a WAL sender process streaming to the standby, and pg_stat_replication on the primary reports the replication state.
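The standby side can be verified as well; the functions below exist in 9.4 (they were renamed to pg_last_wal_* in PostgreSQL 10). Running them is an added check, not part of the original transcript:

```shell
# On the standby: confirm it is in recovery and streaming WAL
psql -c "SELECT pg_is_in_recovery();"   # expect t on a standby
psql -c "SELECT pg_last_xlog_receive_location(), pg_last_xlog_replay_location();"
```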


6 Primary/Standby Switchover

Here we simulate a primary failure and run a switchover test: first swap the primary and standby roles.

Then, rebuild the original primary, or use it directly as the new primary's standby.

Finally, perform a second switchover to swap the roles back, restoring the initial state: the original primary is the primary again, and the original standby is the standby.

1 Perform the first switchover

When the original primary server or database fails, stop the primary first, then promote the standby.

a Stop the primary

[postgres@pgprimary data]$ pg_ctl stop -m fast -l /home/postgres/pg.log 
waiting for server to shut down.... done
server stopped
[postgres@pgprimary data]$

b Promote the standby

[postgres@pgstandby data]$ pwd
/postgres/9.4.10/data
[postgres@pgstandby data]$ ll 
总用量 72
-rw-------. 1 postgres postgres   206 10月 10 16:38 backup_label.old
drwx------. 5 postgres postgres    41 10月 10 16:38 base
drwx------. 2 postgres postgres  4096 10月 26 15:57 global
drwx------. 2 postgres postgres    18 10月 10 16:38 pg_clog
drwx------. 2 postgres postgres     6 10月 10 16:38 pg_dynshmem
-rw-------. 1 postgres postgres  4605 10月 16 16:35 pg_hba.conf
-rw-------. 1 postgres postgres  1636 10月 10 16:38 pg_ident.conf
drwx------. 4 postgres postgres    39 10月 10 16:38 pg_logical
drwx------. 4 postgres postgres    36 10月 10 16:38 pg_multixact
drwx------. 2 postgres postgres    18 10月 26 15:57 pg_notify
drwx------. 2 postgres postgres     6 10月 10 16:38 pg_replslot
drwx------. 2 postgres postgres     6 10月 10 16:38 pg_serial
drwx------. 2 postgres postgres     6 10月 10 16:38 pg_snapshots
drwx------. 2 postgres postgres     6 10月 26 15:57 pg_stat
drwx------. 2 postgres postgres     6 10月 26 09:29 pg_stat_tmp
drwx------. 2 postgres postgres    18 10月 10 16:38 pg_subtrans
drwx------. 2 postgres postgres     6 10月 10 16:38 pg_tblspc
drwx------. 2 postgres postgres     6 10月 10 16:38 pg_twophase
-rw-------. 1 postgres postgres     4 10月 10 16:38 PG_VERSION
drwx------. 3 postgres postgres  4096 10月 26 15:57 pg_xlog
-rw-------. 1 postgres postgres    88 10月 10 16:38 postgresql.auto.conf
-rw-------. 1 postgres postgres 20810 10月 10 16:45 postgresql.conf
-rw-------. 1 postgres postgres    30 10月 26 15:57 postmaster.opts
-rw-------. 1 postgres postgres    71 10月 26 15:57 postmaster.pid
-rw-r--r--. 1 postgres postgres  5641 10月 16 16:54 recovery.conf
[postgres@pgstandby data]$ pg_ctl promote -l /home/postgres/pg.log 
server promoting
[postgres@pgstandby data]$ ll
总用量 72
-rw-------. 1 postgres postgres   206 10月 10 16:38 backup_label.old
drwx------. 5 postgres postgres    41 10月 10 16:38 base
drwx------. 2 postgres postgres  4096 10月 26 15:57 global
drwx------. 2 postgres postgres    18 10月 10 16:38 pg_clog
drwx------. 2 postgres postgres     6 10月 10 16:38 pg_dynshmem
-rw-------. 1 postgres postgres  4605 10月 16 16:35 pg_hba.conf
-rw-------. 1 postgres postgres  1636 10月 10 16:38 pg_ident.conf
drwx------. 4 postgres postgres    39 10月 10 16:38 pg_logical
drwx------. 4 postgres postgres    36 10月 10 16:38 pg_multixact
drwx------. 2 postgres postgres    18 10月 26 15:57 pg_notify
drwx------. 2 postgres postgres     6 10月 10 16:38 pg_replslot
drwx------. 2 postgres postgres     6 10月 10 16:38 pg_serial
drwx------. 2 postgres postgres     6 10月 10 16:38 pg_snapshots
drwx------. 2 postgres postgres     6 10月 26 15:57 pg_stat
drwx------. 2 postgres postgres    42 10月 26 16:10 pg_stat_tmp
drwx------. 2 postgres postgres    18 10月 10 16:38 pg_subtrans
drwx------. 2 postgres postgres     6 10月 10 16:38 pg_tblspc
drwx------. 2 postgres postgres     6 10月 10 16:38 pg_twophase
-rw-------. 1 postgres postgres     4 10月 10 16:38 PG_VERSION
drwx------. 3 postgres postgres  4096 10月 26 16:10 pg_xlog
-rw-------. 1 postgres postgres    88 10月 10 16:38 postgresql.auto.conf
-rw-------. 1 postgres postgres 20810 10月 10 16:45 postgresql.conf
-rw-------. 1 postgres postgres    30 10月 26 15:57 postmaster.opts
-rw-------. 1 postgres postgres    71 10月 26 15:57 postmaster.pid
-rw-r--r--. 1 postgres postgres  5641 10月 16 16:54 recovery.done
[postgres@pgstandby data]$

Note that the command that brings the standby into service is pg_ctl promote: until now the standby was in recovery, receiving WAL from the primary, and this command promotes it to primary status. After the promotion, /postgres/9.4.10/data/recovery.conf is automatically renamed to /postgres/9.4.10/data/recovery.done. This ensures that the next restart brings the instance up directly as a primary; if recovery.conf were still present, the server would enter recovery again at startup.
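A quick way to confirm that the promotion took effect (an added check, not in the original transcript):

```shell
# After pg_ctl promote, the former standby must leave recovery mode:
psql -c "SELECT pg_is_in_recovery();"   # expect f once promoted
# and recovery.conf should have been renamed:
ls /postgres/9.4.10/data/recovery.done
```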

c Bring up the floating IP on the standby

First, bring down the floating IP on the old primary:

[root@pgprimary ~]# ifconfig 
ens160: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.16.11.148  netmask 255.255.255.0  broadcast 172.16.11.255
        inet6 fe80::58b2:50cd:9d7c:8b23  prefixlen 64  scopeid 0x20
        ether 00:50:56:80:64:27  txqueuelen 1000  (Ethernet)
        RX packets 37310308  bytes 8976173370 (8.3 GiB)
        RX errors 0  dropped 4868601  overruns 0  frame 0
        TX packets 1426780  bytes 127364686 (121.4 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens160:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.16.11.121  netmask 255.255.255.0  broadcast 172.16.11.255
        ether 00:50:56:80:64:27  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 134634  bytes 25999628 (24.7 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 134634  bytes 25999628 (24.7 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

您在 /var/spool/mail/root 中有邮件
[root@pgprimary ~]# ifdown ens160:1
[root@pgprimary ~]# ifconfig 
ens160: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.16.11.148  netmask 255.255.255.0  broadcast 172.16.11.255
        inet6 fe80::58b2:50cd:9d7c:8b23  prefixlen 64  scopeid 0x20
        ether 00:50:56:80:64:27  txqueuelen 1000  (Ethernet)
        RX packets 37310444  bytes 8976218410 (8.3 GiB)
        RX errors 0  dropped 4868628  overruns 0  frame 0
        TX packets 1426794  bytes 127368186 (121.4 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 134634  bytes 25999628 (24.7 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 134634  bytes 25999628 (24.7 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@pgprimary ~]#

Then, bring up the floating IP on the standby:

 [root@pgstandby ~]# ifup ens160:1
您在 /var/spool/mail/root 中有邮件
[root@pgstandby ~]# ifconfig 
ens160: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.16.11.149  netmask 255.255.255.0  broadcast 172.16.11.255
        inet6 fe80::486a:f6d4:7976:4c9a  prefixlen 64  scopeid 0x20
        ether 00:50:56:80:5f:53  txqueuelen 1000  (Ethernet)
        RX packets 37445369  bytes 8975382297 (8.3 GiB)
        RX errors 0  dropped 4884823  overruns 0  frame 0
        TX packets 1509890  bytes 162194380 (154.6 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens160:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.16.11.121  netmask 255.255.255.0  broadcast 172.16.11.255
        ether 00:50:56:80:5f:53  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 195774  bytes 35184330 (33.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 195774  bytes 35184330 (33.5 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@pgstandby ~]#
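
The floating-IP move above can be condensed into a short sketch, on the assumption that the alias interface ens160:1 (ifcfg-ens160:1) is configured with 172.16.11.121 on both hosts:

# On the old primary (if it is still reachable): release the floating IP
[root@pgprimary ~]# ifdown ens160:1

# On the new primary: claim the floating IP
[root@pgstandby ~]# ifup ens160:1

# Sanity check from any client: the floating IP should now answer from pgstandby
$ ping -c 3 172.16.11.121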

d Test that the new primary is usable

MacBook-Air-3:~ postgres$ psql -h 172.16.11.121 -p 5432 -d postgres -U postgres
Password for user postgres: 
psql (9.5.2, server 9.4.10)
Type "help" for help.

postgres=# select * from t;
 id 
----
  1
(1 row)

postgres=# delete from t;
DELETE 1
postgres=# select * from t;
 id 
----
(0 rows)

postgres=# 

The client accesses the database through the floating IP and deletes the rows in table t, which shows the new primary is working normally. Also note that this row was deleted on the new primary; once we rebuild the new standby, we will check whether the deleted row has disappeared from the standby as well.

2 Rebuilding the original primary

Here, to quickly verify and test the switchover procedure, we merely assume the primary has failed, so the original primary does not actually need to be rebuilt. In a real failure-triggered switchover it would have to be; the rebuild follows the same steps as the initial standby setup.

Of course, the original primary must now be configured as the new standby, following steps 5, 6 and 7 of Section 5, "Configuring primary-standby synchronization," from the standby setup procedure.

a Create the recovery.conf file on the new standby

[postgres@pgprimary ~]$ cp /home/postgres/postgresql-9.4.10/src/backend/access/transam/recovery.conf.sample /postgres/9.4.10/data/recovery.conf
[postgres@pgprimary ~]$ vi /postgres/9.4.10/data/recovery.conf 
...
standby_mode = on
primary_conninfo = 'host=172.16.11.149 port=5432 user=replica password=replica'         # e.g. 'host=localhost port=5432'
recovery_target_timeline = 'latest'
...

Note that the primary connection info here must point to the new primary's IP, i.e. the original standby's IP, 172.16.11.149.

b Modify postgresql.conf on the new standby

Only the following four parameters need to be changed:

[postgres@pgprimary ~]$ vi /postgres/9.4.10/data/postgresql.conf 
...
hot_standby = on  # this server serves read-only queries, not just WAL recovery
max_standby_streaming_delay = 30s # how long WAL apply may wait before canceling standby queries that conflict with it
wal_receiver_status_interval = 1s  # maximum interval between standby status reports to the primary (status is also sent with each replication message)
hot_standby_feedback = on # report standby query activity to the primary to avoid replication conflicts
...

c Start the new standby

[postgres@pgprimary data]$ pg_ctl start -l /home/postgres/pg.log 
server starting
[postgres@pgprimary data]$ ps -ef|grep postgres
root     13481 13458  0 09:22 pts/1    00:00:00 su - postgres
postgres 13482 13481  0 09:22 pts/1    00:00:00 -bash
postgres 14425     1  1 16:55 pts/1    00:00:00 /postgres/9.4.10/bin/postgres
postgres 14426 14425  0 16:55 ?        00:00:00 postgres: startup process   recovering 000000060000000000000005
postgres 14427 14425  0 16:55 ?        00:00:00 postgres: checkpointer process  
postgres 14428 14425  0 16:55 ?        00:00:00 postgres: writer process   
postgres 14429 14425  0 16:55 ?        00:00:00 postgres: stats collector process  
postgres 14430 14425  1 16:55 ?        00:00:00 postgres: wal receiver process   streaming 0/508DDD0
postgres 14431 13482  0 16:55 pts/1    00:00:00 ps -ef
postgres 14432 13482  0 16:55 pts/1    00:00:00 grep --color=auto postgres
[postgres@pgprimary data]$

After startup, a wal receiver process appears on the new standby.

Meanwhile, log in to the new standby and check whether the data in table t has synchronized:

[postgres@pgprimary data]$ psql
psql (9.4.10)
Type "help" for help.

postgres=# select * from t;
 id 
----
(0 rows)

postgres=#

Clearly, the row deleted earlier on the new primary is gone from this new standby as well, confirming the new standby is working.

3 Performing the switchover again

a Stop the new primary (the original standby)

 [postgres@pgstandby data]$ ps -ef|grep postgres
postgres 16230     1  0 15:57 pts/1    00:00:00 /postgres/9.4.10/bin/postgres
postgres 16232 16230  0 15:57 ?        00:00:00 postgres: checkpointer process  
postgres 16233 16230  0 15:57 ?        00:00:00 postgres: writer process   
postgres 16234 16230  0 15:57 ?        00:00:00 postgres: stats collector process  
postgres 16998 16230  0 16:10 ?        00:00:00 postgres: wal writer process  
postgres 16999 16230  0 16:10 ?        00:00:00 postgres: autovacuum launcher process  
postgres 17546 16230  0 16:20 ?        00:00:00 postgres: postgres postgres 172.16.135.72(54471) idle
postgres 19262 16230  0 16:55 ?        00:00:00 postgres: wal sender process replica 172.16.11.148(53298) streaming 0/508DEA8
postgres 19520 29607  0 17:00 pts/1    00:00:00 ps -ef
postgres 19521 29607  0 17:00 pts/1    00:00:00 grep --color=auto postgres
root     29606 29605  0 09:23 pts/1    00:00:00 su - postgres
postgres 29607 29606  0 09:23 pts/1    00:00:00 -bash
[postgres@pgstandby data]$ pg_ctl stop -m fast -l /home/postgres/pg.log 
waiting for server to shut down.... done
server stopped
[postgres@pgstandby data]$ ll
总用量 68
-rw-------. 1 postgres postgres   206 10月 10 16:38 backup_label.old
drwx------. 5 postgres postgres    41 10月 10 16:38 base
drwx------. 2 postgres postgres  4096 10月 26 15:57 global
drwx------. 2 postgres postgres    18 10月 10 16:38 pg_clog
drwx------. 2 postgres postgres     6 10月 10 16:38 pg_dynshmem
-rw-------. 1 postgres postgres  4605 10月 16 16:35 pg_hba.conf
-rw-------. 1 postgres postgres  1636 10月 10 16:38 pg_ident.conf
drwx------. 4 postgres postgres    39 10月 10 16:38 pg_logical
drwx------. 4 postgres postgres    36 10月 10 16:38 pg_multixact
drwx------. 2 postgres postgres    18 10月 26 15:57 pg_notify
drwx------. 2 postgres postgres     6 10月 10 16:38 pg_replslot
drwx------. 2 postgres postgres     6 10月 10 16:38 pg_serial
drwx------. 2 postgres postgres     6 10月 10 16:38 pg_snapshots
drwx------. 2 postgres postgres    63 10月 26 17:00 pg_stat
drwx------. 2 postgres postgres     6 10月 26 17:00 pg_stat_tmp
drwx------. 2 postgres postgres    18 10月 10 16:38 pg_subtrans
drwx------. 2 postgres postgres     6 10月 10 16:38 pg_tblspc
drwx------. 2 postgres postgres     6 10月 10 16:38 pg_twophase
-rw-------. 1 postgres postgres     4 10月 10 16:38 PG_VERSION
drwx------. 3 postgres postgres  4096 10月 26 16:10 pg_xlog
-rw-------. 1 postgres postgres    88 10月 10 16:38 postgresql.auto.conf
-rw-------. 1 postgres postgres 20810 10月 10 16:45 postgresql.conf
-rw-------. 1 postgres postgres    30 10月 26 15:57 postmaster.opts
-rw-r--r--. 1 postgres postgres  5641 10月 16 16:54 recovery.done
[postgres@pgstandby data]$

Note that on this server, /postgres/9.4.10/data/recovery.conf was automatically renamed to /postgres/9.4.10/data/recovery.done when it was promoted to primary. Before we restart it as a standby, recovery.done must be renamed back to recovery.conf, so that on startup it takes the standby role and automatically synchronizes with the primary.

b Promote the new standby (the original primary)

[postgres@pgprimary data]$ ps -ef|grep postgres
root     13481 13458  0 09:22 pts/1    00:00:00 su - postgres
postgres 13482 13481  0 09:22 pts/1    00:00:00 -bash
postgres 14425     1  0 16:55 pts/1    00:00:00 /postgres/9.4.10/bin/postgres
postgres 14426 14425  0 16:55 ?        00:00:00 postgres: startup process   recovering 000000060000000000000005
postgres 14427 14425  0 16:55 ?        00:00:00 postgres: checkpointer process  
postgres 14428 14425  0 16:55 ?        00:00:00 postgres: writer process   
postgres 14429 14425  0 16:55 ?        00:00:00 postgres: stats collector process  
postgres 14894 13482  0 17:01 pts/1    00:00:00 ps -ef
postgres 14895 13482  0 17:01 pts/1    00:00:00 grep --color=auto postgres
[postgres@pgprimary data]$ pg_ctl promote -l /home/postgres/
server promoting
[postgres@pgprimary data]$ ps -ef|grep postgres
root     13481 13458  0 09:22 pts/1    00:00:00 su - postgres
postgres 13482 13481  0 09:22 pts/1    00:00:00 -bash
postgres 14425     1  0 16:55 pts/1    00:00:00 /postgres/9.4.10/bin/postgres
postgres 14427 14425  0 16:55 ?        00:00:00 postgres: checkpointer process  
postgres 14428 14425  0 16:55 ?        00:00:00 postgres: writer process   
postgres 14429 14425  0 16:55 ?        00:00:00 postgres: stats collector process  
postgres 14916 14425  0 17:01 ?        00:00:00 postgres: wal writer process  
postgres 14917 14425  0 17:01 ?        00:00:00 postgres: autovacuum launcher process  
postgres 14918 13482  0 17:01 pts/1    00:00:00 ps -ef
postgres 14919 13482  0 17:01 pts/1    00:00:00 grep --color=auto postgres
[postgres@pgprimary data]$

At this point, the server has returned to its initial state and is the primary again.

c Start the original standby

Before starting it, rename recovery.done back to recovery.conf:

 [postgres@pgstandby data]$ pwd
/postgres/9.4.10/data
[postgres@pgstandby data]$ mv recovery.done recovery.conf
[postgres@pgstandby data]$

Start the standby:

 [postgres@pgstandby data]$ pg_ctl start -l /home/postgres/pg.log 
server starting
[postgres@pgstandby data]$ ps -ef|grep postgres
postgres 19871     1  0 17:07 pts/1    00:00:00 /postgres/9.4.10/bin/postgres
postgres 19872 19871  0 17:07 ?        00:00:00 postgres: startup process   recovering 000000070000000000000005
postgres 19873 19871  0 17:07 ?        00:00:00 postgres: checkpointer process  
postgres 19874 19871  0 17:07 ?        00:00:00 postgres: writer process   
postgres 19875 19871  0 17:07 ?        00:00:00 postgres: stats collector process  
postgres 19876 19871  0 17:07 ?        00:00:00 postgres: wal receiver process   streaming 0/508E108
postgres 19881 29607  0 17:07 pts/1    00:00:00 ps -ef
postgres 19882 29607  0 17:07 pts/1    00:00:00 grep --color=auto postgres
root     29606 29605  0 09:23 pts/1    00:00:00 su - postgres
postgres 29607 29606  0 09:23 pts/1    00:00:00 -bash
[postgres@pgstandby data]$

After startup, a wal receiver process is present, showing that this server is once again in the standby role and has begun synchronizing with the primary.

This completes the switchover in both directions, along with verification of data synchronization and the relevant caveats.
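
After each switchover, every server's role can be double-checked with pg_is_in_recovery(): it returns f on a primary and t on a standby that is still applying WAL. A quick sketch:

postgres=# select pg_is_in_recovery();
 pg_is_in_recovery 
-------------------
 f
(1 row)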

7 Summary

This document has given a detailed procedure for building and configuring a primary-standby replication environment on CentOS 7 + PostgreSQL 9.4.10, using a floating IP. Moving the floating IP as part of the switchover keeps the database reachable at a constant address. Under normal operation the primary serves reads and writes while the standby serves read-only queries. Clients and applications should always access the database through the floating IP, never through a server's real IP.

 

A PostgreSQL SQL Tuning Case: From 2 Seconds to 2 Milliseconds

0 Background

This is a short write-up of a SQL tuning case from a production PostgreSQL environment: the execution time of a single SQL statement dropped from 2 seconds before tuning to 2 milliseconds after, a performance improvement of roughly 1000x. The database version is PostgreSQL 9.3. The original SQL statement:

SELECT t.id,
       t.code,
       t.name,
       t.level,
       t.IS_CLOSED isClosed,
       t.PARENT_ID parentId,
       t.dir,
       t.remark
FROM t_report_type t
WHERE 1=1
  AND t.code IN
    (SELECT report_type
     FROM t_report_temp
     WHERE corp_id IS NULL
       AND partner_id IS NULL)
ORDER BY t.code

The execution plan before tuning:

EXPLAIN ANALYZE
SELECT t.id,
       t.code,
       t.name,
       t.level,
       t.IS_CLOSED isClosed,
       t.PARENT_ID parentId,
       t.dir,
       t.remark
FROM t_report_type t
WHERE 1=1
  AND t.code IN
    (SELECT report_type
     FROM t_report_temp
     WHERE corp_id IS NULL
       AND partner_id IS NULL)
ORDER BY t.code

┌────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│                                                                       QUERY PLAN                                                                       │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ Sort  (cost=3179.42..3179.93 rows=204 width=165) (actual time=2179.224..2179.250 rows=383 loops=1)                                                     │
│   Sort Key: t.code                                                                                                                                     │
│   Sort Method: quicksort  Memory: 126kB                                                                                                                │
│   ->  Nested Loop Semi Join  (cost=0.55..3171.59 rows=204 width=165) (actual time=26.092..2178.526 rows=383 loops=1)                                   │
│         ->  Seq Scan on t_report_type t  (cost=0.00..16.59 rows=459 width=165) (actual time=0.002..0.187 rows=459 loops=1)                             │
│         ->  Index Only Scan using t_report_temp_01 on t_report_temp  (cost=0.55..1873.80 rows=428 width=7) (actual time=4.744..4.744 rows=1 loops=459) │
│               Index Cond: ((corp_id IS NULL) AND (partner_id IS NULL) AND (report_type = (t.code)::text))                                              │
│               Heap Fetches: 53                                                                                                                         │
│ Total runtime: 2179.301 ms                                                                                                                             │
└────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
(9 rows)

1 Analyzing the problem

Analyze the SQL statement together with its execution plan, and examine the table structures and data distribution:

zyd=> \d t_report_type
                 Table "zyd.t_report_type"
┌───────────┬────────────────────────┬─────────────────────┐
│  Column   │          Type          │      Modifiers      │
├───────────┼────────────────────────┼─────────────────────┤
│ id        │ character(32)          │ not null            │
│ code      │ character varying(32)  │ not null            │
│ name      │ character varying(256) │ not null            │
│ parent_id │ character varying(32)  │                     │
│ dir       │ character varying(256) │                     │
│ level     │ smallint               │                     │
│ is_closed │ character(1)           │ default '0'::bpchar │
│ remark    │ character varying(200) │                     │
└───────────┴────────────────────────┴─────────────────────┘
Indexes:
    "t_report_type_pkey" PRIMARY KEY, btree (id)
    "t_report_type_01" btree (code, parent_id, dir)

zyd=> select count(*) from t_report_type;
┌───────┐
│ count │
├───────┤
│   459 │
└───────┘
(1 row)

zyd=> select count(*) from t_report_temp;
┌────────┐
│ count  │
├────────┤
│ 366132 │
└────────┘
(1 row)

zyd=> \d t_report_temp
                 Table "zyd.t_report_temp"
┌─────────────┬────────────────────────────────┬───────────┐
│   Column    │              Type              │ Modifiers │
├─────────────┼────────────────────────────────┼───────────┤
│ id          │ character(32)                  │ not null  │
│ report_code │ character varying(40)          │ not null  │
│ report_name │ character varying(100)         │ not null  │
│ report_type │ character varying(32)          │ not null  │
│ corp_id     │ character(32)                  │           │
│ partner_id  │ character(32)                  │           │
│ industry_id │ character(32)                  │           │
│ is_default  │ character(1)                   │           │
│ remark      │ character varying(200)         │           │
│ source_type │ character varying(255)         │           │
│ is_show     │ character(1)                   │ default 0 │
│ create_ts   │ timestamp(6) without time zone │           │
│ update_ts   │ timestamp(6) without time zone │           │
│ is_simple   │ character(1)                   │           │
└─────────────┴────────────────────────────────┴───────────┘
Indexes:
    "t_report_temp_pkey" PRIMARY KEY, btree (id)
    "t_report_temp_01" btree (corp_id, industry_id, partner_id, report_code, report_type, report_name)
    "t_report_temp_02" btree (industry_id, partner_id, report_type)
    "t_report_temp_03" btree (industry_id, report_type)
    "t_report_temp_04" btree (report_name, corp_id, report_code)
    "t_report_temp_index_1" btree (corp_id, report_code)

zyd=> select count(*) from t_report_temp where corp_id is null and partner_id is null;
┌───────┐
│ count │
├───────┤
│  1126 │
└───────┘
(1 row)

zyd=>

This is a simple query joining two tables, for which the optimizer chose a nested loop (semi) join. t_report_type is the small table, with 459 rows, accessed by a full table scan. t_report_temp is larger, with 366,132 rows, of which only 1,126 satisfy the filter; it is accessed by an index only scan that takes 4.744 ms per execution.

That is the basic information from the execution plan. The main opening for optimization is whether the tables can be accessed faster, even though an index only scan is already being used here.

3 Solving the problem

First, re-collect the tables' statistics and see whether the execution plan improves:

zyd=> analyze verbose t_report_temp;
INFO:  analyzing "zyd.t_report_temp"
INFO:  "t_report_temp": scanned 9494 of 9494 pages, containing 366132 live rows and 1345 dead rows; 30000 rows in sample, 366132 estimated total rows
ANALYZE
zyd=> analyze verbose t_report_type;
INFO:  analyzing "zyd.t_report_type"
INFO:  "t_report_type": scanned 12 of 12 pages, containing 459 live rows and 0 dead rows; 459 rows in sample, 459 estimated total rows
ANALYZE
zyd=>
...check the execution plan again
...
┌────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│                                                                       QUERY PLAN                                                                       │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ Sort  (cost=3179.42..3179.93 rows=204 width=165) (actual time=1973.808..1973.836 rows=383 loops=1)                                                     │
│   Sort Key: t.code                                                                                                                                     │
│   Sort Method: quicksort  Memory: 126kB                                                                                                                │
│   ->  Nested Loop Semi Join  (cost=0.55..3171.59 rows=204 width=165) (actual time=29.583..1973.127 rows=383 loops=1)                                   │
│         ->  Seq Scan on t_report_type t  (cost=0.00..16.59 rows=459 width=165) (actual time=0.002..0.175 rows=459 loops=1)                             │
│         ->  Index Only Scan using t_report_temp_01 on t_report_temp  (cost=0.55..1873.80 rows=428 width=7) (actual time=4.297..4.297 rows=1 loops=459) │
│               Index Cond: ((corp_id IS NULL) AND (partner_id IS NULL) AND (report_type = (t.code)::text))                                              │
│               Heap Fetches: 53                                                                                                                         │
│ Total runtime: 1973.885 ms                                                                                                                             │
└────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
(9 rows)

zyd=>

There is no improvement. Looking at the plan, for the larger table t_report_temp the optimizer chose an index only scan via the index t_report_temp_01, each execution taking around 4 ms; but with loops=459, that adds up to more than 1,800 ms.

Moreover, t_report_temp_01 is a composite index built on six columns (corp_id, industry_id, partner_id, report_code, report_type, report_name). Check the sizes of the table and this index:

zyd=> \dt+ t_report_temp
                        List of relations
┌────────┬───────────────┬───────┬───────┬───────┬──────────────┐
│ Schema │     Name      │ Type  │ Owner │ Size  │ Description  │
├────────┼───────────────┼───────┼───────┼───────┼──────────────┤
│ zyd    │ t_report_temp │ table │ zyd   │ 74 MB │ 报表模板主表 │
└────────┴───────────────┴───────┴───────┴───────┴──────────────┘
(1 row)

zyd=> \di+ t_report_temp_01
                                 List of relations
┌────────┬──────────────────┬───────┬───────┬───────────────┬───────┬─────────────┐
│ Schema │       Name       │ Type  │ Owner │     Table     │ Size  │ Description │
├────────┼──────────────────┼───────┼───────┼───────────────┼───────┼─────────────┤
│ zyd    │ t_report_temp_01 │ index │ zyd   │ t_report_temp │ 80 MB │             │
└────────┴──────────────────┴───────┴───────┴───────────────┴───────┴─────────────┘
(1 row)

zyd=>

The index is 80 MB, larger than the table itself at 74 MB. Can we infer that, in this situation, fetching the data through this index is a poor choice, perhaps even worse than scanning the table directly?
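
Incidentally, the same sizes can be read with the built-in size functions instead of \dt+ / \di+, for example:

zyd=> select pg_size_pretty(pg_relation_size('t_report_temp')) as table_size,
             pg_size_pretty(pg_relation_size('t_report_temp_01')) as index_size;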

Let's verify:

zyd=> set enable_indexscan =off;
SET
zyd=> set enable_indexonlyscan =off;
SET
zyd=> set enable_bitmapscan =off;
SET
zyd=> explain analyze..
..
..
┌─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│                                                             QUERY PLAN                                                              │
├─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ Merge Join  (cost=13420.55..13423.58 rows=204 width=165) (actual time=69.095..69.417 rows=383 loops=1)                              │
│   Merge Cond: ((t.code)::text = (t_report_temp.report_type)::text)                                                                  │
│   ->  Sort  (cost=36.88..38.03 rows=459 width=165) (actual time=0.513..0.569 rows=459 loops=1)                                      │
│         Sort Key: t.code                                                                                                            │
│         Sort Method: quicksort  Memory: 145kB                                                                                       │
│         ->  Seq Scan on t_report_type t  (cost=0.00..16.59 rows=459 width=165) (actual time=0.002..0.121 rows=459 loops=1)          │
│   ->  Sort  (cost=13383.67..13383.79 rows=49 width=7) (actual time=68.576..68.600 rows=383 loops=1)                                 │
│         Sort Key: t_report_temp.report_type                                                                                         │
│         Sort Method: quicksort  Memory: 42kB                                                                                        │
│         ->  HashAggregate  (cost=13381.80..13382.29 rows=49 width=7) (actual time=68.253..68.309 rows=383 loops=1)                  │
│               ->  Seq Scan on t_report_temp  (cost=0.00..13163.63 rows=87269 width=7) (actual time=0.005..67.787 rows=1126 loops=1) │
│                     Filter: ((corp_id IS NULL) AND (partner_id IS NULL))                                                            │
│                     Rows Removed by Filter: 365114                                                                                  │
│ Total runtime: 69.480 ms                                                                                                            │
└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
(14 rows)

zyd=> 

After disabling index only scan, index scan and bitmap scan so that the optimizer chooses a full scan of t_report_temp, the SQL as a whole actually runs faster.

So, try creating a composite index on (report_type, corp_id, partner_id) and observe the SQL again:

zyd=> create index CONCURRENTLY idx_3 on t_report_temp(report_type,corp_id,partner_id);
CREATE INDEX
zyd=> explain analyze ....
..
┌───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│                                                                QUERY PLAN                                                                 │
├───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ Sort  (cost=267.54..268.05 rows=204 width=165) (actual time=2.525..2.554 rows=383 loops=1)                                                │
│   Sort Key: t.code                                                                                                                        │
│   Sort Method: quicksort  Memory: 126kB                                                                                                   │
│   ->  Nested Loop Semi Join  (cost=0.42..259.72 rows=204 width=165) (actual time=0.041..2.083 rows=383 loops=1)                           │
│         ->  Seq Scan on t_report_type t  (cost=0.00..16.59 rows=459 width=165) (actual time=0.002..0.053 rows=459 loops=1)                │
│         ->  Index Only Scan using idx_3 on t_report_temp  (cost=0.42..29.58 rows=428 width=7) (actual time=0.004..0.004 rows=1 loops=459) │
│               Index Cond: ((report_type = (t.code)::text) AND (corp_id IS NULL) AND (partner_id IS NULL))                                 │
│               Heap Fetches: 27                                                                                                            │
│ Total runtime: 2.600 ms                                                                                                                   │
└───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
(9 rows)

zyd=>

The SQL's execution time has dropped from the original 2 seconds to 2.6 milliseconds.

4 Summary

Analyzing the execution plan in depth, together with the table structures, data volumes and data distribution, is the starting point of good SQL tuning. Reading execution plans well takes deliberate practice: read many plans, read them carefully, and keep thinking about them.

On reading execution plans, the PostgreSQL documentation itself notes:

Plan-reading is an art that requires some experience to master, but this section attempts to cover the basics.

Configuring the psql Client for PostgreSQL

When using the psql client to manage and maintain PostgreSQL databases, formatting problems come up constantly, such as output being wrapped or swallowed by the pager. For DBAs used to Oracle's SQL*Plus client, this is a nightmare. Collected here are some useful psql settings.

1 Output cut off by the pager

ai_task1=> \d t_ai_prd_task_field                
                  Table "ai_task1.t_ai_prd_task_field"
          Column           |              Type              | Modifiers 
---------------------------+--------------------------------+-----------
 id                        | character varying(32)          | not null
 task_id                   | character varying(32)          | 
 job_field_id              | character varying(32)          | 
 item_id                   | character varying(32)          | 
 ...
 ...
 sort_no                   | smallint                       | 
 template_display_name     | character varying(128)         | 
Indexes:
    "t_ai_prd_task_field_pkey" PRIMARY KEY, btree (id)
    "idx_task_field_applicant_name" btree (applicant_name)
    "idx_task_field_bill_code" btree (bill_code)
    "idx_task_field_item_id" btree (item_id)
    "idx_task_field_job_field_id" btree (job_field_id)
    "idx_task_field_job_id" btree (job_id)
    "idx_task_field_packet_code" btree (packet_code)
    "idx_task_field_status" btree (status)
    "idx_task_field_submit_time" btree (submit_time)
    "idx_task_field_task_id" btree (task_id)
tegory, cust_category, template_name, input_source)name, applicant_name, task_ca--More--

ai_task1=>

When displaying the table structure, the index information at the end is cut off: instead of the last index's definition, all you see is --More--, which is exasperating. It can be fixed at the prompt with \pset pager off.

ai_task1=> \pset pager off
Pager usage is off.
ai_task1=> \d t_ai_prd_task_field                
                  Table "ai_task1.t_ai_prd_task_field"
          Column           |              Type              | Modifiers 
---------------------------+--------------------------------+-----------
 id                        | character varying(32)          | not null
 task_id                   | character varying(32)          | 
 job_field_id              | character varying(32)          | 
 item_id                   | character varying(32)          | 
 ...
 ...
 sort_no                   | smallint                       | 
 template_display_name     | character varying(128)         | 
Indexes:
    "t_ai_prd_task_field_pkey" PRIMARY KEY, btree (id)
    "idx_task_field_applicant_name" btree (applicant_name)
    "idx_task_field_bill_code" btree (bill_code)
    "idx_task_field_item_id" btree (item_id)
    "idx_task_field_job_field_id" btree (job_field_id)
    "idx_task_field_job_id" btree (job_id)
    "idx_task_field_packet_code" btree (packet_code)
    "idx_task_field_status" btree (status)
    "idx_task_field_submit_time" btree (submit_time)
    "idx_task_field_task_id" btree (task_id)
    "idx_task_field_union_gin" gin (template_field_name, applicant_name, task_category, cust_category, template_name, input_source)

ai_task1=>

Instantly clean and perfectly readable.

2 Setting the output line style and border

ai_task1=> \pset border 2
Border style is 2.
ai_task1=> \pset linestyle unicode 
Line style is unicode.
ai_task1=> \d t_07 
                      Table "ai_task1.t_07"
┌───────────────────────────┬────────────────────────┬───────────┐
│          Column           │          Type          │ Modifiers │
├───────────────────────────┼────────────────────────┼───────────┤
│ bill_code                 │ character varying(12)  │           │
│ template_no               │ character varying(32)  │           │
│ template_field_identifier │ character varying(64)  │           │
│ company_name              │ character varying(255) │           │
│ submit_value              │ character varying      │           │
└───────────────────────────┴────────────────────────┴───────────┘

ai_task1=>

Neat, tidy and easy on the eyes.

 

3 Wrapping long output

postgres=# select * from t;
 id |                                                                                                                                                                                                                                            content                                                                                                                                                                                                                                             
----+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  1 | If you are used to psql, you may notice in the picture above, some content is wrapped. This is \pset format wrapped option.
  2 | Some commands take an SQL identifier (such as a table name) as argument. These arguments follow the syntax rules of SQL: Unquoted letters are forced to lowercase, while double quotes (") protect letters from case conversion and allow incorporation of whitespace into the identifier. Within double quotes, paired double quotes reduce to a single double quote in the resulting name. For example, FOO"BAR"BAZ is interpreted as fooBARbaz, and "A weird"" name" becomes A weird" name.+
    |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               +
    | 
(2 rows)

postgres=#

 

In a small terminal window, this output is not very friendly. It can be improved with \pset format wrapped:

postgres=# \pset format wrapped 
Output format is wrapped.
postgres=# select * from t;     
 id |                                       content                                        
----+--------------------------------------------------------------------------------------
  1 | If you are used to psql, you may notice in the picture above, some content is wrappe.
    |.d. This is \pset format wrapped option.
  2 | Some commands take an SQL identifier (such as a table name) as argument. These argum.
    |.ents follow the syntax rules of SQL: Unquoted letters are forced to lowercase, while.
    |. double quotes (") protect letters from case conversion and allow incorporation of w.
    |.hitespace into the identifier. Within double quotes, paired double quotes reduce to .
    |.a single double quote in the resulting name. For example, FOO"BAR"BAZ is interpreted.
    |. as fooBARbaz, and "A weird"" name" becomes A weird" name.                          +
    |                                                                                     +
    | 
(2 rows)

postgres=#

4 Setting the editor

If the EDITOR=vi environment variable is set for the user running psql, typing \e at the psql prompt opens vi to edit the SQL statement. Some caveats:

a \e only edits the most recent SQL statement;

b \e cannot edit commands read from a SQL script given on the command line, unlike Oracle;

c if \e fails with E437: terminal capability "cm" required, the TERM environment variable is probably not set; setting it fixes the problem.

[enterprisedb@ppasdev ~]$ export EDITOR=vi
[enterprisedb@ppasdev ~]$ export TERM=xterm
[enterprisedb@ppasdev ~]$ sh local 
psql.bin (9.3.11.33)
Type "help" for help.

postgres=# select * from t;
..
..
postgres=# \e
vi opens to edit the SQL statement.

5 Summary

psql settings can be placed in a file named .psqlrc in the home directory of the user that runs the psql command-line tool:

[enterprisedb@ecs-public2 ~]$ cat ~/.psqlrc 
\pset pager off
\pset border 2
\pset linestyle unicode 
\pset format wrapped 
[enterprisedb@ecs-public2 ~]$

Environment variable settings can go into the user's ~/.bash_profile:

[enterprisedb@ecs-public2 ~]$ cat ~/.bash_profile 
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs
PATH=$PATH:$HOME/bin:/ppas/9.3as/bin

export PATH

export LANG=en_US.UTF-8
export PGDATA=/ppas/9.3as/data
export EDITOR=vi
export TERM=xterm
[enterprisedb@ecs-public2 ~]$

More specifics on these settings are in the official documentation: https://www.postgresql.org/docs/9.1/static/app-psql.html

This is just a quick note to solve the few thorny problems I have personally hit so far; I will update it as I read more of the documentation.

Study notes on PostgreSQL's pg_hba.conf file

Let's study the PostgreSQL pg_hba.conf file from the following angles.

1 What pg_hba.conf does

The pg_hba.conf file is what the server consults to decide whether, and how, a client/application may connect to the database. (HBA stands for host-based authentication.)

A slightly forced cross-product analogy: its role resembles that of Oracle's listener.ora file.

When a pg client accesses the database, it supplies the server host name or IP address, the port, a database name, a database user, and the user's password or whatever the authentication method requires; pg_hba.conf determines which authentication rule applies to that combination. Likewise, an Oracle client supplies the host name or IP, port, protocol, and service name, and that information is recorded in listener.ora.

2 Where pg_hba.conf lives

By default the file sits under the server's data directory and is named pg_hba.conf. None of this is absolute, especially on Unix/Linux, where a file extension is only a convention, unlike on Windows, where .exe always denotes an executable, and so on.

A superuser can run show hba_file at the psql prompt to get the exact path of pg_hba.conf.

postgres=# show hba_file ;
              hba_file               
-------------------------------------
 /orabackup/pg9.6.4/data/pg_hba.conf
(1 row)

postgres=#

If an ordinary user without the privilege runs the command, an error like this may appear:

ai=> show hba_file;
ERROR:  must be superuser to examine "hba_file"
ai=>

 

To change or relocate pg_hba.conf, set the hba_file parameter in postgresql.conf; back up pg_hba.conf beforehand, and note that the server must be restarted for the change to take effect.

3 What pg_hba.conf contains

  • It is a plain text file and can be edited directly with a text editor;
  • Lines starting with # are comments;
  • Each line is one record; a record cannot span multiple lines;
  • Every effective record contains 5 fields: connection type, database name, user name, allowed client IP address or range, and authentication method, separated by whitespace. Like this:
# TYPE  DATABASE        USER            ADDRESS                 METHOD

# "local" is for Unix domain socket connections only
local   all             all                                     trust
# IPv4 local connections:
host    all             all             127.0.0.1/32            trust
host    all             all             172.0.0.0/8             md5

The line beginning with local means the database accepts connections made over the host's Unix socket. Without a local line, the server refuses socket connections. In plain terms, if that line is missing or commented out, a bare psql invocation cannot reach the database:

[postgres@localhost data]$ psql
psql: FATAL:  no pg_hba.conf entry for host "[local]", user "postgres", database "postgres"
[postgres@localhost data]$

The first host line means clients on the database host's loopback address 127.0.0.1 may connect by supplying the database IP address, port, database name, and user name; the authentication method is trust, i.e. access without a password. Any user that already exists in the database may access any database, e.g.:

[postgres@localhost data]$ psql -h localhost -p 5432 -d postgres -U postgres
psql (9.6.4)
Type "help" for help.

postgres=# \q
[postgres@localhost data]$ psql -h 127.0.0.1 -p 5432 -d postgres -U ai
psql (9.6.4)
Type "help" for help.

postgres=>

The second host line means any address in 172.0.0.0/8, i.e. 172.0.0.0 through 172.255.255.255, may initiate a connection to any database as any existing database user, but the user's password must be supplied, because the authentication method is md5. For example:

[postgres@localhost data]$ psql -h 172.16.11.35 -p 5432 -d postgres -U enterprisedb
Password for user enterprisedb: 
psql (9.6.4, server 9.3.11.33)
Type "help" for help.

postgres=#

A client at 172.18.1.12 initiates a connection to the server at 172.16.11.35 and, after supplying the password of the database user enterprisedb, connects successfully.

  • Besides local and host, the connection types hostssl and hostnossl also exist;
  • all in the database and user fields means access to any database, as any database user; to restrict specific databases or users, name them explicitly;
  • During connection validation, the server searches the file top to bottom; as soon as a record matches the incoming connection request, validation follows that record and the search stops. Success means access is granted, failure means it is denied. For example:

Suppose pg_hba.conf contains these 2 records:

# IPv4 local connections:
host    all             all             127.0.0.1/32            md5 
host    all             all             127.0.0.1/32            trust

Then the server picks the first record for validation, so the password of the database user postgres must be entered correctly, otherwise an error results. As follows:

[postgres@localhost data]$ psql -h 127.0.0.1 -p 5432 -d postgres -U postgres
Password for user postgres: 
psql: FATAL:  password authentication failed for user "postgres"
[postgres@localhost data]$

Clearly, for this failed attempt, the server did not fall through to the second record, whose trust method needs no password;
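The top-down first-match behaviour can be sketched in Python. This is a toy model, not the server's actual parser; the field names and the helper `first_match` are illustrative only:

```python
# Toy model of pg_hba.conf first-match semantics: the server walks the
# records top to bottom and uses the FIRST one whose type/database/user/
# address all match the incoming connection; later records are ignored.

def first_match(rules, conn):
    for rule in rules:
        if (rule["type"] == conn["type"]
                and rule["database"] in ("all", conn["database"])
                and rule["user"] in ("all", conn["user"])
                and rule["address"] == conn["address"]):
            return rule["method"]
    return None  # no record matches: the connection is rejected

# The two sample records from above, in file order.
rules = [
    {"type": "host", "database": "all", "user": "all",
     "address": "127.0.0.1/32", "method": "md5"},
    {"type": "host", "database": "all", "user": "all",
     "address": "127.0.0.1/32", "method": "trust"},
]

conn = {"type": "host", "database": "postgres",
        "user": "postgres", "address": "127.0.0.1/32"}

# The first record wins, so md5 applies and a password is required,
# even though a later trust record would have allowed passwordless login.
print(first_match(rules, conn))  # → md5
```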

  • In the database field, multiple databases can be matched by separating their names with commas;
  • The value replication in the database field does not grant access to a database named replication; it matches connection requests from a physical standby;
  • In the address field, for example:
172.20.143.89/32 for a single host: matches only the address 172.20.143.89;
172.20.143.0/24 for a small network: matches the range 172.20.143.0 through 172.20.143.255;
10.6.0.0/16 for a larger one: matches the range 10.6.0.0 through 10.6.255.255;
::1/128 for a single host: matches the IPv6 loopback address, the equivalent of IPv4's 127.0.0.1;
0.0.0.0/0 represents all IPv4 addresses;
::0/0 represents all IPv6 addresses;
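These address-field semantics are ordinary CIDR containment, which Python's standard ipaddress module can sanity-check (nothing PostgreSQL-specific here, just verifying the ranges listed above):

```python
from ipaddress import ip_address, ip_network

# /32 matches exactly one host
assert ip_address("172.20.143.89") in ip_network("172.20.143.89/32")

# /24 covers the full last octet: .0 through .255
net24 = ip_network("172.20.143.0/24")
assert ip_address("172.20.143.255") in net24
assert ip_address("172.20.144.1") not in net24

# /16 covers the last two octets
assert ip_address("10.6.255.255") in ip_network("10.6.0.0/16")

# 0.0.0.0/0 matches every IPv4 address
assert ip_address("8.8.8.8") in ip_network("0.0.0.0/0")

print("all CIDR checks passed")
```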

4 Changing pg_hba.conf

After editing the file directly with a text editor, following its fixed format, run pg_ctl reload for the changes to take effect.

5 Personal takeaways

  • Understood, at the level of fundamentals, what this file is for;
  • Understood what the local line really means;
  • Learned how this file works together with the listen_addresses parameter in postgresql.conf to control and restrict network access to the database: with listen_addresses = 'localhost', the server accepts no connections from other machines over the network at all, only connections made from the database server host itself;
  • Picked up a bit about network masks, e.g. what 172.20.143.0/24 actually means.

For more detail, see the official documentation: https://www.postgresql.org/docs/current/static/auth-pg-hba-conf.html

The difference between explain and explain analyze in PostgreSQL

With PostgreSQL you can view a SQL statement's execution plan directly with explain your_statements on the command line.

For fuller usage details, run \h explain at the psql prompt.

The point to note here is that explain analyze does not just produce the plan: it really executes the statement. And by default, PostgreSQL transactions are automatically opened and committed. So using explain analyze on DML statements calls for extra care. For example:

postgres=# \c ai ai
You are now connected to database "ai" as user "ai".
ai=> \d p_table
           Table "public.p_table"
 Column |         Type          | Modifiers 
--------+-----------------------+-----------
 id     | integer               | not null
 name   | character varying(32) | 
Indexes:
    "p_table_pkey" PRIMARY KEY, btree (id)
Referenced by:
    TABLE "c_table" CONSTRAINT "c_table_id_fkey" FOREIGN KEY (id) REFERENCES p_table(id) ON UPDATE CASCADE ON DELETE CASCADE

ai=> select * from p_table ;
 id |    name    
----+------------
  3 | onlyou.com
(1 row)

ai=> select * from c_table ;
 id |  info  
----+--------
  3 | xiamen
(1 row)

ai=> explain delete from p_table ;
                           QUERY PLAN                           
----------------------------------------------------------------
 Delete on p_table  (cost=0.00..17.10 rows=710 width=6)
   ->  Seq Scan on p_table  (cost=0.00..17.10 rows=710 width=6)
(2 rows)

ai=> select * from p_table ;
 id |    name    
----+------------
  3 | onlyou.com
(1 row)

ai=> explain analyze delete from p_table ;
                                                QUERY PLAN                                                
----------------------------------------------------------------------------------------------------------
 Delete on p_table  (cost=0.00..17.10 rows=710 width=6) (actual time=0.074..0.074 rows=0 loops=1)
   ->  Seq Scan on p_table  (cost=0.00..17.10 rows=710 width=6) (actual time=0.013..0.014 rows=1 loops=1)
 Planning time: 0.050 ms
 Trigger for constraint c_table_id_fkey: time=2.637 calls=1
 Execution time: 2.735 ms
(5 rows)

ai=> select * from p_table ;
 id | name 
----+------
(0 rows)

ai=>

explain analyze delete from p_table ; viewed the plan of a delete statement, and the net result was that the table's rows were really deleted and committed.
For any DML, if you only want to see the plan via explain analyze, you must explicitly open a transaction first, view the plan, and finally roll the transaction back:

postgres=# begin;
BEGIN
postgres=# explain analyze xxxxxxxxx;
postgres=# rollback;
ROLLBACK
postgres=#

How to install PostgreSQL from source on Linux

0 Overview

This post records the process, steps, and caveats of installing and configuring PostgreSQL from source on CentOS/RHEL Linux.

I Prerequisites

The OS must already have the following software installed. If anything is missing, it can be installed directly with yum.

1 gmake or make, version 3.80 or newer

[root@localhost ~]# make -v
GNU Make 3.82
Built for x86_64-redhat-linux-gnu
Copyright (C) 2010  Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
[root@localhost ~]# gmake -v
GNU Make 3.82
Built for x86_64-redhat-linux-gnu
Copyright (C) 2010  Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
[root@localhost ~]# which make
/usr/bin/make
[root@localhost ~]# which gmake
/usr/bin/gmake
[root@localhost ~]# ll /usr/bin/make
-rwxr-xr-x. 1 root root 182776 11月  6 2016 /usr/bin/make
[root@localhost ~]# ll /usr/bin/gmake
lrwxrwxrwx. 1 root root 4 1月   8 2018 /usr/bin/gmake -> make
[root@localhost ~]#

Here gmake is version 3.82. gmake is a symlink pointing to make; in fact, on CentOS/RHEL, gmake is by default simply make.

2 C compiler

[root@localhost ~]# which gcc
/usr/bin/gcc
[root@localhost ~]# gcc -v
使用内建 specs。
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-redhat-linux/4.8.5/lto-wrapper
目标:x86_64-redhat-linux
配置为:../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-bootstrap --enable-shared --enable-threads=posix --enable-checking=release --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-linker-build-id --with-linker-hash-style=gnu --enable-languages=c,c++,objc,obj-c++,java,fortran,ada,go,lto --enable-plugin --enable-initfini-array --disable-libgcj --with-isl=/builddir/build/BUILD/gcc-4.8.5-20150702/obj-x86_64-redhat-linux/isl-install --with-cloog=/builddir/build/BUILD/gcc-4.8.5-20150702/obj-x86_64-redhat-linux/cloog-install --enable-gnu-indirect-function --with-tune=generic --with-arch_32=x86-64 --build=x86_64-redhat-linux
线程模型:posix
gcc 版本 4.8.5 20150623 (Red Hat 4.8.5-16) (GCC) 
[root@localhost ~]#

If none is present, just install the latest gcc.

3 tar

[root@localhost ~]# which tar
/usr/bin/tar
[root@localhost ~]# tar --version
tar (GNU tar) 1.26
Copyright (C) 2011 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
这是自由软件:您可以自由地更改并重新发布它。
在法律所允许的范围内不含任何担保。

由 John Gilmore 和 Jay Fenlason 所写。
[root@localhost ~]#

Used to unpack the source tarball.

If the downloaded source file looks like postgresql-9.4.10.tar.gz, unpack it with tar -zxvf postgresql-9.4.10.tar.gz;
if it looks like postgresql-9.4.10.tar.bz2, unpack it with tar jxvf postgresql-9.4.10.tar.bz2.
Note that the second command is written without the leading dash (tar accepts its options either way).

4 GNU readline library

This library is enabled by default. It lets you recall earlier commands with the up/down arrow keys at the psql prompt and edit them. If you do not need this, pass --without-readline to configure.

5 zlib compression library

This library is enabled by default. Building with --without-zlib means pg_dump/pg_restore cannot produce or read compressed archives: compression of the custom dump format relies on zlib.

My own first tests did not pin down this library's role precisely; the two posts below explore pg_dump compression in more depth:

https://dan.langille.org/2013/06/10/using-compression-with-postgresqls-pg_dump/

http://peter.eisentraut.org/blog/2015/12/07/check-your-pg-dump-compression-levels/
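As a side note on what the zlib library itself does, here is a minimal Python sketch of its compression levels (0 stores data uncompressed, 9 compresses hardest), the same trade-off that pg_dump exposes via its -Z option for custom-format dumps; the sample data is arbitrary:

```python
import zlib

data = b"PostgreSQL " * 1000  # highly repetitive, so it compresses well

raw  = zlib.compress(data, level=0)  # no compression, only framing
fast = zlib.compress(data, level=1)
best = zlib.compress(data, level=9)

# Level 0 output is slightly LARGER than the input (zlib adds header,
# stored-block framing, and a checksum); real levels shrink it a lot.
print(len(data), len(raw), len(fast), len(best))
assert len(raw) > len(data)
assert len(best) <= len(fast) < len(data)
assert zlib.decompress(best) == data  # lossless round-trip
```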

II Installing from source

1 Create the postgres user

[root@dev-middleware ~]# groupadd postgres
[root@dev-middleware ~]# useradd -g postgres postgres
[root@dev-middleware ~]# passwd postgres
更改用户 postgres 的密码 。
新的 密码:
无效的密码: 它基于字典单词
重新输入新的 密码:
passwd:所有的身份验证令牌已经成功更新。
[root@dev-middleware ~]#

2 Create the installation directory

Assume the database software will be installed under /data/postgres/10.4/ and the cluster data will live under /data/postgres/10.4/data.
Only /data/postgres/10.4/ needs to be created now; the /data/postgres/10.4/data directory must not be created in advance.

[root@dev-middleware ~]# mkdir -p /data/postgres/10.4/
[root@dev-middleware ~]# chown -R postgres:postgres /data/postgres/10.4/
[root@dev-middleware ~]#

3 Get the source

Download the source-format installation file from the official PostgreSQL website.

[root@dev-middleware ~]# su - postgres
[postgres@dev-middleware ~]$ pwd
/home/postgres
[postgres@dev-middleware ~]$ wget https://ftp.postgresql.org/pub/source/v10.4/postgresql-10.4.tar.gz
--2018-08-06 11:25:51--  https://ftp.postgresql.org/pub/source/v10.4/postgresql-10.4.tar.gz
正在解析主机 ftp.postgresql.org (ftp.postgresql.org)... 87.238.57.227, 217.196.149.55, 204.145.124.244, ...
正在连接 ftp.postgresql.org (ftp.postgresql.org)|87.238.57.227|:443... 已连接。
已发出 HTTP 请求,正在等待回应... 200 OK
长度:26652442 (25M) [application/x-gzip]
正在保存至: “postgresql-10.4.tar.gz”

100%[======================================================================================================================================================================================================>] 26,652,442  5.83MB/s 用时 6.4s   

2018-08-06 11:26:03 (3.98 MB/s) - 已保存 “postgresql-10.4.tar.gz” [26652442/26652442])

[postgres@dev-middleware ~]$ ll
总用量 26028
-rw-rw-r-- 1 postgres postgres 26652442 5月   8 05:06 postgresql-10.4.tar.gz
[postgres@dev-middleware ~]$

4 Unpack the source

If the downloaded source file looks like postgresql-9.4.10.tar.bz2, unpack it with tar jxvf postgresql-9.4.10.tar.bz2.

Unpacking creates a new postgresql-10.4 directory that holds the source-format installation files.

[postgres@dev-middleware ~]$ pwd
/home/postgres
[postgres@dev-middleware ~]$ ll
总用量 26028
-rw-rw-r-- 1 postgres postgres 26652442 5月   8 05:06 postgresql-10.4.tar.gz
[postgres@dev-middleware ~]$ 
[postgres@dev-middleware ~]$ tar -zxvf postgresql-10.4.tar.gz 
...
...
postgresql-10.4/GNUmakefile.in
postgresql-10.4/.gitattributes
postgresql-10.4/aclocal.m4
postgresql-10.4/configure.in
postgresql-10.4/INSTALL
[postgres@dev-middleware ~]$

5 Run configure

Among the command-line options of configure, --prefix sets the path PostgreSQL is installed into. Without it, the default install location is /usr/local/pgsql.

[postgres@dev-middleware ~]$ pwd
/home/postgres
[postgres@dev-middleware ~]$ ll
总用量 26032
drwxrwxr-x 6 postgres postgres     4096 5月   8 05:06 postgresql-10.4
-rw-rw-r-- 1 postgres postgres 26652442 5月   8 05:06 postgresql-10.4.tar.gz
[postgres@dev-middleware ~]$ cd postgresql-10.4
[postgres@dev-middleware postgresql-10.4]$ ll
总用量 676
-rw-r--r--  1 postgres postgres    457 5月   8 04:51 aclocal.m4
drwxrwxr-x  2 postgres postgres   4096 5月   8 05:03 config
-rwxr-xr-x  1 postgres postgres 498962 5月   8 04:51 configure
-rw-r--r--  1 postgres postgres  76909 5月   8 04:51 configure.in
drwxrwxr-x 55 postgres postgres   4096 5月   8 05:03 contrib
-rw-r--r--  1 postgres postgres   1192 5月   8 04:51 COPYRIGHT
drwxrwxr-x  3 postgres postgres   4096 5月   8 05:03 doc
-rw-r--r--  1 postgres postgres   3638 5月   8 04:51 GNUmakefile.in
-rw-r--r--  1 postgres postgres    284 5月   8 04:51 HISTORY
-rw-r--r--  1 postgres postgres  71584 5月   8 05:06 INSTALL
-rw-r--r--  1 postgres postgres   1682 5月   8 04:51 Makefile
-rw-r--r--  1 postgres postgres   1212 5月   8 04:51 README
drwxrwxr-x 16 postgres postgres   4096 5月   8 05:06 src
[postgres@dev-middleware postgresql-10.4]$ ./configure --prefix=/data/postgres/10.4/
...
...
config.status: linking src/include/port/linux.h to src/include/pg_config_os.h
config.status: linking src/makefiles/Makefile.linux to src/Makefile.port
[postgres@dev-middleware postgresql-10.4]$

6 Run make world

A plain make is enough to build the software, but make world additionally builds the PostgreSQL documentation, HTML, and the other modules. For example, in some environments uuid functions or gin indexes turn out to be unusable because the corresponding modules were not compiled in. Fixing that is not complicated: go into the source directory used for the original install and rerun make world, then make install-world.

My advice: use make world right from the initial install.

[postgres@dev-middleware postgresql-10.4]$ make world
..
..
make[2]: 离开目录“/home/postgres/postgresql-10.4/contrib/vacuumlo”
make[1]: 离开目录“/home/postgres/postgresql-10.4/contrib”
PostgreSQL, contrib, and documentation successfully made. Ready to install.
[postgres@dev-middleware postgresql-10.4]$

7 Run make install-world

[postgres@dev-middleware postgresql-10.4]$ make install-world 
...
...
/bin/mkdir -p '/data/postgres/10.4/bin'
/bin/install -c  vacuumlo '/data/postgres/10.4/bin'
make[2]: 离开目录“/home/postgres/postgresql-10.4/contrib/vacuumlo”
make[1]: 离开目录“/home/postgres/postgresql-10.4/contrib”
PostgreSQL, contrib, and documentation installation complete.
[postgres@dev-middleware postgresql-10.4]$

8 Initialize the cluster

[postgres@dev-middleware postgresql-10.4]$ /data/postgres/10.4/bin/initdb -D /data/postgres/10.4/data
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale "zh_CN.UTF-8".
The default database encoding has accordingly been set to "UTF8".
initdb: could not find suitable text search configuration for locale "zh_CN.UTF-8"
The default text search configuration will be set to "simple".

Data page checksums are disabled.

creating directory /data/postgres/10.4/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    /data/postgres/10.4/bin/pg_ctl -D /data/postgres/10.4/data -l logfile start

[postgres@dev-middleware postgresql-10.4]$

9 Start the server

[postgres@dev-middleware postgresql-10.4]$ cd
[postgres@dev-middleware ~]$ ps -ef|grep postgres
root     19521 10727  0 11:25 pts/0    00:00:00 su - postgres
postgres 19522 19521  0 11:25 pts/0    00:00:00 -bash
postgres 19882 19522  0 17:39 pts/0    00:00:00 ps -ef
postgres 19883 19522  0 17:39 pts/0    00:00:00 grep --color=auto postgres
[postgres@dev-middleware ~]$  /data/postgres/10.4/bin/pg_ctl -D /data/postgres/10.4/data -l /home/postgres/pg.log start
waiting for server to start.... done
server started
[postgres@dev-middleware ~]$ ps -ef|grep postgres
root     19521 10727  0 11:25 pts/0    00:00:00 su - postgres
postgres 19522 19521  0 11:25 pts/0    00:00:00 -bash
postgres 21158     1  0 17:39 pts/0    00:00:00 /data/postgres/10.4/bin/postgres -D /data/postgres/10.4/data
postgres 21160 21158  0 17:39 ?        00:00:00 postgres: checkpointer process   
postgres 21161 21158  0 17:39 ?        00:00:00 postgres: writer process   
postgres 21162 21158  0 17:39 ?        00:00:00 postgres: wal writer process   
postgres 21163 21158  0 17:39 ?        00:00:00 postgres: autovacuum launcher process   
postgres 21164 21158  0 17:39 ?        00:00:00 postgres: stats collector process   
postgres 21165 21158  0 17:39 ?        00:00:00 postgres: bgworker: logical replication launcher   
postgres 21168 19522  0 17:39 pts/0    00:00:00 ps -ef
postgres 21169 19522  0 17:39 pts/0    00:00:00 grep --color=auto postgres
[postgres@dev-middleware ~]$

10 Set environment variables

In the postgres user's ~/.bash_profile, append /data/postgres/10.4/bin to PATH and add a PGDATA=/data/postgres/10.4/data variable:

PATH=$PATH:$HOME/.local/bin:$HOME/bin:/data/postgres/10.4/bin

export PATH

PGDATA=/data/postgres/10.4/data
export PGDATA

Make the variables take effect:

[postgres@dev-middleware ~]$ source ~/.bash_profile 
[postgres@dev-middleware ~]$

11 Adjust database parameters

a Edit pg_hba.conf

Add the following line to /data/postgres/10.4/data/pg_hba.conf:

# IPv4 local connections:
host    all             all             0.0.0.0/0               md5

It means this database server allows IPv4 client machines at any IP address to access any database on the server as any database user, provided the database user's password is supplied; authentication uses md5.

In one sentence: IPv4 clients are unrestricted; any user may access all databases, with a password.
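For reference, the md5 method compares against a salted hash derived from md5(password || username). The stored form can be reproduced in Python; this is a sketch of the stored-hash format only, not of the full challenge-response exchange, and the helper name `pg_md5_stored_form` is my own:

```python
import hashlib

def pg_md5_stored_form(password: str, username: str) -> str:
    """The value PostgreSQL stores for an md5-authenticated role:
    the literal prefix 'md5' plus md5(password concatenated with username)."""
    digest = hashlib.md5((password + username).encode()).hexdigest()
    return "md5" + digest

stored = pg_md5_stored_form("secret", "postgres")
print(stored)
assert stored.startswith("md5") and len(stored) == 35  # 'md5' + 32 hex chars
```

Because the username is mixed into the hash, two roles with the same password still store different values.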

b Edit postgresql.conf

In /data/postgres/10.4/data/postgresql.conf, change #listen_addresses = 'localhost' to listen_addresses = '*' (the leading # must be removed).

Then restart the server for the change to take effect; listen_addresses cannot be changed by a mere reload.

The reason: without this change, clients can only reach the database via the loopback address 127.0.0.1, not via the server's public IP address.

Before the change:

[postgres@dev-middleware ~]$ psql
psql (10.4)
Type "help" for help.

postgres=# show listen_addresses ;
 listen_addresses 
------------------
 localhost
(1 row)

postgres=# \q
[postgres@dev-middleware ~]$ psql -h 127.0.0.1 -p 5432 -d postgres
psql (10.4)
Type "help" for help.

postgres=# show listen_addresses ;
 listen_addresses 
------------------
 localhost
(1 row)

postgres=# \q
[postgres@dev-middleware ~]$ psql -h 172.16.11.44 -p 5432 -d postgres
psql: could not connect to server: 拒绝连接
        Is the server running on host "172.16.11.44" and accepting
        TCP/IP connections on port 5432?
[postgres@dev-middleware ~]$

After changing to listen_addresses = '*' and restarting the database:

[postgres@dev-middleware ~]$ vi /data/postgres/10.4/data/postgresql.conf 
...
...

listen_addresses = '*'         # what IP address(es) to listen on;
                                        # comma-separated list of addresses;
                                        # defaults to 'localhost'; use '*' for all
                                        # (change requires restart)
#port = 5432                            # (change requires restart)
max_connections = 100                   # (change requires restart)
#superuser_reserved_connections = 3     # (change requires restart)
#unix_socket_directories = '/tmp'       # comma-separated list of directories
                                        # (change requires restart)
#unix_socket_group = ''                 # (change requires restart)
#unix_socket_permissions = 0777         # begin with 0 to use octal notation
                                        # (change requires restart)
"/data/postgres/10.4/data/postgresql.conf" 658L, 22755C written
[postgres@dev-middleware ~]$ pg_ctl restart -m fast
waiting for server to shut down....2018-08-06 18:31:40.294 CST [12290] LOG:  received fast shutdown request
2018-08-06 18:31:40.306 CST [12290] LOG:  aborting any active transactions
2018-08-06 18:31:40.307 CST [12290] LOG:  worker process: logical replication launcher (PID 12297) exited with exit code 1
2018-08-06 18:31:40.314 CST [12292] LOG:  shutting down
2018-08-06 18:31:40.347 CST [12290] LOG:  database system is shut down
 done
server stopped
waiting for server to start....2018-08-06 18:31:40.427 CST [14493] LOG:  listening on IPv4 address "0.0.0.0", port 5432
2018-08-06 18:31:40.427 CST [14493] LOG:  listening on IPv6 address "::", port 5432
2018-08-06 18:31:40.433 CST [14493] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5432"
2018-08-06 18:31:40.487 CST [14494] LOG:  database system was shut down at 2018-08-06 18:31:40 CST
2018-08-06 18:31:40.498 CST [14493] LOG:  database system is ready to accept connections
 done
server started
[postgres@dev-middleware ~]$

The database can now be accessed via the server's real IP address:

[postgres@dev-middleware ~]$ psql 
psql (10.4)
Type "help" for help.

postgres=# show listen_addresses ;
 listen_addresses 
------------------
 *
(1 row)

postgres=# \q
[postgres@dev-middleware ~]$ psql -h 127.0.0.1 -p 5432 -d postgres
psql (10.4)
Type "help" for help.

postgres=# show listen_addresses ;
 listen_addresses 
------------------
 *
(1 row)

postgres=# \q
[postgres@dev-middleware ~]$ psql -h 172.16.11.44 -p 5432 -d postgres -U postgres
Password for user postgres: 
psql (10.4)
Type "help" for help.

postgres=# show listen_addresses ;
 listen_addresses 
------------------
 *
(1 row)

postgres=# \q
[postgres@dev-middleware ~]$

12 Uninstall the PostgreSQL software

The postgres user can cd into the previously unpacked source directory and run make uninstall to remove the installed software, as follows:

[postgres@dev-middleware ~]$ pwd
/home/postgres
[postgres@dev-middleware ~]$ ll
总用量 26036
-rw------- 1 postgres postgres      939 8月   6 18:29 pg.log
drwxrwxr-x 6 postgres postgres     4096 8月   6 11:29 postgresql-10.4
-rw-rw-r-- 1 postgres postgres 26652442 5月   8 05:06 postgresql-10.4.tar.gz
[postgres@dev-middleware ~]$ make uninstall
make: *** 没有规则可以创建目标“uninstall”。 停止。
[postgres@dev-middleware ~]$ cd postgresql-10.4
[postgres@dev-middleware postgresql-10.4]$ make uninstall 
...
...
make[1]: 进入目录“/home/postgres/postgresql-10.4/config”
rm -f '/data/postgres/10.4/lib/pgxs/config/install-sh'
rm -f '/data/postgres/10.4/lib/pgxs/config/missing'
make[1]: 离开目录“/home/postgres/postgresql-10.4/config”
[postgres@dev-middleware postgresql-10.4]$

13 Rebuild and reinstall the PostgreSQL software

One option is to repeat step 4: re-unpack the source and run configure, make world, make install-world on the freshly unpacked tree.

Another is to restore the tree left by the configure of step 5 to its pre-configure state, then run configure, make world, make install-world again. The command for that is make distclean.

The second approach is recorded below:

[postgres@dev-middleware postgresql-10.4]$ pwd
/home/postgres/postgresql-10.4
[postgres@dev-middleware postgresql-10.4]$ ll
总用量 1088
-rw-r--r--  1 postgres postgres    457 5月   8 04:51 aclocal.m4
drwxrwxr-x  2 postgres postgres   4096 5月   8 05:03 config
-rw-rw-r--  1 postgres postgres 373348 8月   6 11:29 config.log
-rwxrwxr-x  1 postgres postgres  39415 8月   6 11:29 config.status
-rwxr-xr-x  1 postgres postgres 498962 5月   8 04:51 configure
-rw-r--r--  1 postgres postgres  76909 5月   8 04:51 configure.in
drwxrwxr-x 55 postgres postgres   4096 5月   8 05:03 contrib
-rw-r--r--  1 postgres postgres   1192 5月   8 04:51 COPYRIGHT
drwxrwxr-x  3 postgres postgres   4096 5月   8 05:03 doc
-rw-rw-r--  1 postgres postgres   3638 8月   6 11:29 GNUmakefile
-rw-r--r--  1 postgres postgres   3638 5月   8 04:51 GNUmakefile.in
-rw-r--r--  1 postgres postgres    284 5月   8 04:51 HISTORY
-rw-r--r--  1 postgres postgres  71584 5月   8 05:06 INSTALL
-rw-r--r--  1 postgres postgres   1682 5月   8 04:51 Makefile
-rw-r--r--  1 postgres postgres   1212 5月   8 04:51 README
drwxrwxr-x 16 postgres postgres   4096 8月   6 11:29 src
[postgres@dev-middleware postgresql-10.4]$ make distclean
...
...
make[1]: 离开目录“/home/postgres/postgresql-10.4/src”
rm -rf tmp_install/
rm -f config.cache config.log config.status GNUmakefile
[postgres@dev-middleware postgresql-10.4]$ ll
总用量 676
-rw-r--r--  1 postgres postgres    457 5月   8 04:51 aclocal.m4
drwxrwxr-x  2 postgres postgres   4096 5月   8 05:03 config
-rwxr-xr-x  1 postgres postgres 498962 5月   8 04:51 configure
-rw-r--r--  1 postgres postgres  76909 5月   8 04:51 configure.in
drwxrwxr-x 55 postgres postgres   4096 5月   8 05:03 contrib
-rw-r--r--  1 postgres postgres   1192 5月   8 04:51 COPYRIGHT
drwxrwxr-x  3 postgres postgres   4096 5月   8 05:03 doc
-rw-r--r--  1 postgres postgres   3638 5月   8 04:51 GNUmakefile.in
-rw-r--r--  1 postgres postgres    284 5月   8 04:51 HISTORY
-rw-r--r--  1 postgres postgres  71584 5月   8 05:06 INSTALL
-rw-r--r--  1 postgres postgres   1682 5月   8 04:51 Makefile
-rw-r--r--  1 postgres postgres   1212 5月   8 04:51 README
drwxrwxr-x 16 postgres postgres   4096 8月   6 19:10 src
[postgres@dev-middleware postgresql-10.4]$

III Summary

This is a brief record of installing and configuring PostgreSQL on Linux by compiling from source. For more detail, refer to the official documentation.

An unconventional PostgreSQL SQL tuning case

I The problem

In the morning, the project team reported that a module of the business system had been failing to insert some data since about 3:30 pm the previous day, and asked us to check the database and server for anything unusual.

Logging in to the database console, an initial check showed the following:

Starting at 10:29 the previous morning, the system had been executing inserts like the one below over and over, each taking about 1 second. Worse, the statement was still running non-stop this morning.

II Analysis

Pick one of the statements to analyze:

INSERT
INTO t_ai_prd_history_effective_record
  (
    ID,
    create_time,
    cust_category,
    cust_id,
    cust_name,
    company_id,
    company_name,
    template_no,
    template_name,
    template_field_identifier,
    template_field_name,
    submit_value
  )
SELECT random_string (32),
  now(),
  $1,
  $2,
  $3,
  $4,
  $5,
  $6,
  $7,
  $8,
  $9,
  $10
WHERE NOT EXISTS
  (SELECT 1
  FROM t_ai_prd_history_effective_record r
  WHERE COALESCE (r.cust_category, '')           = COALESCE ($11, '')
  AND COALESCE (r.cust_id, '')                   = COALESCE ($12, '')
  AND COALESCE (r.company_id, '')                = COALESCE ($13, '')
  AND COALESCE (r.template_no, '')               = COALESCE ($14, '')
  AND COALESCE (r.template_field_identifier, '') = COALESCE ($15,'')
  AND COALESCE (r.submit_value, '')              = COALESCE ($16, '')
  )

With these parameters:

params: $1                                   = 'MY001',
  $2                                             = 'b8168c7513014b0c9769f3d61574833d',
  $3                                             = 'WP MANAGEMENT SERVICES',
  $4                                             = '1BABS7HSS5UH01FE140A000085535171',
  $5                                             = 'KLINIK PERGIGIAN M DENTAL',
  $6                                             = 'MYBL0303',
  $7                                             = 'Expenses',
  $8                                             = 'InvoiceDate',
  $9                                             = 'InvoiceDate(发票日期)',
  $10                                            = '20170614',
  $11                                            = 'MY001',
  $12                                            = 'b8168c7513014b0c9769f3d61574833d',
  $13                                            = '1BABS7HSS5UH01FE140A000085535171',
  $14                                            = 'MYBL0303',
  $15                                            = 'InvoiceDate',
  $16                                            = '20170614'

Clearly, the statement conditionally inserts rows into t_ai_prd_history_effective_record; the condition is that the table contains no existing row satisfying

COALESCE (r.cust_category, '')           = COALESCE ('DZ001', '')
ai->   AND COALESCE (r.cust_id, '')                   = COALESCE ('b151ad4f86ab4ec5aee8c4cc377e9eb7', '')
ai->   AND COALESCE (r.company_id, '')                = COALESCE ('04cb580238dc49af8bfb46e00e959a1a', '')
ai->   AND COALESCE (r.template_no, '')               = COALESCE ('KJDZ0101', '')
ai->   AND COALESCE (r.template_field_identifier, '') = COALESCE ('ItemQuantity','')
ai->   AND COALESCE (r.submit_value, '')              = COALESCE ('10100$__$6080$__$$__$$__$', '');

That condition is exactly what the WHERE NOT EXISTS in the SQL expresses.

So the problem is clear: it must lie in the NOT EXISTS subquery.

Next, look at the subquery's execution plan on its own:

 ai=> explain analyze SELECT 1
ai->   FROM t_ai_prd_history_effective_record r
ai->   WHERE COALESCE (r.cust_category, '')           = COALESCE ('DZ001', '')
ai->   AND COALESCE (r.cust_id, '')                   = COALESCE ('b151ad4f86ab4ec5aee8c4cc377e9eb7', '')
ai->   AND COALESCE (r.company_id, '')                = COALESCE ('04cb580238dc49af8bfb46e00e959a1a', '')
ai->   AND COALESCE (r.template_no, '')               = COALESCE ('KJDZ0101', '')
ai->   AND COALESCE (r.template_field_identifier, '') = COALESCE ('ItemQuantity','')
ai->   AND COALESCE (r.submit_value, '')              = COALESCE ('10100$__$6080$__$$__$$__$', '');
                                                                                                                                                                                                                                                      QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Seq Scan on t_ai_prd_history_effective_record r  (cost=0.00..203817.53 rows=1 width=0) (actual time=1266.116..1267.093 rows=1 loops=1)
   Filter: (((COALESCE(cust_category, ''::character varying))::text = 'DZ001'::text) AND ((COALESCE(cust_id, ''::character varying))::text = 'b151ad4f86ab4ec5aee8c4cc377e9eb7'::text) AND ((COALESCE(company_id, ''::character varying))::text = '04cb580238dc49af8bfb46e00e959a1a'::text) AND ((COALESCE(template_no, ''::character varying))::text = 'KJDZ0101'::text) AND ((COALESCE(template_field_identifier, ''::character varying))::text = 'ItemQuantity'::text) AND (COALESCE(submit_value, ''::text) = '10100$__$6080$__$$__$$__$'::text))
   Rows Removed by Filter: 3193300
 Planning time: 0.129 ms
 Execution time: 1267.133 ms
(5 rows)
ai=>

Row count of t_ai_prd_history_effective_record:

ai=> select count(*) from t_ai_prd_history_effective_record;
  count 
---------
 3193138
(1 row)
ai=>

As expected: every evaluation of this deduplicating existence check runs a full table scan over t_ai_prd_history_effective_record and its 3-million-plus rows, so execution is predictably slow, over 1 second each time, and inserts will only get slower as the table keeps growing.

Here, you could say, every row inserted makes the next insert slower!
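This slowdown pattern is the classic cost of a linear scan per insert. A Python sketch of the two growth behaviours (purely illustrative, counting comparisons instead of timing; the helper names are my own):

```python
# Deduplicating inserts two ways: a linear scan per insert (what the
# un-indexed NOT EXISTS effectively does) vs. a hash lookup (roughly
# what a unique index buys you). Count element comparisons.

def insert_with_scan(rows, key, counter):
    for existing in rows:          # full scan: cost grows with table size
        counter[0] += 1
        if existing == key:
            return False           # duplicate, skip
    rows.append(key)
    return True

def insert_with_index(rows, seen, key):
    if key in seen:                # O(1) average hash lookup
        return False
    seen.add(key)
    rows.append(key)
    return True

keys = list(range(2000)) + list(range(2000))  # second half are duplicates

scan_rows, counter = [], [0]
for k in keys:
    insert_with_scan(scan_rows, k, counter)

idx_rows, seen = [], set()
for k in keys:
    insert_with_index(idx_rows, seen, k)

assert scan_rows == idx_rows == list(range(2000))
# The scan version does ~4 million comparisons for 4,000 inserts:
# total work grows quadratically with the table size.
print(counter[0])
assert counter[0] > 1_000_000
```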

III Solution

Having found the root cause, it is time to optimize. The ultimate goal: can we do without that NOT EXISTS subquery?

After discussing with Lin Chaorong of the BPO department, we settled on the following options:

  1. Could we build a unique index on those 6 columns of t_ai_prd_history_effective_record? Then the insert would no longer need the deduplicating where clause: the unique constraint would do the checking, and inserting a duplicate would violate it and raise an error. That seems to solve the problem, but from the business point of view there really is a lot of duplicate data, and operators would keep seeing errors raised in the system, which is not very friendly.
  2. In that case the problem becomes even simpler. Bluntly delete the where not exists deduplication from the SQL, insert everything first whether duplicated or not, then run a batch SQL at night to delete the duplicates in one pass. Problem solved.

Here is an example of deleting duplicate rows from a PostgreSQL table:

ai=> create table t(id int primary key,name varchar(10),addr varchar(10));
CREATE TABLE
ai=> insert into t values(1,'onlyou.com','xiamen');
INSERT 0 1
ai=> insert into t values(2,'apple.com','usa');   
INSERT 0 1
ai=> insert into t values(3,'apple.com','usa');
INSERT 0 1
ai=> insert into t values(4,'google','usa');
INSERT 0 1
ai=> insert into t values(5,'google','usa');
INSERT 0 1
ai=> select * from t;
 id |    name    |  addr 
----+------------+--------
  1 | onlyou.com | xiamen
  2 | apple.com  | usa
  3 | apple.com  | usa
  4 | google     | usa
  5 | google     | usa
(5 rows)
ai=> select t1.id from t t1 where t1.ctid<>(select max(t2.ctid) from t t2 where t1.name=t2.name and t1.addr=t2.addr);
 id
----
  2
  4
(2 rows)
ai=> delete from t where id in(select t1.id from t t1 where t1.ctid<>(select max(t2.ctid) from t t2 where t1.name=t2.name and t1.addr=t2.addr));
DELETE 2
ai=> select * from t;                                                                                                                          
 id |    name    |  addr 
----+------------+--------
  1 | onlyou.com | xiamen
  3 | apple.com  | usa
  5 | google     | usa
(3 rows)
ai=>

四 小结

发现并定位慢SQL的症结所在,再进行有针对性的优化改造,解决问题往往并不难,关键在于深入思考、多加尝试。

以上,给出一个优化的思路和方向。
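套用上述示例的思路,针对t_ai_prd_history_effective_record表的夜间批量去重SQL大致如下(字段名取自上文执行计划的Filter条件,仅作思路示意,执行前需在业务低峰期充分验证):

```sql
-- 按6个业务字段去重,只保留物理位置(ctid)最大的一条
delete from t_ai_prd_history_effective_record a
where a.ctid <> (
    select max(b.ctid)
    from t_ai_prd_history_effective_record b
    where coalesce(b.cust_category, '') = coalesce(a.cust_category, '')
      and coalesce(b.cust_id, '') = coalesce(a.cust_id, '')
      and coalesce(b.company_id, '') = coalesce(a.company_id, '')
      and coalesce(b.template_no, '') = coalesce(a.template_no, '')
      and coalesce(b.template_field_identifier, '') = coalesce(a.template_field_identifier, '')
      and coalesce(b.submit_value, '') = coalesce(a.submit_value, '')
);
```

该关联子查询本身也要多次扫描这张300多万行的大表,如果配合方案1中那6个字段上的索引,或改写为group by后批量删除,效率会更好。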

如何在PostgreSQL中使用partial index来优化SQL?

一 案例背景

从生产控制台上看到一条下述SQL:

格式化之后SQL语句为:

select '01' status, count(1) from t_ai_prd_item where status = '01' and deleted = false	
union all 
select '02' status, count(1) from t_ai_prd_item where status = '02' and deleted = false

一个union all的联合查询,每次执行耗时1秒。有没有优化余地呢?

二 优化分析

下面通过将原表t_ai_prd_item复制(copy)一份为t1,来进行分析优化。

1 看SQL执行计划及数据量分布:

ai=> explain analyze select '01' status, count(1) from t1 where status = '01' and deleted = false 
ai-> union all 
ai-> select '02' status, count(1) from t1 where status = '02' and deleted = false;
                                                         QUERY PLAN                                                          
-----------------------------------------------------------------------------------------------------------------------------
 Append  (cost=160483.13..320974.20 rows=2 width=40) (actual time=5211.374..6250.940 rows=2 loops=1)
   ->  Aggregate  (cost=160483.13..160483.14 rows=1 width=0) (actual time=5211.374..5211.374 rows=1 loops=1)
         ->  Seq Scan on t1  (cost=0.00..160483.12 rows=1 width=0) (actual time=5211.369..5211.369 rows=0 loops=1)
               Filter: ((NOT deleted) AND ((status)::text = '01'::text))
               Rows Removed by Filter: 1395189
   ->  Aggregate  (cost=160491.03..160491.04 rows=1 width=0) (actual time=1039.563..1039.563 rows=1 loops=1)
         ->  Seq Scan on t1 t1_1  (cost=0.00..160483.12 rows=3163 width=0) (actual time=901.577..1039.307 rows=2835 loops=1)
               Filter: ((NOT deleted) AND ((status)::text = '02'::text))
               Rows Removed by Filter: 1392354
 Planning time: 0.417 ms
 Execution time: 6251.024 ms
(11 rows)

ai=> select count(*) from t_ai_prd_item;                       
  count  
---------
 1395189
(1 row)

ai=> select count(*),status from t_ai_prd_item group by status;
  count  | status 
---------+--------
     364 | 04
      25 | 05
    2835 | 02
 1391965 | 06
(4 rows)

ai=>

分析发现,该表共有1395189条记录,其中status='01'的记录为0条,status='02'的记录为2835条。

2 那么在status字段上建立一个btree index,效果会怎么样呢?

ai=> create index idx_status on t1(status);
CREATE INDEX
ai=> explain analyze select '01' status, count(1) from t1 where status = '01' and deleted = false 
ai-> union all 
ai-> select '02' status, count(1) from t1 where status = '02' and deleted = false;
                                                               QUERY PLAN                                                               
----------------------------------------------------------------------------------------------------------------------------------------
 Append  (cost=8.45..730.24 rows=2 width=40) (actual time=0.037..3.999 rows=2 loops=1)
   ->  Aggregate  (cost=8.45..8.46 rows=1 width=0) (actual time=0.036..0.036 rows=1 loops=1)
         ->  Index Scan using idx_status on t1  (cost=0.43..8.45 rows=1 width=0) (actual time=0.033..0.033 rows=0 loops=1)
               Index Cond: ((status)::text = '01'::text)
               Filter: (NOT deleted)
   ->  Aggregate  (cost=721.76..721.77 rows=1 width=0) (actual time=3.962..3.962 rows=1 loops=1)
         ->  Index Scan using idx_status on t1 t1_1  (cost=0.43..714.10 rows=3063 width=0) (actual time=0.029..3.673 rows=2835 loops=1)
               Index Cond: ((status)::text = '02'::text)
               Filter: (NOT deleted)
 Planning time: 0.291 ms
 Execution time: 4.067 ms
(11 rows)

ai=>

效果不错。但结合实际业务场景,此处只针对status='01'和status='02'两种情况做统计,那么有没有更好的方法呢?

3 创建1个partial index来测试

ai=> create index idx_partial on t1(status) where (status='01' or status='02');
CREATE INDEX
ai=> explain analyze select '01' status, count(1) from t1 where status = '01' and deleted = false 
ai-> union all 
ai-> select '02' status, count(1) from t1 where status = '02' and deleted = false;
                                                               QUERY PLAN                                                               
----------------------------------------------------------------------------------------------------------------------------------------
 Append  (cost=8.30..732.19 rows=2 width=40) (actual time=0.019..3.916 rows=2 loops=1)
   ->  Aggregate  (cost=8.30..8.31 rows=1 width=0) (actual time=0.018..0.018 rows=1 loops=1)
         ->  Index Scan using idx_partial on t1  (cost=0.28..8.30 rows=1 width=0) (actual time=0.015..0.015 rows=0 loops=1)
               Index Cond: ((status)::text = '01'::text)
               Filter: (NOT deleted)
   ->  Aggregate  (cost=723.85..723.86 rows=1 width=0) (actual time=3.897..3.897 rows=1 loops=1)
         ->  Index Scan using idx_partial on t1 t1_1  (cost=0.28..716.20 rows=3063 width=0) (actual time=0.030..3.599 rows=2835 loops=1)
               Index Cond: ((status)::text = '02'::text)
               Filter: (NOT deleted)
 Planning time: 0.234 ms
 Execution time: 3.992 ms
(11 rows)

ai=>

对比,发现SQL的执行效率几乎没有差别。但是:

ai=> \di+ idx_status 
                         List of relations
 Schema |    Name    | Type  | Owner | Table | Size  | Description 
--------+------------+-------+-------+-------+-------+-------------
 ai     | idx_status | index | ai    | t1    | 30 MB | 
(1 row)

ai=> \di+ idx_partial 
                         List of relations
 Schema |    Name     | Type  | Owner | Table | Size  | Description 
--------+-------------+-------+-------+-------+-------+-------------
 ai     | idx_partial | index | ai    | t1    | 80 kB | 
(1 row)

ai=>

4 小结:

在确保SQL执行效率的同时,这个partial index所占的存储空间是b-tree index的1/384,大大降低了存储空间的开销。

三 关于partial index

1 什么是partial index?

部分索引(partial index)是基于表中部分数据建立的索引,被索引的数据是全表数据的一个子集,该子集由一个明确的条件表达式(谓词)来定义。

2 partial index适用场景?

适用于表中数据分布“不均匀”的场景:某些取值的数据占绝大多数,另一些取值只占极少量,且查询条件恰好针对那些小数据量的取值。本例正好满足这种情况:总数据量为140万左右,状态为01和02的数据只占极小比例,而查询正是针对状态01和02。

通过partial index可以把大量普通数据或业务上“不感兴趣”的数据排除在索引之外。例如一个在线订单系统,可以只对来自常用客户端IP范围之外的IP创建部分索引,或者只对已下单但尚未支付的订单创建部分索引。这样,当查询那些来自非常用IP的订单,或已下单未支付的订单信息时,即可保证查询效率。
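上面在线订单的例子可以用如下SQL示意(表orders及其字段client_ip、order_no、paid,以及常用IP段192.168.100.0/24,均为假设):

```sql
-- 只对常用客户端IP段之外的订单建部分索引
create index idx_orders_unusual_ip on orders (client_ip)
  where not (client_ip > inet '192.168.100.0' and
             client_ip < inet '192.168.100.255');

-- 只对已下单但尚未支付的订单建部分索引
create index idx_orders_unpaid on orders (order_no)
  where paid is false;
```

需要注意的是,只有当查询的WHERE条件能被优化器推断为蕴含索引谓词时,部分索引才会被使用。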

3 partial index的优势?

由于只对部分数据建立索引,同传统的全表b-tree索引相比,部分索引所占的磁盘空间将大大缩小,重建和维护这些索引的速度也非常快。

4 参考

partial index官方文档地址:
https://www.postgresql.org/docs/9.3/static/indexes-partial.html

PostgreSQL中hash索引的小结

一 关于PostgreSQL中使用hash index的小结如下:

  1. 只适合于等值查询;
  2. 在PostgreSQL 10之前不写WAL日志,数据库一旦崩溃,需要重建该类型索引;
  3. 索引上的改变不会经由streaming或file-based复制同步到备库;
  4. 不支持在多列上创建联合hash index;
  5. 通常,不建议使用hash index。
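hash index的基本用法如下(表t_demo仅为示意;在PG 10之前的版本上创建时,服务端还会提示hash索引不写WAL的警告):

```sql
create table t_demo (id int, code varchar(32));
create index idx_demo_hash on t_demo using hash (code);

-- 只有等值查询才可能用到hash index
select * from t_demo where code = 'abc';
-- 范围查询则无法使用hash index
select * from t_demo where code > 'abc';
```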

二 PostgreSQL中使用hash index导致的一则错误案例

 
### Error updating database.  Cause: org.postgresql.util.PSQLException: ERROR: could not read block 0 in file "base/82560/1801225": read only 0 of 8192 bytes
### The error may involve com.onlyou.platform.form.entity.FormDetailEntityMapper.updateByPrimaryKeySelective-Inline
### The error occurred while setting parameters
### SQL: UPDATE t_onlyou_form_detail SET form_id=?,label=?,base_type=?,is_show=?,sort=?,create_ts=?,create_user_id=?,update_ts=?,update_user_id=? WHERE (id=?)
### Cause: org.postgresql.util.PSQLException: ERROR: could not read block 0 in file "base/82560/1801225": read only 0 of 8192 bytes
; uncategorized SQLException for SQL []; SQL state [XX001]; error code [0]; ERROR: could not read block 0 in file "base/82560/1801225": read only 0 of 8192 bytes; nested exception is org.postgresql.util.PSQLException: ERROR: could not read block 0 in file "base/82560/1801225": read only 0 of 8192 bytes

原因是该数据库配置了主备结构,且期间执行过主备切换。由于hash index上的改变不会被复制到备库,切换后在新主库(原备库)上访问该索引时就出现了上述错误。经排查,该表上有1个hash index,解决办法就是重建该hash index,或将其改为其它类型的index。
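修复操作示意(索引名idx_form_hash为假设,form_id是上文报错SQL中出现的字段):

```sql
-- 方法一:在新主库上重建该hash index
reindex index idx_form_hash;

-- 方法二:删掉hash index,改建为普通的btree索引,一劳永逸
drop index idx_form_hash;
create index idx_form_btree on t_onlyou_form_detail (form_id);
```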

三 小结

这是之前一个项目在阿里云RDS生产环境上遇到的错误。

ERROR: could not read block 0 in file "base/82560/1801225": read only 0 of 8192 bytes;这样的报错很容易把我们引向错误的方向,误以为是数据库存储出了问题,或者数据块损坏,而实际根因是hash index在主备切换后失效。

PostgreSQL统计信息不准导致错误的执行计划

接项目组同事反馈:同一条SQL,在beta环境执行足够快,到了生产环境就慢很多。对于这类问题,我通常需要额外向他们解释一番,比如数据量分布、统计信息、不同的执行计划等。总之,这是一种完全正常的现象。

一 首先,直接在生产环境上查看该SQL的执行计划:

cattle=> explain analyze SELECT
cattle-> ci.*, fa.farm_name AS farmName,
cattle-> cs.cowshed_no AS cowshedNo
cattle-> FROM
cattle-> t_cattle_ca_cattleinfo AS ci
cattle-> LEFT JOIN t_cattle_crm_cust_info AS cci ON ci.owner_id = cci. ID
cattle-> LEFT JOIN t_cattle_bd_farm AS fa ON ci.farm_id = fa. ID
cattle-> LEFT JOIN t_cattle_bd_cowshed AS cs ON ci.dorm_id = cs. ID
cattle-> LEFT JOIN t_cattle_sys_company t1 ON ci.owner_id = t1. ID
cattle-> WHERE
cattle-> ci. FLAG = 1
cattle-> AND ci.corp_id = 'OAO_SYSTEMINTIAL_000000000000000'
cattle-> ORDER BY
cattle-> ci.create_time DESC
cattle-> LIMIT 10 OFFSET (1- 1);
                                                                           QUERY PLAN                                                                            
-----------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=305987.09..305987.12 rows=10 width=766) (actual time=1527.412..1527.415 rows=10 loops=1)
   ->  Sort  (cost=305987.09..306338.21 rows=140447 width=766) (actual time=1527.411..1527.414 rows=10 loops=1)
         Sort Key: ci.create_time
         Sort Method: top-N heapsort  Memory: 35kB
         ->  Hash Left Join  (cost=82.84..302952.08 rows=140447 width=766) (actual time=1313.868..1504.013 rows=24673 loops=1)
               Hash Cond: ((ci.dorm_id)::text = (cs.id)::text)
               ->  Hash Left Join  (cost=3.81..301095.66 rows=140447 width=759) (actual time=1313.053..1492.219 rows=24673 loops=1)
                     Hash Cond: ((ci.farm_id)::text = (fa.id)::text)
                     ->  Seq Scan on t_cattle_ca_cattleinfo ci  (cost=0.00..299160.71 rows=140447 width=738) (actual time=1312.957..1474.099 rows=24673 loops=1)
                           Filter: (((flag)::text = '1'::text) AND ((corp_id)::text = 'OAO_SYSTEMINTIAL_000000000000000'::text))
                     ->  Hash  (cost=3.36..3.36 rows=36 width=54) (actual time=0.043..0.043 rows=36 loops=1)
                           Buckets: 1024  Batches: 1  Memory Usage: 4kB
                           ->  Seq Scan on t_cattle_bd_farm fa  (cost=0.00..3.36 rows=36 width=54) (actual time=0.018..0.027 rows=36 loops=1)
               ->  Hash  (cost=63.46..63.46 rows=1246 width=40) (actual time=0.796..0.796 rows=1246 loops=1)
                     Buckets: 1024  Batches: 1  Memory Usage: 89kB
                     ->  Seq Scan on t_cattle_bd_cowshed cs  (cost=0.00..63.46 rows=1246 width=40) (actual time=0.003..0.438 rows=1246 loops=1)
 Total runtime: 1527.576 ms
(17 rows)

cattle=>

从上可以看到,优化器在此处对t_cattle_ca_cattleinfo表选择了全表扫描,耗时约1474毫秒;而整个SQL的总耗时为1527.576毫秒,说明绝大部分时间都花在对t_cattle_ca_cattleinfo表的访问上。

二 接下来,分析t_cattle_ca_cattleinfo表的数据分布信息:

cattle=> select count(*) from t_cattle_ca_cattleinfo;                                                   
 count 
-------
 24673
(1 row)

cattle=> select count(*) from t_cattle_ca_cattleinfo where corp_id = 'OAO_SYSTEMINTIAL_000000000000000';
 count 
-------
 24673
(1 row)

cattle=> select count(*) from t_cattle_ca_cattleinfo where corp_id = 'OAO_SYSTEMINTIAL_000000000000000' and FLAG = 1;
 count 
-------
 24673
(1 row)

cattle=>

结合上述,t_cattle_ca_cattleinfo表总计24673条记录,且满足corp_id = 'OAO_SYSTEMINTIAL_000000000000000' and FLAG = 1条件的记录正好是全表数据。

但是,结合上述执行计划来看,优化器认为该表有rows=140447,可是手工查询到该表当前只有24673条记录。说明,统计信息出了问题。
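要确认统计信息是否过期,可以对比系统目录中记录的估算行数与实际行数,并查看统计信息的收集时间(示意):

```sql
-- 优化器使用的估算行数与页数
select relname, reltuples::bigint, relpages
  from pg_class
 where relname = 't_cattle_ca_cattleinfo';

-- 上一次(自动)vacuum/analyze的时间及当前行数估算
select relname, n_live_tup, n_dead_tup,
       last_analyze, last_autoanalyze
  from pg_stat_user_tables
 where relname = 't_cattle_ca_cattleinfo';
```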

三 然后,手工在线收集表统计信息:

cattle=> vacuum FREEZE ANALYZE verbose t_cattle_ca_cattleinfo;
INFO:  vacuuming "cattle.t_cattle_ca_cattleinfo"
INFO:  scanned index "t_cattle_ca_cattleinfo_pkey" to remove 4763 row versions
DETAIL:  CPU 0.08s/0.12u sec elapsed 0.24 sec.
INFO:  scanned index "cattleCode_index" to remove 4763 row versions
DETAIL:  CPU 0.02s/0.04u sec elapsed 0.07 sec.
INFO:  scanned index "corpId_index" to remove 4763 row versions
DETAIL:  CPU 0.08s/0.13u sec elapsed 0.22 sec.
INFO:  scanned index "farmId_index" to remove 4763 row versions
DETAIL:  CPU 0.07s/0.12u sec elapsed 0.20 sec.
INFO:  scanned index "feedId_index" to remove 4763 row versions
DETAIL:  CPU 0.09s/0.10u sec elapsed 0.21 sec.
INFO:  scanned index "manageCode_index" to remove 4763 row versions
DETAIL:  CPU 0.00s/0.03u sec elapsed 0.04 sec.
INFO:  scanned index "rfid_index" to remove 4763 row versions
DETAIL:  CPU 0.13s/0.12u sec elapsed 0.26 sec.
INFO:  "t_cattle_ca_cattleinfo": removed 4763 row versions in 1052 pages
DETAIL:  CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  index "t_cattle_ca_cattleinfo_pkey" now contains 24673 row versions in 31871 pages
DETAIL:  4763 index row versions were removed.
14490 index pages have been deleted, 14490 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  index "cattleCode_index" now contains 24673 row versions in 17799 pages
DETAIL:  4763 index row versions were removed.
2795 index pages have been deleted, 2795 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  index "corpId_index" now contains 24673 row versions in 37233 pages
DETAIL:  4763 index row versions were removed.
36745 index pages have been deleted, 36742 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  index "farmId_index" now contains 24673 row versions in 37134 pages
DETAIL:  4763 index row versions were removed.
36654 index pages have been deleted, 36652 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  index "feedId_index" now contains 24673 row versions in 37155 pages
DETAIL:  4763 index row versions were removed.
36595 index pages have been deleted, 36595 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  index "manageCode_index" now contains 24673 row versions in 17764 pages
DETAIL:  4762 index row versions were removed.
2744 index pages have been deleted, 2744 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  index "rfid_index" now contains 24673 row versions in 34752 pages
DETAIL:  4763 index row versions were removed.
19034 index pages have been deleted, 19034 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  "t_cattle_ca_cattleinfo": found 753 removable, 24673 nonremovable row versions in 297054 out of 297054 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 4399722 unused item pointers.
0 pages are entirely empty.
CPU 1.17s/1.69u sec elapsed 2.92 sec.
INFO:  vacuuming "pg_toast.pg_toast_1962065"
INFO:  index "pg_toast_1962065_index" now contains 0 row versions in 1 pages
DETAIL:  0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  "pg_toast_1962065": found 0 removable, 0 nonremovable row versions in 0 out of 0 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 0 unused item pointers.
0 pages are entirely empty.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  analyzing "cattle.t_cattle_ca_cattleinfo"
INFO:  "t_cattle_ca_cattleinfo": scanned 30000 of 297054 pages, containing 2630 live rows and 0 dead rows; 2630 rows in sample, 24811 estimated total rows
VACUUM

四 重新查看该SQL执行计划:

cattle=> explain analyze SELECT
cattle-> ci.*, fa.farm_name AS farmName,
cattle-> cs.cowshed_no AS cowshedNo
cattle-> FROM
cattle-> t_cattle_ca_cattleinfo AS ci
cattle-> LEFT JOIN t_cattle_crm_cust_info AS cci ON ci.owner_id = cci. ID
cattle-> LEFT JOIN t_cattle_bd_farm AS fa ON ci.farm_id = fa. ID
cattle-> LEFT JOIN t_cattle_bd_cowshed AS cs ON ci.dorm_id = cs. ID
cattle-> LEFT JOIN t_cattle_sys_company t1 ON ci.owner_id = t1. ID
cattle-> WHERE
cattle-> ci. FLAG = 1
cattle-> AND ci.corp_id = 'OAO_SYSTEMINTIAL_000000000000000'
cattle-> ORDER BY
cattle-> ci.create_time DESC
cattle-> LIMIT 10 OFFSET (1- 1);  
                                                                               QUERY PLAN                                                                               
------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=225802.81..225802.84 rows=10 width=766) (actual time=79.316..79.319 rows=10 loops=1)
   ->  Sort  (cost=225802.81..225864.84 rows=24811 width=766) (actual time=79.314..79.315 rows=10 loops=1)
         Sort Key: ci.create_time
         Sort Method: top-N heapsort  Memory: 35kB
         ->  Hash Left Join  (cost=149207.67..225266.65 rows=24811 width=766) (actual time=9.327..54.520 rows=24673 loops=1)
               Hash Cond: ((ci.dorm_id)::text = (cs.id)::text)
               ->  Hash Left Join  (cost=149128.63..224878.17 rows=24811 width=759) (actual time=8.693..42.217 rows=24673 loops=1)
                     Hash Cond: ((ci.farm_id)::text = (fa.id)::text)
                     ->  Bitmap Heap Scan on t_cattle_ca_cattleinfo ci  (cost=149124.82..224533.21 rows=24811 width=738) (actual time=8.649..22.418 rows=24673 loops=1)
                           Recheck Cond: ((corp_id)::text = 'OAO_SYSTEMINTIAL_000000000000000'::text)
                           Filter: ((flag)::text = '1'::text)
                           ->  Bitmap Index Scan on "corpId_index"  (cost=0.00..149118.62 rows=24811 width=0) (actual time=8.182..8.182 rows=24673 loops=1)
                                 Index Cond: ((corp_id)::text = 'OAO_SYSTEMINTIAL_000000000000000'::text)
                     ->  Hash  (cost=3.36..3.36 rows=36 width=54) (actual time=0.025..0.025 rows=36 loops=1)
                           Buckets: 1024  Batches: 1  Memory Usage: 4kB
                           ->  Seq Scan on t_cattle_bd_farm fa  (cost=0.00..3.36 rows=36 width=54) (actual time=0.005..0.011 rows=36 loops=1)
               ->  Hash  (cost=63.46..63.46 rows=1246 width=40) (actual time=0.617..0.617 rows=1246 loops=1)
                     Buckets: 1024  Batches: 1  Memory Usage: 89kB
                     ->  Seq Scan on t_cattle_bd_cowshed cs  (cost=0.00..63.46 rows=1246 width=40) (actual time=0.001..0.251 rows=1246 loops=1)
 Total runtime: 79.503 ms
(20 rows)

cattle=>

对比可见:收集统计信息之前,该SQL执行耗时约1527毫秒;更新统计信息之后,仅耗时约79毫秒。

五 小结:

这是一则典型的因表统计信息不准,导致优化器选择错误执行计划的例子。
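为减少此类问题复发,可以适当调低该表的autovacuum统计收集阈值,让analyze更频繁地自动触发(参数值仅为示意,需结合实际负载调整):

```sql
alter table t_cattle_ca_cattleinfo
  set (autovacuum_analyze_scale_factor = 0.02,
       autovacuum_analyze_threshold = 500);
```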