Two PostgreSQL SQL Performance Optimization Cases

0 Background and Notes

Database performance affects the performance of the entire business system, and more than 90% of database performance problems are caused by poorly performing SQL.

For a new system about to go live, colleagues on the development team sent over two SQL tuning requests.


I Case 1: Analysis and Approach

1 The original SQL, formatted:

SELECT COUNT(b.code) AS totalCount, b.status AS status	
FROM t_ai_prd_bill b 
WHERE b.status != '900' AND b.is_deleted = FALSE 
GROUP BY b.status

2 Execution plan of the original SQL

ai=> explain analyze SELECT COUNT(b.code) AS totalCount, b.status AS status 
ai-> FROM t_ai_prd_bill b 
ai-> WHERE b.status != '900' AND b.is_deleted = FALSE 
ai-> GROUP BY b.status;
                                                          QUERY PLAN                                                          
------------------------------------------------------------------------------------------------------------------------------
 HashAggregate  (cost=3870.50..3870.51 rows=1 width=17) (actual time=12.660..12.661 rows=5 loops=1)
   Group Key: status
   ->  Seq Scan on t_ai_prd_bill b  (cost=0.00..3839.75 rows=6150 width=17) (actual time=0.013..11.139 rows=6153 loops=1)
         Filter: ((NOT is_deleted) AND (status <> '900'::bpchar))
         Rows Removed by Filter: 32227
 Planning time: 0.165 ms
 Execution time: 12.719 ms
(7 rows)

ai=>

3 Table structure, row count, and data distribution of t_ai_prd_bill

 
ai=> \d t_ai_prd_bill                                      
                         Table "ai.t_ai_prd_bill"
       Column       |              Type              |      Modifiers      
--------------------+--------------------------------+---------------------
 id                 | character varying(32)          | not null
 code               | character varying(12)          | 
 packet_code        | character varying(10)          | 
 status             | character(3)                   | 
 is_deleted         | boolean                        | default false
 classify_type      | character varying(1)           | 
 last_oper_time     | timestamp(6) without time zone | 
 template_no        | character varying(32)          | 
 oss_key            | character varying(255)         | 
 receive_time       | timestamp(6) without time zone | 
 is_question        | boolean                        | default false
 ocr_type           | character(1)                   | 
 cust_id            | character varying(32)          | 
 exception_category | character(1)                   | default '0'::bpchar
 exception_detail   | character varying(400)         | 
 is_suspended       | boolean                        | 
 is_urgent          | boolean                        | 
 raw_deadline       | timestamp(6) without time zone | 
 priority           | smallint                       | 
 deadline           | timestamp(6) without time zone | 
 company_id         | character varying(32)          | 
 company_name       | character varying(255)         | 
 cust_category      | character varying(32)          | 
 source             | character varying(32)          | 
 mode_code          | character varying(32)          | 
 deadline_category  | character varying(3)           | 
 location_id        | character varying(32)          | 
 cust_name          | character varying(128)         | 
 template_category  | character varying(32)          | 
 platform_id        | character varying(32)          | 
 country_name       | character varying(64)          | 
 is_sended          | boolean                        | default false
Indexes:
    "t_ai_prd_bill_pkey" PRIMARY KEY, btree (id)
    "idx_bill_code" btree (code)
    "idx_bill_create_time_status_deleted" btree (receive_time, status, is_deleted)
    "idx_bill_cust_id" btree (cust_id)
    "idx_bill_packet_code" btree (packet_code)
    "idx_bill_status" btree (status)

ai=> select status,count(*) from t_ai_prd_bill group by status;
 status | count 
--------+-------
 400    |  2511
 401    |   183
 500    |  3174
 600    |     1
 701    |   284
 900    | 32227
(6 rows)

ai=>

4 Rewriting an equivalent SQL

Analyze the execution plan in light of the table structure, index definitions, and data distribution shown above.

The status column of t_ai_prd_bill represents the business state of a bill: status=900 means the bill has completed processing, and status=901 means processing failed. As business flows normally, rows with status=900 keep accumulating and come to make up the vast majority of the table, while rows with status=901 should remain very few. In other words, this SQL is meant to count bills in every state other than "completed", i.e. status != 900.

Given that, if the business side agrees to remap the "processing failed" state from status=901 to, say, status=899, the SQL can be rewritten equivalently as:

SELECT COUNT(b.code) AS totalCount, b.status AS status	
FROM t_ai_prd_bill b 
WHERE b.status < '900' AND b.is_deleted = FALSE 
GROUP BY b.status

5 Execution plan of the new SQL

ai=> explain analyze SELECT COUNT(b.code) AS totalCount, b.status AS status 
ai-> FROM t_ai_prd_bill b 
ai-> WHERE b.status < '900' AND b.is_deleted = FALSE 
ai-> GROUP BY b.status;
                                                                QUERY PLAN                                                                
------------------------------------------------------------------------------------------------------------------------------------------
 HashAggregate  (cost=3782.19..3782.20 rows=1 width=17) (actual time=5.423..5.424 rows=5 loops=1)
   Group Key: status
   ->  Bitmap Heap Scan on t_ai_prd_bill b  (cost=247.95..3751.44 rows=6150 width=17) (actual time=0.996..3.811 rows=6153 loops=1)
         Recheck Cond: (status < '900'::bpchar)
         Filter: (NOT is_deleted)
         Heap Blocks: exact=1213
         ->  Bitmap Index Scan on idx_bill_status  (cost=0.00..246.41 rows=6150 width=0) (actual time=0.825..0.825 rows=6153 loops=1)
               Index Cond: (status < '900'::bpchar)
 Planning time: 0.156 ms
 Execution time: 5.480 ms
(10 rows)

ai=>

 

6 Summary

Comparing the execution plans before and after the rewrite, execution time dropped from roughly 12 ms to 5 ms. Moreover, as the business moves forward and t_ai_prd_bill accumulates more and more rows, the benefit of this optimization will keep growing.

Here, the business defines the bill states as '100,200,300,400,401,500,600,402,700,701,800,900,901', each code standing for a distinct intermediate state. The requirement was to count all bills that have not completed successfully, so the developer naturally filtered with status != '900', overlooking that the optimizer cannot use a B-tree index for a not-equals predicate, whereas the range predicate status < '900' can be satisfied by idx_bill_status.
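If remapping the failed state on the business side is not feasible, a partial index is another option worth considering. The sketch below assumes the query keeps its original status != '900' filter; the index name is illustrative:

-- Hypothetical partial index: covers only the non-completed, non-deleted rows,
-- so the planner can answer the original status != '900' query from the index.
CREATE INDEX CONCURRENTLY idx_bill_status_active
    ON t_ai_prd_bill (status)
    WHERE status <> '900' AND is_deleted = FALSE;

For the planner to use it, the query's WHERE clause must imply the index predicate, i.e. keep the same status <> '900' AND is_deleted = FALSE conditions.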

 

II Case 2: Analysis and Approach

1 The original SQL

select a.status, sum(case when b.status is null then 0 else 1 end),
        sum(case when b.cust_category = 'DZ001' then 1 else 0 end) as "zyd"
        , sum(case when b.cust_category = 'ZEJ001' then 1 else 0 end) as "zyj"
        , sum(case when b.cust_category = 'MY001' then 1 else 0 end) as "my"
        from (select regexp_split_to_table('100,200,300,400,401,500,600,402,700,701,800,901', ',') as status) a
        left join t_ai_prd_bill b on (b.status = a.status)
        WHERE b.is_deleted = FALSE
        group by a.status
        order by a.status;

2 Execution plan of the original SQL

ai=> explain analyze select a.status, sum(case when b.status is null then 0 else 1 end),
ai->         sum(case when b.cust_category = 'DZ001' then 1 else 0 end) as "zyd"
ai->         , sum(case when b.cust_category = 'ZEJ001' then 1 else 0 end) as "zyj"
ai->         , sum(case when b.cust_category = 'MY001' then 1 else 0 end) as "my"
ai->         from (select regexp_split_to_table('100,200,300,400,401,500,600,402,700,701,800,901', ',') as status) a
ai->         left join t_ai_prd_bill b on (b.status = a.status)
ai->         WHERE b.is_deleted = FALSE
ai->         group by a.status
ai->         order by a.status;
                                                                 QUERY PLAN                                                                 
--------------------------------------------------------------------------------------------------------------------------------------------
 GroupAggregate  (cost=6328.61..15390.61 rows=200 width=42) (actual time=70.104..75.647 rows=5 loops=1)
   Group Key: (regexp_split_to_table('100,200,300,400,401,500,600,402,700,701,800,901'::text, ','::text))
   ->  Merge Join  (cost=6328.61..10560.61 rows=241400 width=42) (actual time=43.422..57.495 rows=35869 loops=1)
         Merge Cond: ((regexp_split_to_table('100,200,300,400,401,500,600,402,700,701,800,901'::text, ','::text)) = ((b.status)::text))
         ->  Sort  (cost=64.84..67.34 rows=1000 width=32) (actual time=0.117..0.119 rows=12 loops=1)
               Sort Key: (regexp_split_to_table('100,200,300,400,401,500,600,402,700,701,800,901'::text, ','::text))
               Sort Method: quicksort  Memory: 25kB
               ->  Result  (cost=0.00..5.01 rows=1000 width=0) (actual time=0.094..0.102 rows=12 loops=1)
         ->  Sort  (cost=6263.78..6384.48 rows=48280 width=10) (actual time=43.293..46.852 rows=48280 loops=1)
               Sort Key: ((b.status)::text)
               Sort Method: quicksort  Memory: 3629kB
               ->  Seq Scan on t_ai_prd_bill b  (cost=0.00..2507.80 rows=48280 width=10) (actual time=0.012..30.322 rows=48280 loops=1)
                     Filter: (NOT is_deleted)
 Planning time: 0.367 ms
 Execution time: 75.737 ms
(15 rows)

ai=>

3 Analyzing the SQL and its plan

The SQL is meant to count rows of t_ai_prd_bill grouped and ordered by status. Building a single-column virtual table a with regexp_split_to_table and LEFT JOINing it to t_ai_prd_bill is an unnecessary detour: the WHERE b.is_deleted = FALSE clause discards the NULL-extended rows anyway, so the LEFT JOIN degenerates into an inner join and contributes nothing. The plan also shows a sizeable in-memory sort of the join input: Sort Method: quicksort Memory: 3629kB.
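For completeness: if the virtual table had been intended to also report statuses with zero bills, the outer-join semantics would have to be preserved by moving the filter into the join condition. A sketch of that variant (not what the business needed here):

-- Keeps LEFT JOIN semantics: statuses with no matching bills still appear with 0.
-- The is_deleted filter must sit in the ON clause, not in WHERE, otherwise the
-- NULL-extended rows are filtered out and the join degenerates again.
select a.status, sum(case when b.status is null then 0 else 1 end) as total
from (select regexp_split_to_table('100,200,300,400,401,500,600,402,700,701,800,901', ',') as status) a
left join t_ai_prd_bill b on (b.status = a.status and b.is_deleted = FALSE)
group by a.status
order by a.status;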

4 An equivalent rewrite

select  b.status,sum(case when b.status is null then 0 else 1 end),
        sum(case when b.cust_category = 'DZ001' then 1 else 0 end) as "zyd"
        , sum(case when b.cust_category = 'ZEJ001' then 1 else 0 end) as "zyj"
        , sum(case when b.cust_category = 'MY001' then 1 else 0 end) as "my"
        from t_ai_prd_bill b 
        WHERE b.is_deleted = FALSE
        group by b.status
        order by b.status;

5 Execution plan of the rewritten SQL

ai=> explain analyze select  b.status,sum(case when b.status is null then 0 else 1 end),
ai->         sum(case when b.cust_category = 'DZ001' then 1 else 0 end) as "zyd"
ai->         , sum(case when b.cust_category = 'ZEJ001' then 1 else 0 end) as "zyj"
ai->         , sum(case when b.cust_category = 'MY001' then 1 else 0 end) as "my"
ai->         from t_ai_prd_bill b 
ai->         WHERE b.is_deleted = FALSE
ai->         group by b.status
ai->         order by b.status;
                                                              QUERY PLAN                                                              
--------------------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=3473.54..3473.55 rows=6 width=10) (actual time=49.986..49.987 rows=6 loops=1)
   Sort Key: status
   Sort Method: quicksort  Memory: 25kB
   ->  HashAggregate  (cost=3473.40..3473.46 rows=6 width=10) (actual time=49.932..49.934 rows=6 loops=1)
         Group Key: status
         ->  Seq Scan on t_ai_prd_bill b  (cost=0.00..2507.80 rows=48280 width=10) (actual time=0.008..11.934 rows=48280 loops=1)
               Filter: (NOT is_deleted)
 Planning time: 0.109 ms
 Execution time: 50.060 ms
(9 rows)

ai=>

6 Summary

Comparing the plans before and after the rewrite: the join and the large in-memory sort it required are gone, and execution time drops from roughly 76 ms to 50 ms, so the database answers the query noticeably faster.
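As an aside, on PostgreSQL 9.4 and later the sum(case ...) idiom can be expressed more directly with aggregate FILTER clauses. A sketch with the same intended semantics:

-- count(*) FILTER (WHERE ...) counts only the rows matching the condition;
-- purely a readability alternative to sum(case when ... then 1 else 0 end).
select b.status,
       count(*) as total,
       count(*) filter (where b.cust_category = 'DZ001')  as "zyd",
       count(*) filter (where b.cust_category = 'ZEJ001') as "zyj",
       count(*) filter (where b.cust_category = 'MY001')  as "my"
from t_ai_prd_bill b
where b.is_deleted = FALSE
group by b.status
order by b.status;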

 

Another Way to Set Up Passwordless scp, ssh, and rsync to Another Host

I Scenario

One AIX host and one Linux host.
Files need to be copied with scp from the AIX host to the Linux host as the oracle user, and the transfer must run unattended from a script, with no manual password entry.

The catch: AIX has no ssh-copy-id command available.



# ssh-copy-id   
ksh: ssh-copy-id:  not found
# uname -M
IBM,8202-E4B
# uname -n
usp720
# uname -a
AIX usp720 1 6 00F67A854C00
# oslevel 
6.1.0.0
# 

II Solution

1 On AIX, as the oracle user, run ssh-keygen -t rsa:

ssh-keygen -t rsa
Just press Enter at every prompt.
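For a fully scripted run, the prompts can be skipped entirely. A sketch (generates an RSA key with an empty passphrase at the default path):

# -N "" sets an empty passphrase, -f names the key file; no prompts remain.
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa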

2 On AIX, as the oracle user, print the contents of ~/.ssh/id_rsa.pub:

3 On Linux, as the oracle user, append the file contents from step 2 to ~/.ssh/authorized_keys:

Also set the permissions of ~/.ssh/authorized_keys to 700.

 
chmod 700 ~/.ssh/authorized_keys
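Steps 2 and 3 can also be collapsed into a single pipeline run from the AIX side, standing in for the missing ssh-copy-id. A sketch (172.18.1.12 is the Linux host used in step 4; you are prompted for the password once, after which key-based logins work):

# Append the local public key to the remote oracle user's authorized_keys.
cat ~/.ssh/id_rsa.pub | ssh oracle@172.18.1.12 \
    'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 700 ~/.ssh/authorized_keys'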

4 On AIX, test the passwordless login:

 
bash-4.3$ hostname
usp720
bash-4.3$ ssh oracle@172.18.1.12
Last login: Wed Mar 14 16:41:42 2018 from 172.18.1.1
localhost-> hostname
localhost.localdomain
localhost-> 

 

III A Note

Back in 2008, while learning to install Oracle 10gR2 RAC on CentOS 4.8, I used this very method to set up passwordless SSH for the oracle user between the two RAC nodes. A piece of knowledge from 10 years ago, reawakened.

Configuring a Highly Available etcd Cluster on CentOS 7

The previous article covered installing and deploying Kubernetes on CentOS 7. Next up: configuring a highly available etcd cluster on CentOS 7.

The official etcd clustering documentation describes three ways to bootstrap an etcd cluster. Below are the configuration steps for two of them: static bootstrapping and the public etcd discovery service. Assumptions:

  • etcd is already installed on CentOS 7; if not, see "Installing and Deploying Kubernetes on CentOS 7" below, or install etcd on its own via yum;
  • the three machines forming the etcd cluster are 172.16.11.30, 172.16.11.71, and 172.16.11.78;
  • their names within the cluster are infra0, infra1, and infra2, respectively.

I Static configuration

1 On the first machine, 172.16.11.30:

[root@localhost ~]# etcd --name infra0 --initial-advertise-peer-urls http://172.16.11.30:2380 --listen-peer-urls http://172.16.11.30:2380 --listen-client-urls http://172.16.11.30:2379,http://127.0.0.1:2379 --advertise-client-urls http://172.16.11.30:2379 --initial-cluster-token etcd-cluster-1 --initial-cluster infra0=http://172.16.11.30:2380,infra1=http://172.16.11.71:2380,infra2=http://172.16.11.78:2380 --initial-cluster-state new

2 On the second machine, 172.16.11.71:

[root@test-zyd-jy-2 ~]# etcd --name infra1 --initial-advertise-peer-urls http://172.16.11.71:2380 --listen-peer-urls http://172.16.11.71:2380 --listen-client-urls http://172.16.11.71:2379,http://127.0.0.1:2379 --advertise-client-urls http://172.16.11.71:2379 --initial-cluster-token etcd-cluster-1 --initial-cluster infra0=http://172.16.11.30:2380,infra1=http://172.16.11.71:2380,infra2=http://172.16.11.78:2380 --initial-cluster-state new

3 On the third machine, 172.16.11.78:

[root@localhost ~]# etcd --name infra2 --initial-advertise-peer-urls http://172.16.11.78:2380 --listen-peer-urls http://172.16.11.78:2380 --listen-client-urls http://172.16.11.78:2379,http://127.0.0.1:2379 --advertise-client-urls http://172.16.11.78:2379 --initial-cluster-token etcd-cluster-1 --initial-cluster infra0=http://172.16.11.30:2380,infra1=http://172.16.11.71:2380,infra2=http://172.16.11.78:2380 --initial-cluster-state new

4 On any machine, verify cluster health:

[root@localhost ~]# etcdctl cluster-health
member 2a21240027dcd247 is healthy: got healthy result from http://172.16.11.30:2379
member 784c02b20039dc5b is healthy: got healthy result from http://172.16.11.71:2379
member b1ca771041e5f776 is healthy: got healthy result from http://172.16.11.78:2379
cluster is healthy
[root@localhost ~]#

5 On any machine, set a key with etcdctl set:

[root@localhost ~]# etcdctl set k1 100
100
[root@localhost ~]# etcdctl get k1
100
[root@localhost ~]#

6 On the other machines, read the key back with etcdctl get:

[root@test-zyd-jy-2 ~]# etcdctl get k1
100
[root@test-zyd-jy-2 ~]# 

This completes a 3-node etcd cluster configured the static way. Notes on several of the parameters used above:

  • name: the name of each etcd node within the cluster; must be unique;
  • listen-client-urls: the addresses on which etcd listens for client requests;
  • advertise-client-urls: the addresses a cluster member advertises to other members and to clients; this must not be a localhost address and must never be left empty.
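Running etcd by hand like this is fine for a walkthrough, but for anything durable the same settings usually go into /etc/etcd/etcd.conf, since each command-line flag maps to an ETCD_* environment variable that the systemd unit reads. A sketch for infra0, mirroring the flags above:

# /etc/etcd/etcd.conf (sketch for infra0; repeat per node with its own name/IPs)
ETCD_NAME=infra0
ETCD_LISTEN_PEER_URLS="http://172.16.11.30:2380"
ETCD_LISTEN_CLIENT_URLS="http://172.16.11.30:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://172.16.11.30:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://172.16.11.30:2379"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-1"
ETCD_INITIAL_CLUSTER="infra0=http://172.16.11.30:2380,infra1=http://172.16.11.71:2380,infra2=http://172.16.11.78:2380"
ETCD_INITIAL_CLUSTER_STATE="new"

With that in place, each node joins the cluster via systemctl start etcd instead of a foreground command.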

 

II Discovery-based configuration

Discovery-based configuration comes in two flavors: the etcd discovery service and DNS discovery. Here the etcd discovery service is used to configure the cluster.

1 Create a public discovery token

curl https://discovery.etcd.io/new?size=3

Run this on any one of the nodes and capture the output; the returned URL is then used in the configuration of the entire etcd cluster.

 [root@localhost ~]# curl https://discovery.etcd.io/new?size=3
https://discovery.etcd.io/920771e9b700844194c13ebeff79a543[root@localhost ~]# 

2 On the first machine, 172.16.11.30:

[root@localhost ~]# etcd --name centos-master --initial-advertise-peer-urls http://172.16.11.30:2380 --listen-peer-urls http://172.16.11.30:2380  --listen-client-urls http://127.0.0.1:4001,http://172.16.11.30:4001 --advertise-client-urls http://172.16.11.30:4001  --discovery https://discovery.etcd.io/920771e9b700844194c13ebeff79a543
2018-01-16 09:40:51.420704 I | etcdmain: etcd Version: 3.2.9
2018-01-16 09:40:51.421594 I | etcdmain: Git SHA: f1d7dd8
2018-01-16 09:40:51.421624 I | etcdmain: Go Version: go1.8.3
2018-01-16 09:40:51.421644 I | etcdmain: Go OS/Arch: linux/amd64
2018-01-16 09:40:51.421673 I | etcdmain: setting maximum number of CPUs to 4, total number of available CPUs is 4
2018-01-16 09:40:51.422265 W | etcdmain: no data-dir provided, using default data-dir ./centos-master.etcd
2018-01-16 09:40:51.422854 I | embed: listening for peers on http://172.16.11.30:2380
2018-01-16 09:40:51.423539 I | embed: listening for client requests on 127.0.0.1:4001
2018-01-16 09:40:51.423694 I | embed: listening for client requests on 172.16.11.30:4001
2018-01-16 09:40:53.344420 N | discovery: found self 425cf1a6e9b4c574 in the cluster
2018-01-16 09:40:53.344515 N | discovery: found 1 peer(s), waiting for 2 more
...
...

3 On the second machine, 172.16.11.71:

[root@test-zyd-jy-2 ~]# etcd --name centos-master71 --initial-advertise-peer-urls http://172.16.11.71:2380 --listen-peer-urls http://172.16.11.71:2380  --listen-client-urls http://127.0.0.1:4001,http://172.16.11.71:4001 --advertise-client-urls http://172.16.11.71:4001  --discovery https://discovery.etcd.io/920771e9b700844194c13ebeff79a543
2018-01-16 09:42:18.030679 I | etcdmain: etcd Version: 3.2.9
2018-01-16 09:42:18.030858 I | etcdmain: Git SHA: f1d7dd8
2018-01-16 09:42:18.030872 I | etcdmain: Go Version: go1.8.3
2018-01-16 09:42:18.030884 I | etcdmain: Go OS/Arch: linux/amd64
2018-01-16 09:42:18.030897 I | etcdmain: setting maximum number of CPUs to 4, total number of available CPUs is 4
2018-01-16 09:42:18.030920 W | etcdmain: no data-dir provided, using default data-dir ./centos-master71.etcd
2018-01-16 09:42:18.031763 N | etcdmain: the server is already initialized as member before, starting as etcd member...
2018-01-16 09:42:18.032013 I | embed: listening for peers on http://172.16.11.71:2380
2018-01-16 09:42:18.032364 I | embed: listening for client requests on 127.0.0.1:4001
2018-01-16 09:42:18.032463 I | embed: listening for client requests on 172.16.11.71:4001
2018-01-16 09:42:20.062578 N | discovery: found peer 425cf1a6e9b4c574 in the cluster
2018-01-16 09:42:20.062701 N | discovery: found self d91473943b320695 in the cluster
2018-01-16 09:42:20.062718 N | discovery: found 2 peer(s), waiting for 1 more

4 On the third machine, 172.16.11.78:

[root@localhost ~]# etcd --name centos-master78 --initial-advertise-peer-urls http://172.16.11.78:2380 --listen-peer-urls http://172.16.11.78:2380  --listen-client-urls http://127.0.0.1:4001,http://172.16.11.78:4001 --advertise-client-urls http://172.16.11.78:4001  --discovery https://discovery.etcd.io/920771e9b700844194c13ebeff79a543
2018-01-16 09:43:08.553406 I | etcdmain: etcd Version: 3.2.9
2018-01-16 09:43:08.553770 I | etcdmain: Git SHA: f1d7dd8
2018-01-16 09:43:08.553785 I | etcdmain: Go Version: go1.8.3
2018-01-16 09:43:08.553799 I | etcdmain: Go OS/Arch: linux/amd64
2018-01-16 09:43:08.553812 I | etcdmain: setting maximum number of CPUs to 8, total number of available CPUs is 8
2018-01-16 09:43:08.553840 W | etcdmain: no data-dir provided, using default data-dir ./centos-master78.etcd
2018-01-16 09:43:08.554396 N | etcdmain: the server is already initialized as member before, starting as etcd member...
2018-01-16 09:43:08.554762 I | embed: listening for peers on http://172.16.11.78:2380
2018-01-16 09:43:08.554951 I | embed: listening for client requests on 127.0.0.1:4001
2018-01-16 09:43:08.555013 I | embed: listening for client requests on 172.16.11.78:4001
2018-01-16 09:43:10.345677 N | discovery: found peer 425cf1a6e9b4c574 in the cluster
2018-01-16 09:43:10.345813 N | discovery: found peer d91473943b320695 in the cluster
2018-01-16 09:43:10.345831 N | discovery: found self 98d67d53103d0e5e in the cluster
2018-01-16 09:43:10.345845 N | discovery: found 3 needed peer(s)
2018-01-16 09:43:10.346126 I | etcdserver: name = centos-master78
2018-01-16 09:43:10.346145 I | etcdserver: data dir = centos-master78.etcd
2018-01-16 09:43:10.346161 I | etcdserver: member dir = centos-master78.etcd/member
2018-01-16 09:43:10.346210 I | etcdserver: heartbeat = 100ms
2018-01-16 09:43:10.346226 I | etcdserver: election = 1000ms
2018-01-16 09:43:10.346240 I | etcdserver: snapshot count = 100000
2018-01-16 09:43:10.346286 I | etcdserver: discovery URL= https://discovery.etcd.io/920771e9b700844194c13ebeff79a543
2018-01-16 09:43:10.346322 I | etcdserver: advertise client URLs = http://172.16.11.78:4001
2018-01-16 09:43:10.346368 I | etcdserver: initial advertise peer URLs = http://172.16.11.78:2380
2018-01-16 09:43:10.346429 I | etcdserver: initial cluster = centos-master78=http://172.16.11.78:2380
2018-01-16 09:43:10.353143 I | etcdserver: starting member 98d67d53103d0e5e in cluster 57d019a9b13bd63a
2018-01-16 09:43:10.353268 I | raft: 98d67d53103d0e5e became follower at term 0

5 On any machine, verify cluster health:

[root@localhost ~]# etcdctl cluster-health
member 425cf1a6e9b4c574 is healthy: got healthy result from http://172.16.11.30:4001
member 98d67d53103d0e5e is healthy: got healthy result from http://172.16.11.78:4001
member d91473943b320695 is healthy: got healthy result from http://172.16.11.71:4001
cluster is healthy
[root@localhost ~]# 

This completes the etcd cluster configuration by both methods: static and the discovery service.

 

Installing and Deploying Kubernetes on CentOS 7

Overview:

This article records the process of installing and configuring Kubernetes on CentOS 7.2. The two machines involved:

IP            OS level                              Kernel version                Usage
172.16.11.36  CentOS Linux release 7.2.1511 (Core)  3.10.0-327.el7.x86_64 x86_64  Kubernetes master node
172.16.11.29  CentOS Linux release 7.2.1511 (Core)  3.10.0-327.el7.x86_64 x86_64  Kubernetes node (worker)

Below, "master node" means an operation on 172.16.11.36, "node" means 172.16.11.29, and "all nodes" means both machines.

A quick note on the Kubernetes basics:

The master node runs kube-apiserver, kube-controller-manager, and kube-scheduler.

The node runs kube-proxy and kubelet.

 

 

These services are managed by systemd, and their configuration files live under /etc/kubernetes.
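A quick way to see the units involved once the packages are installed (a sketch; output omitted):

# List the systemd unit files for the Kubernetes, etcd, and flannel services.
systemctl list-unit-files | grep -E 'kube|etcd|flannel'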

Installation steps:

1 On all nodes: vi /etc/yum.repos.d/virt7-docker-common-release.repo

 
[root@localhost ~]# cat /etc/yum.repos.d/virt7-docker-common-release.repo
[virt7-docker-common-release]
name=virt7-docker-common-release
baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
gpgcheck=0
[root@localhost ~]#

2 On all nodes: install Kubernetes, etcd, and flannel. This will also pull in docker and cadvisor.

yum -y install --enablerepo=virt7-docker-common-release kubernetes etcd flannel

This command also installs docker (version 1.12.6). If a different docker version is already installed on the machine, the install may fail with a conflict like this:

 
...
--> 处理 docker-ce-17.06.0.ce-1.el7.centos.x86_64 与 docker 的冲突
--> 正在使用新的信息重新解决依赖关系
--> 正在检查事务
---> 软件包 docker-ce.x86_64.0.17.06.0.ce-1.el7.centos 将被 升级
---> 软件包 docker-ce.x86_64.0.17.09.1.ce-1.el7.centos 将被 更新
--> 处理 docker-ce-17.09.1.ce-1.el7.centos.x86_64 与 docker-io 的冲突
--> 处理 docker-ce-17.09.1.ce-1.el7.centos.x86_64 与 docker 的冲突
--> 解决依赖关系完成
错误:docker-ce conflicts with 2:docker-1.12.6-68.gitec8512b.el7.centos.x86_64
 您可以尝试添加 --skip-broken 选项来解决该问题
** 发现 6 个已存在的 RPM 数据库问题, 'yum check' 输出如下:
cloog-ppl-0.15.7-1.2.el6.x86_64 有缺少的需求 libgmp.so.3()(64bit)
createrepo-0.9.9-26.el6.noarch 有缺少的需求 python(abi) = ('0', '2.6', None)
libgcj-4.4.7-17.el6.x86_64 有缺少的需求 libgmp.so.3()(64bit)
ppl-0.10.2-11.el6.x86_64 有缺少的需求 libgmp.so.3()(64bit)
1:redhat-upgrade-tool-0.7.22-3.el6.centos.noarch 有缺少的需求 preupgrade-assistant >= ('0', '1.0.2', '4')
1:redhat-upgrade-tool-0.7.22-3.el6.centos.noarch 有缺少的需求 python(abi) = ('0', '2.6', None)
[root@localhost ~]#

My fix here was to remove the previously installed docker CE packages:

 
[root@localhost ~]# rpm -qa|grep docker
docker-ce-17.06.0.ce-1.el7.centos.x86_64
[root@localhost ~]# rpm -e docker-ce-17.06.0.ce-1.el7.centos.x86_64
[root@localhost ~]# docker -v
-bash: /usr/bin/docker: 没有那个文件或目录
[root@localhost ~]#

After resolving the conflict, rerun the installation:

[root@localhost ~]# yum -y install --enablerepo=virt7-docker-common-release kubernetes etcd flannel
已加载插件:fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.cn99.com
 * updates: mirrors.aliyun.com
正在解决依赖关系
--> 正在检查事务
---> 软件包 etcd.x86_64.0.3.2.9-3.el7 将被 安装
---> 软件包 flannel.x86_64.0.0.7.1-2.el7 将被 安装
---> 软件包 kubernetes.x86_64.0.1.5.2-0.7.git269f928.el7 将被 安装
--> 正在处理依赖关系 kubernetes-node = 1.5.2-0.7.git269f928.el7,它被软件包 kubernetes-1.5.2-0.7.git269f928.el7.x86_64 需要
--> 正在处理依赖关系 kubernetes-master = 1.5.2-0.7.git269f928.el7,它被软件包 kubernetes-1.5.2-0.7.git269f928.el7.x86_64 需要
--> 正在检查事务
---> 软件包 kubernetes-master.x86_64.0.1.5.2-0.7.git269f928.el7 将被 安装
--> 正在处理依赖关系 kubernetes-client = 1.5.2-0.7.git269f928.el7,它被软件包 kubernetes-master-1.5.2-0.7.git269f928.el7.x86_64 需要
---> 软件包 kubernetes-node.x86_64.0.1.5.2-0.7.git269f928.el7 将被 安装
--> 正在处理依赖关系 socat,它被软件包 kubernetes-node-1.5.2-0.7.git269f928.el7.x86_64 需要
--> 正在处理依赖关系 docker,它被软件包 kubernetes-node-1.5.2-0.7.git269f928.el7.x86_64 需要
--> 正在处理依赖关系 conntrack-tools,它被软件包 kubernetes-node-1.5.2-0.7.git269f928.el7.x86_64 需要
--> 正在检查事务
---> 软件包 conntrack-tools.x86_64.0.1.4.4-3.el7_3 将被 安装
--> 正在处理依赖关系 libnetfilter_conntrack >= 1.0.6,它被软件包 conntrack-tools-1.4.4-3.el7_3.x86_64 需要
--> 正在处理依赖关系 libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.1)(64bit),它被软件包 conntrack-tools-1.4.4-3.el7_3.x86_64 需要
--> 正在处理依赖关系 libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.0)(64bit),它被软件包 conntrack-tools-1.4.4-3.el7_3.x86_64 需要
--> 正在处理依赖关系 libnetfilter_cthelper.so.0(LIBNETFILTER_CTHELPER_1.0)(64bit),它被软件包 conntrack-tools-1.4.4-3.el7_3.x86_64 需要
--> 正在处理依赖关系 libnetfilter_queue.so.1()(64bit),它被软件包 conntrack-tools-1.4.4-3.el7_3.x86_64 需要
--> 正在处理依赖关系 libnetfilter_cttimeout.so.1()(64bit),它被软件包 conntrack-tools-1.4.4-3.el7_3.x86_64 需要
--> 正在处理依赖关系 libnetfilter_cthelper.so.0()(64bit),它被软件包 conntrack-tools-1.4.4-3.el7_3.x86_64 需要
---> 软件包 docker.x86_64.2.1.12.6-68.gitec8512b.el7.centos 将被 安装
--> 正在处理依赖关系 docker-common = 2:1.12.6-68.gitec8512b.el7.centos,它被软件包 2:docker-1.12.6-68.gitec8512b.el7.centos.x86_64 需要
--> 正在处理依赖关系 docker-client = 2:1.12.6-68.gitec8512b.el7.centos,它被软件包 2:docker-1.12.6-68.gitec8512b.el7.centos.x86_64 需要
---> 软件包 kubernetes-client.x86_64.0.1.5.2-0.7.git269f928.el7 将被 安装
---> 软件包 socat.x86_64.0.1.7.3.2-2.el7 将被 安装
--> 正在检查事务
---> 软件包 docker-client.x86_64.2.1.12.6-68.gitec8512b.el7.centos 将被 安装
---> 软件包 docker-common.x86_64.2.1.12.6-68.gitec8512b.el7.centos 将被 安装
--> 正在处理依赖关系 oci-umount >= 2:2.0.0-1,它被软件包 2:docker-common-1.12.6-68.gitec8512b.el7.centos.x86_64 需要
--> 正在处理依赖关系 container-storage-setup >= 0.7.0-1,它被软件包 2:docker-common-1.12.6-68.gitec8512b.el7.centos.x86_64 需要
--> 正在处理依赖关系 container-selinux >= 2:2.21-2,它被软件包 2:docker-common-1.12.6-68.gitec8512b.el7.centos.x86_64 需要
---> 软件包 libnetfilter_conntrack.x86_64.0.1.0.4-2.el7 将被 升级
---> 软件包 libnetfilter_conntrack.x86_64.0.1.0.6-1.el7_3 将被 更新
---> 软件包 libnetfilter_cthelper.x86_64.0.1.0.0-9.el7 将被 安装
---> 软件包 libnetfilter_cttimeout.x86_64.0.1.0.0-6.el7 将被 安装
---> 软件包 libnetfilter_queue.x86_64.0.1.0.2-2.el7_2 将被 安装
--> 正在检查事务
---> 软件包 container-selinux.noarch.2.2.19-2.1.el7 将被 升级
---> 软件包 container-selinux.noarch.2.2.33-1.git86f33cd.el7 将被 更新
---> 软件包 container-storage-setup.noarch.0.0.8.0-3.git1d27ecf.el7 将被 安装
--> 正在处理依赖关系 parted,它被软件包 container-storage-setup-0.8.0-3.git1d27ecf.el7.noarch 需要
---> 软件包 oci-umount.x86_64.2.2.3.0-1.git51e7c50.el7 将被 安装
--> 正在检查事务
---> 软件包 parted.x86_64.0.3.1-28.el7 将被 安装
--> 解决依赖关系完成

依赖关系解决

=================================================================================================================================
 Package                             架构               版本                                            源                  大小
=================================================================================================================================
正在安装:
 etcd                                x86_64             3.2.9-3.el7                                     extras             8.8 M
 flannel                             x86_64             0.7.1-2.el7                                     extras             6.6 M
 kubernetes                          x86_64             1.5.2-0.7.git269f928.el7                        extras              36 k
为依赖而安装:
 conntrack-tools                     x86_64             1.4.4-3.el7_3                                   base               186 k
 container-storage-setup             noarch             0.8.0-3.git1d27ecf.el7                          extras              33 k
 docker                              x86_64             2:1.12.6-68.gitec8512b.el7.centos               extras              15 M
 docker-client                       x86_64             2:1.12.6-68.gitec8512b.el7.centos               extras             3.4 M
 docker-common                       x86_64             2:1.12.6-68.gitec8512b.el7.centos               extras              82 k
 kubernetes-client                   x86_64             1.5.2-0.7.git269f928.el7                        extras              14 M
 kubernetes-master                   x86_64             1.5.2-0.7.git269f928.el7                        extras              25 M
 kubernetes-node                     x86_64             1.5.2-0.7.git269f928.el7                        extras              14 M
 libnetfilter_cthelper               x86_64             1.0.0-9.el7                                     base                18 k
 libnetfilter_cttimeout              x86_64             1.0.0-6.el7                                     base                18 k
 libnetfilter_queue                  x86_64             1.0.2-2.el7_2                                   base                23 k
 oci-umount                          x86_64             2:2.3.0-1.git51e7c50.el7                        extras              30 k
 parted                              x86_64             3.1-28.el7                                      base               607 k
 socat                               x86_64             1.7.3.2-2.el7                                   base               290 k
为依赖而更新:
 container-selinux                   noarch             2:2.33-1.git86f33cd.el7                         extras              31 k
 libnetfilter_conntrack              x86_64             1.0.6-1.el7_3                                   base                55 k

事务概要
=================================================================================================================================
安装  3 软件包 (+14 依赖软件包)
升级           (  2 依赖软件包)

总下载量:88 M
Downloading packages:
Not downloading deltainfo for extras, MD is 71 k and rpms are 31 k
No Presto metadata available for base
(1/19): container-selinux-2.33-1.git86f33cd.el7.noarch.rpm                                                |  31 kB  00:00:00     
(2/19): container-storage-setup-0.8.0-3.git1d27ecf.el7.noarch.rpm                                         |  33 kB  00:00:00     
(3/19): docker-common-1.12.6-68.gitec8512b.el7.centos.x86_64.rpm                                          |  82 kB  00:00:00     
(4/19): conntrack-tools-1.4.4-3.el7_3.x86_64.rpm                                                          | 186 kB  00:00:00     
(5/19): kubernetes-1.5.2-0.7.git269f928.el7.x86_64.rpm                                                    |  36 kB  00:00:00     
(6/19): docker-client-1.12.6-68.gitec8512b.el7.centos.x86_64.rpm                                          | 3.4 MB  00:00:01     
(7/19): docker-1.12.6-68.gitec8512b.el7.centos.x86_64.rpm                                                 |  15 MB  00:00:05     
(8/19): flannel-0.7.1-2.el7.x86_64.rpm                                                                    | 6.6 MB  00:00:05     
(9/19): libnetfilter_conntrack-1.0.6-1.el7_3.x86_64.rpm                                                   |  55 kB  00:00:00     
(10/19): libnetfilter_cthelper-1.0.0-9.el7.x86_64.rpm                                                     |  18 kB  00:00:00     
(11/19): libnetfilter_cttimeout-1.0.0-6.el7.x86_64.rpm                                                    |  18 kB  00:00:00     
(12/19): libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm                                                      |  23 kB  00:00:00     
(13/19): kubernetes-client-1.5.2-0.7.git269f928.el7.x86_64.rpm                                            |  14 MB  00:00:06     
(14/19): oci-umount-2.3.0-1.git51e7c50.el7.x86_64.rpm                                                     |  30 kB  00:00:00     
(15/19): parted-3.1-28.el7.x86_64.rpm                                                                     | 607 kB  00:00:00     
(16/19): socat-1.7.3.2-2.el7.x86_64.rpm                                                                   | 290 kB  00:00:01     
(17/19): kubernetes-node-1.5.2-0.7.git269f928.el7.x86_64.rpm                                              |  14 MB  00:00:04     
(18/19): etcd-3.2.9-3.el7.x86_64.rpm                                                                      | 8.8 MB  00:00:11     
(19/19): kubernetes-master-1.5.2-0.7.git269f928.el7.x86_64.rpm                                            |  25 MB  00:00:12     
---------------------------------------------------------------------------------------------------------------------------------
总计                                                                                             6.2 MB/s |  88 MB  00:00:14     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
警告:RPM 数据库已被非 yum 程序修改。
** 发现 6 个已存在的 RPM 数据库问题, 'yum check' 输出如下:
cloog-ppl-0.15.7-1.2.el6.x86_64 有缺少的需求 libgmp.so.3()(64bit)
createrepo-0.9.9-26.el6.noarch 有缺少的需求 python(abi) = ('0', '2.6', None)
libgcj-4.4.7-17.el6.x86_64 有缺少的需求 libgmp.so.3()(64bit)
ppl-0.10.2-11.el6.x86_64 有缺少的需求 libgmp.so.3()(64bit)
1:redhat-upgrade-tool-0.7.22-3.el6.centos.noarch 有缺少的需求 preupgrade-assistant >= ('0', '1.0.2', '4')
1:redhat-upgrade-tool-0.7.22-3.el6.centos.noarch 有缺少的需求 python(abi) = ('0', '2.6', None)
  正在安装    : kubernetes-client-1.5.2-0.7.git269f928.el7.x86_64                                                           1/21 
  正在安装    : kubernetes-master-1.5.2-0.7.git269f928.el7.x86_64                                                           2/21 
  正在更新    : libnetfilter_conntrack-1.0.6-1.el7_3.x86_64                                                                 3/21 
  正在安装    : socat-1.7.3.2-2.el7.x86_64                                                                                  4/21 
  正在安装    : parted-3.1-28.el7.x86_64                                                                                    5/21 
  正在安装    : container-storage-setup-0.8.0-3.git1d27ecf.el7.noarch                                                       6/21 
  正在安装    : libnetfilter_cthelper-1.0.0-9.el7.x86_64                                                                    7/21 
  正在更新    : 2:container-selinux-2.33-1.git86f33cd.el7.noarch                                                            8/21 
  正在安装    : libnetfilter_queue-1.0.2-2.el7_2.x86_64                                                                     9/21 
  正在安装    : 2:oci-umount-2.3.0-1.git51e7c50.el7.x86_64                                                                 10/21 
  正在安装    : 2:docker-common-1.12.6-68.gitec8512b.el7.centos.x86_64                                                     11/21 
  正在安装    : 2:docker-client-1.12.6-68.gitec8512b.el7.centos.x86_64                                                     12/21 
  正在安装    : 2:docker-1.12.6-68.gitec8512b.el7.centos.x86_64                                                            13/21 
warning: /etc/docker/daemon.json created as /etc/docker/daemon.json.rpmnew
  正在安装    : libnetfilter_cttimeout-1.0.0-6.el7.x86_64                                                                  14/21 
  正在安装    : conntrack-tools-1.4.4-3.el7_3.x86_64                                                                       15/21 
  正在安装    : kubernetes-node-1.5.2-0.7.git269f928.el7.x86_64                                                            16/21 
  正在安装    : kubernetes-1.5.2-0.7.git269f928.el7.x86_64                                                                 17/21 
  正在安装    : etcd-3.2.9-3.el7.x86_64                                                                                    18/21 
  正在安装    : flannel-0.7.1-2.el7.x86_64                                                                                 19/21 
  清理        : 2:container-selinux-2.19-2.1.el7.noarch                                                                    20/21 
  清理        : libnetfilter_conntrack-1.0.4-2.el7.x86_64                                                                  21/21 
  验证中      : 2:docker-client-1.12.6-68.gitec8512b.el7.centos.x86_64                                                      1/21 
  验证中      : container-storage-setup-0.8.0-3.git1d27ecf.el7.noarch                                                       2/21 
  验证中      : libnetfilter_cttimeout-1.0.0-6.el7.x86_64                                                                   3/21 
  验证中      : 2:oci-umount-2.3.0-1.git51e7c50.el7.x86_64                                                                  4/21 
  验证中      : flannel-0.7.1-2.el7.x86_64                                                                                  5/21 
  验证中      : libnetfilter_queue-1.0.2-2.el7_2.x86_64                                                                     6/21 
  验证中      : 2:docker-common-1.12.6-68.gitec8512b.el7.centos.x86_64                                                      7/21 
  验证中      : kubernetes-node-1.5.2-0.7.git269f928.el7.x86_64                                                             8/21 
  验证中      : kubernetes-client-1.5.2-0.7.git269f928.el7.x86_64                                                           9/21 
  验证中      : 2:container-selinux-2.33-1.git86f33cd.el7.noarch                                                           10/21 
  验证中      : kubernetes-master-1.5.2-0.7.git269f928.el7.x86_64                                                          11/21 
  验证中      : etcd-3.2.9-3.el7.x86_64                                                                                    12/21 
  验证中      : libnetfilter_cthelper-1.0.0-9.el7.x86_64                                                                   13/21 
  验证中      : parted-3.1-28.el7.x86_64                                                                                   14/21 
  验证中      : conntrack-tools-1.4.4-3.el7_3.x86_64                                                                       15/21 
  验证中      : socat-1.7.3.2-2.el7.x86_64                                                                                 16/21 
  验证中      : libnetfilter_conntrack-1.0.6-1.el7_3.x86_64                                                                17/21 
  验证中      : 2:docker-1.12.6-68.gitec8512b.el7.centos.x86_64                                                            18/21 
  验证中      : kubernetes-1.5.2-0.7.git269f928.el7.x86_64                                                                 19/21 
  验证中      : 2:container-selinux-2.19-2.1.el7.noarch                                                                    20/21 
  验证中      : libnetfilter_conntrack-1.0.4-2.el7.x86_64                                                                  21/21 

已安装:
  etcd.x86_64 0:3.2.9-3.el7          flannel.x86_64 0:0.7.1-2.el7          kubernetes.x86_64 0:1.5.2-0.7.git269f928.el7         

作为依赖被安装:
  conntrack-tools.x86_64 0:1.4.4-3.el7_3                         container-storage-setup.noarch 0:0.8.0-3.git1d27ecf.el7        
  docker.x86_64 2:1.12.6-68.gitec8512b.el7.centos                docker-client.x86_64 2:1.12.6-68.gitec8512b.el7.centos         
  docker-common.x86_64 2:1.12.6-68.gitec8512b.el7.centos         kubernetes-client.x86_64 0:1.5.2-0.7.git269f928.el7            
  kubernetes-master.x86_64 0:1.5.2-0.7.git269f928.el7            kubernetes-node.x86_64 0:1.5.2-0.7.git269f928.el7              
  libnetfilter_cthelper.x86_64 0:1.0.0-9.el7                     libnetfilter_cttimeout.x86_64 0:1.0.0-6.el7                    
  libnetfilter_queue.x86_64 0:1.0.2-2.el7_2                      oci-umount.x86_64 2:2.3.0-1.git51e7c50.el7                     
  parted.x86_64 0:3.1-28.el7                                     socat.x86_64 0:1.7.3.2-2.el7                                   

作为依赖被升级:
  container-selinux.noarch 2:2.33-1.git86f33cd.el7                 libnetfilter_conntrack.x86_64 0:1.0.6-1.el7_3                

完毕!
[root@localhost ~]# 
[root@localhost ~]# docker -v
Docker version 1.12.6, build ec8512b/1.12.6
[root@localhost ~]#

3 Configure the hosts file on all nodes:

[root@localhost ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.11.36 centos-master
172.16.11.29 centos-minion-1
[root@localhost ~]#

4 On all nodes, edit /etc/kubernetes/config (identical on all hosts) so that it contains:

[root@localhost ~]# vi /etc/kubernetes/config 
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://centos-master:8080"

In effect, this just changes the last line of the file from

KUBE_MASTER="--master=http://127.0.0.1:8080"

to:

KUBE_MASTER="--master=http://centos-master:8080"
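The same edit can be scripted across machines. A sketch (run on every node; path as above):

# Point the shared config at the master's API server instead of localhost.
sed -i 's|http://127.0.0.1:8080|http://centos-master:8080|' /etc/kubernetes/config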

5 Disable SELinux and the firewall on all nodes

 
setenforce 0
systemctl disable iptables-services firewalld
systemctl stop iptables-services firewalld

6 On the master node only, modify /etc/etcd/etcd.conf to contain the following, leaving the rest unchanged:

 
# [member]
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"

#[cluster]
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"

7 On the master node only, modify /etc/kubernetes/apiserver to contain the following, leaving the rest unchanged:

 

# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port kubelets listen on
KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://centos-master:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# Add your own!
KUBE_API_ARGS=""

8 On the master node, start etcd and seed the flannel network configuration:

systemctl start etcd
etcdctl mkdir /kube-centos/network
etcdctl mk /kube-centos/network/config "{ \"Network\": \"172.30.0.0/16\", \"SubnetLen\": 24, \"Backend\": { \"Type\": \"vxlan\" } }"
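Optionally, confirm the key landed as expected:

# Read back the flannel network configuration just written to etcd.
etcdctl get /kube-centos/network/config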

9 On the master node, edit /etc/sysconfig/flanneld:

 
[root@localhost ~]# cat /etc/sysconfig/flanneld
# Flanneld configuration options  

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://centos-master:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/kube-centos/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
 
[root@localhost ~]#

10 Start the services on the master node:

for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler flanneld; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done

11 On the node, edit /etc/kubernetes/kubelet:

 
[root@dev-malay-29 ~]# cat /etc/kubernetes/kubelet 
###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=centos-minion-1"

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://centos-master:8080"

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""
[root@dev-malay-29 ~]#

Adjust KUBELET_HOSTNAME="--hostname-override=centos-minion-1" to your environment; the value is the machine name assigned in step 3. If there are multiple nodes, configure each one accordingly.

12 On the node, edit /etc/sysconfig/flanneld:

 
[root@dev-malay-29 ~]# cat /etc/sysconfig/flanneld
# Flanneld configuration options  

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://centos-master:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/kube-centos/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
 
[root@dev-malay-29 ~]#

13 Start the services on the node:

 for SERVICES in kube-proxy kubelet flanneld docker; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done

14 Configure kubectl on the node:

 
kubectl config set-cluster default-cluster --server=http://centos-master:8080
kubectl config set-context default-context --cluster=default-cluster --user=default-admin
kubectl config use-context default-context
...
[root@dev-malay-29 ~]# kubectl config set-cluster default-cluster --server=http://centos-master:8080
Cluster "default-cluster" set.
[root@dev-malay-29 ~]# kubectl config set-context default-context --cluster=default-cluster --user=default-admin
Context "default-context" set.
[root@dev-malay-29 ~]# kubectl config use-context default-context
Switched to context "default-context".
[root@dev-malay-29 ~]# 

15 Verification

On both the master and the node, the deployment can be verified by running:

kubectl get nodes

 
[root@localhost manifests]# kubectl get nodes
NAME              STATUS     AGE
centos-minion-1   NotReady   2s
[root@localhost manifests]# 

[root@dev-malay-29 ~]# kubectl get nodes
NAME              STATUS    AGE
centos-minion-1   Ready     12s
[root@dev-malay-29 ~]# 

16 References

https://kubernetes.io/docs/getting-started-guides/centos/centos_manual_config/

This completes a Kubernetes deployment on CentOS 7 with one master node and one worker node.