Setting up a Kubernetes cluster on cloud servers

VMs used: Baidu Cloud BCC instances, CentOS 7.5, 2 cores / 4 GB RAM

The Baidu Cloud images' default yum repositories are used; the default repositories on Tencent Cloud and Alibaba Cloud instances also work with this guide.

Three VMs are used to build the k8s cluster:

master 192.168.0.8    (master and etcd share this host)
minion1 192.168.0.9
minion2 192.168.0.10

The cloud VMs' default yum repositories provide all of these packages.
Install them on all three hosts:

[root@master ~]# yum -y install kubernetes etcd flannel ntp
[root@minion1 ~]# yum -y install kubernetes etcd flannel ntp
[root@minion2 ~]# yum -y install kubernetes etcd flannel ntp

Disable the firewall on all three hosts. Note: in a production environment, do not disable firewalld; write firewalld rules instead.

[root@master ~]# systemctl stop firewalld && systemctl disable firewalld && systemctl status firewalld
[root@minion1 ~]# systemctl stop firewalld && systemctl disable firewalld && systemctl status firewalld
[root@minion2 ~]# systemctl stop firewalld && systemctl disable firewalld && systemctl status firewalld
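In production, rather than disabling firewalld, only the ports the cluster actually uses should be opened. A minimal sketch (the helper name is made up for illustration) that prints the firewall-cmd calls for the ports used later in this document, so they can be reviewed before running:

```shell
# Hypothetical helper (not part of this setup): print the firewall-cmd
# commands that would open the cluster's ports, for review before running.
gen_fw_rules() {
    # etcd client, apiserver, kube-proxy and kubelet ports (see the summary)
    for port in 2379 8080 10249 10248 10250 10255; do
        echo "firewall-cmd --permanent --add-port=${port}/tcp"
    done
    echo "firewall-cmd --reload"
}

gen_fw_rules
```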

Add the following entries to /etc/hosts on one host, then copy the file to the others:

[root@master ~]# vim /etc/hosts
192.168.0.8 master
192.168.0.8 etcd
192.168.0.9 minion1
192.168.0.10 minion2
[root@master ~]# scp /etc/hosts 192.168.0.9:/etc/
[root@master ~]# scp /etc/hosts 192.168.0.10:/etc/
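Re-running the setup would append the same entries twice. A small sketch (the function name is invented for illustration) that adds each entry only when its hostname is not already present, shown here against a temporary file:

```shell
# Append a host entry to the given hosts file only if the hostname
# is not already present, so repeated runs stay idempotent.
add_host() {
    file=$1; ip=$2; name=$3
    grep -qw "$name" "$file" || echo "$ip $name" >> "$file"
}

hosts=$(mktemp)
add_host "$hosts" 192.168.0.8  master
add_host "$hosts" 192.168.0.8  etcd
add_host "$hosts" 192.168.0.9  minion1
add_host "$hosts" 192.168.0.10 minion2
add_host "$hosts" 192.168.0.9  minion1   # second call is a no-op
cat "$hosts"
```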

Configure etcd on the master

[root@master ~]# vim /etc/etcd/etcd.conf 
Change line 2 from: ETCD_NAME=default
to: ETCD_NAME="etcd"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
Change line 9 from: ETCD_LISTEN_CLIENT_URLS="http://localhost:2379"
to: ETCD_LISTEN_CLIENT_URLS="http://localhost:2379,http://192.168.0.8:2379"
Change line 20 from: ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
to: ETCD_ADVERTISE_CLIENT_URLS="http://192.168.0.8:2379"

The settings in /etc/etcd/etcd.conf mean the following:
ETCD_NAME="etcd"
The etcd node name. With a single-node etcd cluster this can stay commented out (the default name is "default"); the name is referenced again later.
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
The directory where etcd stores its data.
ETCD_LISTEN_CLIENT_URLS="http://localhost:2379,http://192.168.0.8:2379"
The addresses etcd listens on for client requests, normally on port 2379; 0.0.0.0 would listen on all interfaces.
ETCD_ARGS=""

Extra arguments, added as needed; all etcd options can be listed with etcd -h.
Start the service:

[root@master ~]# systemctl start etcd
[root@master ~]# systemctl status etcd
[root@master ~]# systemctl enable etcd
[root@master ~]# netstat -anptu | grep 2379
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      2963/etcd           
tcp        0      0 192.168.0.8:2379        0.0.0.0:*               LISTEN      2963/etcd           
tcp        0      0 192.168.0.8:38866       192.168.0.8:2379        ESTABLISHED 2963/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:45804         ESTABLISHED 2963/etcd           
tcp        0      0 192.168.0.8:2379        192.168.0.8:38866       ESTABLISHED 2963/etcd           
tcp        0      0 127.0.0.1:45804         127.0.0.1:2379          ESTABLISHED 2963/etcd           

Check the etcd member list; there is only one node here:

[root@master ~]# etcdctl member list
8e9e05c52164694d: name=etcd peerURLs=http://localhost:2380 clientURLs=http://192.168.0.8:2379 isLeader=true
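Freshly restarted services can take a few seconds before they accept connections, so a command like `etcdctl member list` may fail if run immediately. A generic retry helper, sketched here as an illustration (the function name is made up), waits for a command to succeed before moving on:

```shell
# Retry a command up to N times with a 1-second pause between attempts,
# returning non-zero if it never succeeds.
# Usage: wait_for <tries> <command...>
wait_for() {
    tries=$1; shift
    i=0
    while [ "$i" -lt "$tries" ]; do
        "$@" >/dev/null 2>&1 && return 0
        i=$((i + 1))
        sleep 1
    done
    return 1
}

# e.g.: wait_for 10 etcdctl member list
```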

Configure the master components on the master node.
Edit the main Kubernetes config file:

[root@master ~]# vim /etc/kubernetes/config 
Change: KUBE_MASTER="--master=http://127.0.0.1:8080"
to: KUBE_MASTER="--master=http://192.168.0.8:8080"

Edit the apiserver config file:

[root@master ~]# vim /etc/kubernetes/apiserver
Change line 8 from: KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"
to: KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
Change line 17 from: KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"
to: KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.0.8:2379"
Change line 23 from:
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
to: KUBE_ADMISSION_CONTROL="--admission-control=AlwaysAdmit"   # this must be set correctly

The kube-controller-manager config file needs no changes by default:

[root@master ~]# cat /etc/kubernetes/controller-manager 

Edit the kube-scheduler config file:

[root@master ~]# vim /etc/kubernetes/scheduler 
Change: KUBE_SCHEDULER_ARGS=""
to: KUBE_SCHEDULER_ARGS="--address=0.0.0.0"

Store the flannel network configuration in etcd

[root@master ~]# etcdctl set /k8s/network/config '{"Network":"10.255.0.0/16"}'   # stores the flanneld network configuration
[root@master ~]# etcdctl get /k8s/network/config      # verify
{"Network":"10.255.0.0/16"}
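A malformed JSON value here breaks flanneld startup on every node, so it is worth validating the string before writing it. A sketch (the helper name is invented) using python's json.tool; python3, or python 2 on stock CentOS 7, is assumed to be installed:

```shell
# Check a JSON string before storing it in etcd, so a typo is caught
# before flanneld on every node reads a broken value.
PY=$(command -v python3 || command -v python)

valid_json() {
    printf '%s' "$1" | "$PY" -m json.tool >/dev/null 2>&1
}

cfg='{"Network":"10.255.0.0/16"}'
if valid_json "$cfg"; then
    echo "config OK"
    # safe to run: etcdctl set /k8s/network/config "$cfg"
else
    echo "malformed JSON, refusing to write" >&2
fi
```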

Configure the flanneld service

[root@master ~]# vim /etc/sysconfig/flanneld 
Change line 4 from: FLANNEL_ETCD_ENDPOINTS="http://127.0.0.1:2379"
to: FLANNEL_ETCD_ENDPOINTS="http://192.168.0.8:2379"
Change line 8 from: FLANNEL_ETCD_PREFIX="/atomic.io/network"
to: FLANNEL_ETCD_PREFIX="/k8s/network"
# Note: /k8s/network must match the key set in etcd above
Change line 11 from: #FLANNEL_OPTIONS=""
to: FLANNEL_OPTIONS="--iface=eth0"   # the physical NIC used for flannel traffic
[root@master ~]# systemctl restart flanneld.service 
[root@master ~]# systemctl status flanneld.service 
[root@master ~]# cat /run/flannel/subnet.env   # view the subnet information
FLANNEL_NETWORK=10.255.0.0/16
FLANNEL_SUBNET=10.255.6.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false

A helper script then converts subnet.env into a Docker environment-variable file, /run/flannel/docker.
The docker0 address is determined by the FLANNEL_SUBNET value in /run/flannel/subnet.env:

[root@master ~]# cat /run/flannel/docker 
DOCKER_OPT_BIP="--bip=10.255.6.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=true"
DOCKER_OPT_MTU="--mtu=1472"
DOCKER_NETWORK_OPTIONS=" --bip=10.255.6.1/24 --ip-masq=true --mtu=1472"
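That conversion can be illustrated with a short sketch: source subnet.env and assemble the Docker options from its values. This is a simplified stand-in for flannel's actual mk-docker-opts.sh, including the FLANNEL_IPMASQ inversion visible in the two files above:

```shell
# Simplified sketch of how /run/flannel/subnet.env is turned into
# Docker daemon options (the real work is done by mk-docker-opts.sh).
subnet_env=$(mktemp)
cat > "$subnet_env" <<'EOF'
FLANNEL_NETWORK=10.255.0.0/16
FLANNEL_SUBNET=10.255.6.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false
EOF

. "$subnet_env"
# docker0 takes its address from FLANNEL_SUBNET; --ip-masq is the
# inverse of FLANNEL_IPMASQ, since flannel does the masquerading
# itself when FLANNEL_IPMASQ=true.
if [ "$FLANNEL_IPMASQ" = "true" ]; then ipmasq=false; else ipmasq=true; fi
opts="--bip=${FLANNEL_SUBNET} --ip-masq=${ipmasq} --mtu=${FLANNEL_MTU}"
echo "DOCKER_NETWORK_OPTIONS=\" ${opts}\""
rm -f "$subnet_env"
```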

Start the four services on the master:

[root@master ~]# systemctl restart kube-apiserver kube-controller-manager kube-scheduler flanneld
[root@master ~]# systemctl status kube-apiserver kube-controller-manager kube-scheduler flanneld
[root@master ~]# systemctl enable kube-apiserver kube-controller-manager kube-scheduler flanneld

The master and etcd node are now configured.

Configure minion1, using flannel networking

[root@minion1 ~]# vim /etc/sysconfig/flanneld 
Change line 4 from: FLANNEL_ETCD_ENDPOINTS="http://127.0.0.1:2379"
to: FLANNEL_ETCD_ENDPOINTS="http://192.168.0.8:2379"
Change line 8 from: FLANNEL_ETCD_PREFIX="/atomic.io/network"
to: FLANNEL_ETCD_PREFIX="/k8s/network"
# Note: /k8s/network must match the key set in etcd above
Change line 11 from: #FLANNEL_OPTIONS=""
to: FLANNEL_OPTIONS="--iface=eth0"   # the physical NIC used for flannel traffic

Configure the master address and kube-proxy on minion1

[root@minion1 ~]# vim /etc/kubernetes/config 
Change: KUBE_MASTER="--master=http://127.0.0.1:8080"
to: KUBE_MASTER="--master=http://192.168.0.8:8080"

kube-proxy needs no changes by default:

[root@minion1 ~]# grep -v '^#' /etc/kubernetes/proxy 
KUBE_PROXY_ARGS=""

Configure the kubelet on minion1

[root@minion1 ~]# vim /etc/kubernetes/kubelet 
Change line 5 from: KUBELET_ADDRESS="--address=127.0.0.1"
to: KUBELET_ADDRESS="--address=0.0.0.0"
# The default listens only on 127.0.0.1; it must be 0.0.0.0 so that kubectl can later connect to the kubelet remotely to inspect pods and their containers. With 127.0.0.1 the kubelet cannot be reached remotely.
Change line 11 from: KUBELET_HOSTNAME="--hostname-override=127.0.0.1"
to: KUBELET_HOSTNAME="--hostname-override=minion1"
# The minion's hostname; set it to the host's own name for easy identification.
Change line 14 from: KUBELET_API_SERVER="--api-servers=http://127.0.0.1:8080"
to: KUBELET_API_SERVER="--api-servers=http://192.168.0.8:8080"   # the apiserver address

Start the services on minion1:

[root@minion1 ~]# systemctl restart flanneld kube-proxy kubelet docker
[root@minion1 ~]# systemctl enable flanneld kube-proxy kubelet docker
[root@minion1 ~]# systemctl status flanneld kube-proxy kubelet docker
Check:
[root@minion1 ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 10.255.90.1  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 02:42:0c:ce:9d:7d  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1472
        inet 10.255.90.0  netmask 255.255.0.0  destination 10.255.90.0
        inet6 fe80::6062:c456:9093:278b  prefixlen 64  scopeid 0x20<link>
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3  bytes 144 (144.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Check kube-proxy:

[root@minion1 ~]# netstat -antup | grep proxy
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      3242/kube-proxy     
tcp        0      0 192.168.0.9:53976       192.168.0.8:8080        ESTABLISHED 3242/kube-proxy     
tcp        0      0 192.168.0.9:53974       192.168.0.8:8080        ESTABLISHED 3242/kube-proxy  

Configure minion2 using flannel, the same as minion1.
Here the config files are simply copied from minion1 to minion2:

[root@minion1 ~]# scp /etc/sysconfig/flanneld 192.168.0.10:/etc/sysconfig/
Verify the copied file:
[root@minion2 ~]# grep -v '^#' /etc/sysconfig/flanneld 

FLANNEL_ETCD_ENDPOINTS="http://192.168.0.8:2379"

FLANNEL_ETCD_PREFIX="/k8s/network"

FLANNEL_OPTIONS="--iface=eth0"
[root@minion2 ~]# systemctl start flanneld.service 
[root@minion2 ~]# ifconfig 
flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1472
        inet 10.255.50.0  netmask 255.255.0.0  destination 10.255.50.0
        inet6 fe80::435b:45bf:df6b:58c  prefixlen 64  scopeid 0x20<link>
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2  bytes 96 (96.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Configure the master address and kube-proxy on minion2.
Again, the config files are copied from minion1:

[root@minion1 ~]# scp /etc/kubernetes/config 192.168.0.10:/etc/kubernetes/
[root@minion1 ~]# scp /etc/kubernetes/proxy 192.168.0.10:/etc/kubernetes/
[root@minion2 ~]# systemctl start kube-proxy
[root@minion2 ~]# netstat -anptu | grep proxy
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      29942/kube-proxy    
tcp        0      0 192.168.0.10:59170      192.168.0.8:8080        ESTABLISHED 29942/kube-proxy    
tcp        0      0 192.168.0.10:59168      192.168.0.8:8080        ESTABLISHED 29942/kube-proxy    
tcp        0      0 192.168.0.10:59166      192.168.0.8:8080        ESTABLISHED 29942/kube-proxy    

Configure the kubelet on minion2

[root@minion1 ~]# scp /etc/kubernetes/kubelet 192.168.0.10:/etc/kubernetes/
root@192.168.0.10 password: 
kubelet                                                                                               100%  613     1.6MB/s   00:00  
[root@minion2 ~]# vim /etc/kubernetes/kubelet 
Change: KUBELET_HOSTNAME="--hostname-override=minion1"
to: KUBELET_HOSTNAME="--hostname-override=minion2"
[root@minion2 ~]# systemctl start kubelet
[root@minion2 ~]# netstat -antup | grep 8080    # verify: connections to the apiserver are established
tcp        0      0 192.168.0.10:59216      192.168.0.8:8080        TIME_WAIT   -                   
tcp        0      0 192.168.0.10:59168      192.168.0.8:8080        ESTABLISHED 29942/kube-proxy    
tcp        0      0 192.168.0.10:59212      192.168.0.8:8080        ESTABLISHED 32339/kubelet       
tcp        0      0 192.168.0.10:59214      192.168.0.8:8080        ESTABLISHED 32339/kubelet       
tcp        0      0 192.168.0.10:59218      192.168.0.8:8080        ESTABLISHED 32339/kubelet       
tcp        0      0 192.168.0.10:59166      192.168.0.8:8080        ESTABLISHED 29942/kube-proxy    
tcp        0      0 192.168.0.10:59210      192.168.0.8:8080        ESTABLISHED 32339/kubelet       
[root@minion2 ~]# systemctl restart flanneld kube-proxy kubelet docker
[root@minion2 ~]# systemctl enable flanneld kube-proxy kubelet docker
[root@minion2 ~]# systemctl status flanneld kube-proxy kubelet docker

Running ifconfig shows that the docker0 IP on minion1 differs from the one on minion2.

On the master, check the cluster status:

[root@master ~]# kubectl get nodes
NAME      STATUS    AGE
minion1   Ready     26m
minion2   Ready     7m
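When scripting against the cluster, the Ready count can be extracted from this output. A sketch using awk over a captured sample of the output above; in practice you would pipe `kubectl get nodes` directly into the awk command:

```shell
# Count nodes whose STATUS column is exactly "Ready", skipping the
# header line. With a live cluster: kubectl get nodes | awk '...'
nodes_output='NAME      STATUS    AGE
minion1   Ready     26m
minion2   Ready     7m'

ready=$(printf '%s\n' "$nodes_output" | awk 'NR > 1 && $2 == "Ready" { n++ } END { print n+0 }')
echo "Ready nodes: $ready"
```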

The cluster setup is now complete.

Summary: the services each Kubernetes node runs and the ports it opens.
In this setup, 13 services run across the 4 roles, using 6 ports in total.
Details:
etcd: 1 service, client traffic on port 2379
Start the service:
[root@master(etcd) ~]# systemctl restart etcd
master: 4 services, API traffic on port 8080
[root@master(etcd) ~]# systemctl restart kube-apiserver kube-controller-manager kube-scheduler flanneld
minion1: 4 services
kube-proxy listens on port 10249; the kubelet listens on ports 10248, 10250, and 10255
[root@minion1 ~]# systemctl restart flanneld kube-proxy kubelet docker
minion2: 4 services
[root@minion2 ~]# systemctl restart flanneld kube-proxy kubelet docker