
Kubernetes binary cluster deployment (three masters, three workers)


1. About Kubernetes

1.1. Kubernetes overview

• Kubernetes is a container cluster management system open-sourced by Google in 2014, commonly abbreviated as K8S.

• K8S is used to deploy, scale, and manage containerized applications.

• K8S provides container orchestration, resource scheduling, elastic scaling, deployment management, service discovery, and more.

• The goal of Kubernetes is to make deploying containerized applications simple and efficient.

1.2. Kubernetes features

  • Self-healing

Restarts failed containers when a node fails, replaces and reschedules them to maintain the expected replica count; kills containers that fail health checks and holds back client traffic until they are ready, keeping the online service uninterrupted.

  • Elastic scaling

Scales application instances up or down quickly via commands, the UI, or automatically based on CPU usage, keeping the application highly available under peak load and reclaiming resources during quiet periods to run the service at minimal cost (see the kubectl sketch after this list).

  • Automated rollout and rollback

K8S updates applications with a rolling-update strategy, replacing Pods one at a time instead of deleting them all at once; if a problem appears during the update, the change is rolled back so the upgrade does not affect the business.

  • Service discovery and load balancing

K8S gives a group of containers a single access point (an internal IP address and a DNS name) and load-balances across the associated containers, so users never have to deal with individual container IPs.

  • Secret and configuration management

Manages secrets and application configuration without baking sensitive data into images, improving security; common configuration can also be stored in K8S for applications to consume.

  • Storage orchestration

Mounts external storage systems, whether local storage, public cloud (such as AWS), or network storage (such as NFS, GlusterFS, Ceph), as part of the cluster's resources, greatly increasing storage flexibility.

  • Batch processing

Provides one-off and scheduled jobs to cover batch data processing and analytics scenarios.
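
As a quick, hedged illustration of the scaling, rollback, and self-healing features above, the following kubectl commands show the typical workflow; the deployment name web and the image tags are hypothetical and only for demonstration.

# create a deployment with 3 replicas (names and images here are examples only)
kubectl create deployment web --image=nginx:1.19
kubectl scale deployment web --replicas=3
# rolling update to a new image, watch it, and roll back if needed
kubectl set image deployment/web nginx=nginx:1.20
kubectl rollout status deployment/web
kubectl rollout undo deployment/web
# self-healing: delete one Pod and watch the controller recreate it
kubectl delete pod -l app=web
kubectl get pods -l app=web -w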

1.3. Kubernetes components

Master components

  • kube-apiserver

The Kubernetes API server, the unified entry point of the cluster and the coordinator of all components. It exposes a RESTful API; every create, read, update, delete, and watch on object resources goes through the APIServer, which then persists the data to etcd.

  • kube-controller-manager

Handles the routine background tasks of the cluster. Each resource type has its own controller, and the ControllerManager is responsible for managing these controllers.

  • kube-scheduler

Selects a Node for newly created Pods according to the scheduling algorithm. It can be deployed anywhere: on the same node as other components or on separate nodes.

  • etcd

A distributed key-value store used to hold the cluster state, such as Pod and Service object information.

Node components

  • kubelet

The kubelet is the Master's agent on each Node. It manages the lifecycle of containers running on that machine: creating containers, mounting volumes for Pods, downloading secrets, reporting container and node status, and so on. The kubelet turns each Pod into a set of containers.

  • kube-proxy

Implements the Pod network proxy on each Node, maintaining network rules and layer-4 load balancing.

  • docker or rocket

The container engine, which runs the containers (see the status sketch after this list).
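
On a binary installation like the one built in chapter 2, these components end up as ordinary systemd services on the masters and workers. A quick, hedged way to see them at a glance, assuming the unit names used later in this guide:

# on a master node: control-plane services
systemctl status kube-apiserver kube-controller-manager kube-scheduler etcd --no-pager
# on a worker node: node services
systemctl status kubelet kube-proxy docker --no-pager
# from any machine with kubectl configured: component and node health
kubectl get componentstatuses
kubectl get nodes -o wide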

1.4. Kubernetes core concepts

  • Pod
• The smallest deployable unit
• A group of one or more containers
• Containers in a Pod share the network namespace
• Pods are ephemeral
  • Controllers (higher-level objects that deploy and manage Pods; see the manifest sketch after this list)
• ReplicaSet: ensures the expected number of Pod replicas
• Deployment: stateless application deployment
• StatefulSet: stateful application deployment
• DaemonSet: ensures every Node runs a copy of a Pod
• Job: one-off tasks
• CronJob: scheduled tasks
  • Service
• Keeps track of Pods so they are not "lost" as they come and go
• Defines an access policy for a group of Pods
  • Label
A label is attached to a resource and used to associate, query, and filter objects.
  • Namespace
Namespaces partition cluster resources into logically isolated groups, for example to separate environments or teams.
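
To make the relationship between these objects concrete, here is a minimal, hypothetical manifest sketch: a Deployment manages Pods selected by a label, and a Service exposes those same Pods by selecting the same label. All names here are illustrative and not part of the deployment in chapter 2.

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-web            # illustrative name only
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-web         # the label that ties Deployment, Pods, and Service together
  template:
    metadata:
      labels:
        app: demo-web
    spec:
      containers:
      - name: web
        image: nginx:1.19
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: demo-web
spec:
  selector:
    app: demo-web           # selects the Pods created above
  ports:
  - port: 80
    targetPort: 80
EOF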

1.5. Ways to deploy Kubernetes

  • minikube

Minikube is a tool that quickly runs a single-node Kubernetes locally; it is intended for users who just want to try Kubernetes or use it for day-to-day development.

Deployment guide: https://kubernetes.io/zh/docs/tasks/tools/

  • kubeadm

Kubeadm is also a tool; it provides kubeadm init and kubeadm join for quickly deploying a Kubernetes cluster.

Deployment guide: https://kubernetes.io/zh/docs/reference/setup-tools/kubeadm/

  • Binary packages

Recommended: download the release binaries from the official site and deploy each component by hand to assemble the Kubernetes cluster.

Download: https://github.com/kubernetes/kubernetes/releases

Chapter 2 walks through installing a cluster from the binary packages.

2. Kubernetes binary cluster deployment

2.1. Server planning

Role IP Notes (components)
k8s-master1 172.30.3.20 kube-apiserver,kube-controller-manager,kube-scheduler,etcd
k8s-master2 172.30.3.30 kube-apiserver,kube-controller-manager,kube-scheduler,etcd
k8s-master3 172.30.3.40 kube-apiserver,kube-controller-manager,kube-scheduler,etcd
k8s-worker1 172.30.3.21 kubelet,kube-proxy,docker
k8s-worker2 172.30.3.31 kubelet,kube-proxy,docker
k8s-worker3 172.30.3.41 kubelet,kube-proxy,docker
SLB 172.30.3.1 proxies kube-apiserver

2.2. Kubernetes architecture diagram

[Figure: Kubernetes cluster architecture diagram]

2.3. Operating system initialization (perform on all nodes)

#1. Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
#2. Disable SELinux
sed -ri 's/^SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config #permanent
setenforce 0  # temporary
#3. Disable swap
swapoff -a  # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab    # permanent
sed -ri '/.*swap.*/d' /etc/fstab    #or simply delete the swap lines
free -m #verify that swap now shows 0
#4. Set the hostname according to the plan and add it to hosts
hostnamectl set-hostname <hostname>
echo "$(ip add show | grep -E "BROADCAST|global"| awk '{print $2}'|grep -v "ens*"|awk -F/ '{print $1}')          $(hostname -s)" >> /etc/hosts
#5. Add hosts entries on all master nodes
cat >> /etc/hosts << EOF
172.30.3.20          k8s-master1
172.30.3.30          k8s-master2
172.30.3.40          k8s-master3
172.30.3.21          k8s-worker1
172.30.3.31          k8s-worker2
172.30.3.41          k8s-worker3
EOF
#6. Tune kernel parameters (in particular, pass bridged IPv4 traffic to the iptables chains and NAT)
cat >> /etc/sysctl.d/99-k8s.conf << EOF
#sysctls for k8s node config
net.ipv4.tcp_slow_start_after_idle=0
net.core.rmem_max=16777216
fs.inotify.max_user_watches=524288
kernel.softlockup_all_cpu_backtrace=1
kernel.softlockup_panic=1
fs.file-max=2097152
fs.inotify.max_user_instances=8192
fs.inotify.max_queued_events=16384
vm.max_map_count=262144
fs.may_detach_mounts=1
net.core.netdev_max_backlog=16384
net.ipv4.tcp_wmem=4096 12582912 16777216
net.core.wmem_max=16777216
net.core.somaxconn=32768
net.ipv4.ip_forward=1
net.ipv4.tcp_max_syn_backlog=8096
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables=1
net.ipv4.tcp_rmem=4096 12582912 16777216
EOF
sysctl --system #apply the settings
#7. Time synchronization
yum install ntpdate -y
ntpdate ntp1.aliyun.com

2.4. The cfssl certificate generation tool

 #1. Download the cfssl tools on any one of the master hosts
 wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
 wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
 wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
 chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
 mv cfssl_linux-amd64 /usr/local/bin/cfssl
 mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
 mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

2.4.1. Generate the etcd certificates

 #1. Create the certificate directories
 mkdir -p ~/TLS/{etcd,k8s}
 cd ~/TLS/etcd
 #2. Self-signed certificate authority (CA)
 cat > ca-config.json << EOF
 {
  "signing": {
  "default": {
  "expiry": "438000h",
  "usages": ["signing", "key encipherment", "server auth", "client auth"]
  }
  }
 }
 EOF
 ​
 cat > ca-csr.json << EOF
 {
  "CN": "etcd CA",
  "key": {
  "algo": "rsa",
  "size": 2048
  },
  "names": [
  {
  "C": "CN",
  "L": "Tianjin",
  "ST": "Tianjin"
  }
  ]
 }
 EOF
 #3. Generate the CA certificate
 cfssl gencert -initca ca-csr.json | cfssljson -bare ca
 #4. Use the self-signed CA to issue the etcd HTTPS certificate
 #Create the certificate signing request file:
 cat > server-csr.json << EOF
 {
  "CN": "etcd",
  "hosts": [
  "172.30.3.20",
  "172.30.3.30",
  "172.30.3.40"
  ],
  "key": {
  "algo": "rsa",
  "size": 2048
  },
  "names": [
  {
  "C": "CN",
  "L": "Tianjin",
  "ST": "Tianjin"
  }
  ]
 }
 EOF
 #Note: the IPs in the hosts field must include every etcd node's internal communication IP; none may be missing. To make future scale-out easier you can add a few spare IPs.
 #5. Generate the etcd HTTPS certificate
 cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json server-csr.json | cfssljson -bare server

2.4.2. Generate the kube-apiserver certificates

 #1. Self-signed certificate authority (CA)
 cd ~/TLS/k8s/
 cat > ca-config.json << EOF
 {
  "signing": {
  "default": {
  "expiry": "438000h"
  },
  "profiles": {
  "kubernetes": {
  "expiry": "438000h",
  "usages": [
  "signing",
  "key encipherment",
  "server auth",
  "client auth"
  ]
  }
  }
  }
 }
 EOF
 cat > ca-csr.json << EOF
 {
  "CN": "kubernetes",
  "key": {
  "algo": "rsa",
  "size": 2048
  },
  "names": [
  {
  "C": "CN",
  "L": "Tianjin",
  "ST": "Tianjin",
  "O": "k8s",
  "OU": "System"
  }
  ]
 }
 EOF
 #2. Generate the CA certificate
 cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
 #3. Use the self-signed CA to issue the kube-apiserver HTTPS certificate
 cat > server-csr.json << EOF
 {
  "CN": "kubernetes",
  "hosts": [
  "10.0.0.1",
  "127.0.0.1",
  "172.30.3.1",
  "172.30.3.20",
  "172.30.3.30",
  "172.30.3.40",
  "kubernetes",
  "kubernetes.default",
  "kubernetes.default.svc",
  "kubernetes.default.svc.cluster",
  "kubernetes.default.svc.cluster.local"
  ],
  "key": {
  "algo": "rsa",
  "size": 2048
  },
  "names": [
  {
  "C": "CN",
  "L": "Tianjin",
  "ST": "Tianjin",
  "O": "k8s",
  "OU": "System"
  }
  ]
 }
 EOF
 #Note: the IPs in the hosts field above must include every Master/LB IP; none may be missing. To make future scale-out easier you can add a few spare IPs.
 #4. Generate the certificate
 cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
 #5. Two ways to inspect the certificate
 cfssl-certinfo -cert server.pem
 #or
 openssl x509  -noout -text -in server.pem
 #The k8s certificates above are not needed yet; keep them for kube-apiserver in section 2.7.

2.4.3. Generate the kube-proxy certificate

 #1. Switch to the working directory
 cd ~/TLS/k8s

 #2. Create the certificate signing request file
 cat > kube-proxy-csr.json << EOF
 {
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
  "algo": "rsa",
  "size": 2048
  },
  "names": [
  {
  "C": "CN",
  "L": "Tianjin",
  "ST": "Tianjin",
  "O": "k8s",
  "OU": "System"
  }
  ]
 }
 EOF
 #3. Generate the certificate
 cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
 #These certificates are used in section 2.8 when deploying kube-proxy.

2.5. Deploy Docker on all nodes

 #1. Download:
 wget https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz
 #2. Unpack the Docker binaries
 tar zxvf docker-19.03.9.tgz
 cp -r docker/* /usr/bin/
 #3. Create the docker.service unit
 cat > /usr/lib/systemd/system/docker.service << EOF
 [Unit]
 Description=Docker Application Container Engine
 Documentation=https://docs.docker.com
 After=network-online.target firewalld.service
 Wants=network-online.target
 [Service]
 Type=notify
 ExecStart=/usr/bin/dockerd
 ExecReload=/bin/kill -s HUP \$MAINPID
 LimitNOFILE=infinity
 LimitNPROC=infinity
 LimitCORE=infinity
 TimeoutStartSec=0
 Delegate=yes
 KillMode=process
 Restart=on-failure
 StartLimitBurst=3
 StartLimitInterval=60s
 [Install]
 WantedBy=multi-user.target
 EOF
 ​
 #4. Create the configuration file
 mkdir /etc/docker
 cat > /etc/docker/daemon.json << EOF
 {
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"]
 }
 EOF
 #Uses the ustc.edu.cn registry mirror
 #5. Start Docker and enable it at boot
 systemctl daemon-reload
 systemctl start docker
 systemctl enable docker
 #6. Run the following on all other nodes
 scp -r 172.30.3.20:~/k8s/docker/* /usr/bin/
 scp -r 172.30.3.20:/usr/lib/systemd/system/docker.service /usr/lib/systemd/system/
 mkdir /etc/docker
 scp 172.30.3.20:/etc/docker/daemon.json /etc/docker/
 systemctl daemon-reload
 systemctl start docker
 systemctl enable docker
 #7. Verify Docker
 systemctl status docker

2.6. Deploy etcd on the master nodes

Perform the following on node 1, then copy all files generated on node 1 to node 2 and node 3.

Reference for building an etcd 3.4.9 cluster: https://www.centoscn.vip/3408.html#login

 #1. Download etcd-v3.4.9-linux-amd64.tar.gz
 wget https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz
 #2. Create the working directory and unpack the binaries
 mkdir /opt/etcd/{bin,cfg,pki} -p
 tar zxvf etcd-v3.4.9-linux-amd64.tar.gz
 mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
 #3. Create the etcd configuration file
 cat > /opt/etcd/cfg/etcd.conf << EOF
 #[Member]
 ETCD_NAME="etcd-1"
 ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
 ETCD_LISTEN_PEER_URLS="https://172.30.3.20:2380"
 ETCD_LISTEN_CLIENT_URLS="https://172.30.3.20:2379"
 #[Clustering]
 ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.30.3.20:2380"
 ETCD_ADVERTISE_CLIENT_URLS="https://172.30.3.20:2379"
 ETCD_INITIAL_CLUSTER="etcd-1=https://172.30.3.20:2380,etcd-2=https://172.30.3.30:2380,etcd-3=https://172.30.3.40:2380"
 ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
 ETCD_INITIAL_CLUSTER_STATE="new"
 EOF
 #4. Copy the etcd certificates into /opt/etcd/pki/
 cp ~/TLS/etcd/ca*pem ~/TLS/etcd/server*pem /opt/etcd/pki/
 #5. Create the etcd systemd service
 cat > /usr/lib/systemd/system/etcd.service << EOF
 [Unit]
 Description=Etcd Server
 After=network.target
 After=network-online.target
 Wants=network-online.target
 ​
 [Service]
 Type=notify
 EnvironmentFile=/opt/etcd/cfg/etcd.conf
 ExecStart=/opt/etcd/bin/etcd \
 --cert-file=/opt/etcd/pki/server.pem \
 --key-file=/opt/etcd/pki/server-key.pem \
 --peer-cert-file=/opt/etcd/pki/server.pem \
 --peer-key-file=/opt/etcd/pki/server-key.pem \
 --trusted-ca-file=/opt/etcd/pki/ca.pem \
 --peer-trusted-ca-file=/opt/etcd/pki/ca.pem \
 --logger=zap
 Restart=on-failure
 LimitNOFILE=65536
 ​
 [Install]
 WantedBy=multi-user.target
 EOF
 #6. Copy the etcd files from master1 to master2 and master3 by running the following commands on master2 and master3
 scp -r 172.30.3.20:/opt/etcd /opt/
 scp 172.30.3.20:/usr/lib/systemd/system/etcd.service /usr/lib/systemd/system/
 #7. Edit /opt/etcd/cfg/etcd.conf on master2 and master3 (run the first sed on master2 and the second on master3)
 sed -i '1,2s/etcd-1/etcd-2/g;4,8s/172.30.3.20/172.30.3.30/g' /opt/etcd/cfg/etcd.conf
 sed -i '1,2s/etcd-1/etcd-3/g;4,8s/172.30.3.20/172.30.3.40/g' /opt/etcd/cfg/etcd.conf
 #8. Start etcd on all etcd nodes and enable it at boot
 systemctl daemon-reload
 systemctl start etcd
 #Note: after running start on the first node it hangs and does not return to the prompt because it is waiting for the other members; simply continue starting the remaining nodes
 systemctl enable etcd
 #9. Verify that etcd started successfully
 systemctl status etcd|grep Active
 #Seeing the following on every node means it succeeded
  Active: active (running) since 四 2020-12-24 23:17:46 CST; 29s ago
 #10. Verify the etcd cluster by running the following command on any node
 ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/pki/ca.pem --cert=/opt/etcd/pki/server.pem --key=/opt/etcd/pki/server-key.pem --endpoints="https://172.30.3.20:2379,https://172.30.3.30:2379,https://172.30.3.40:2379" endpoint health
 #The following result means the cluster is healthy
 https://172.30.3.20:2379 is healthy: successfully committed proposal: took = 29.121183ms
 https://172.30.3.40:2379 is healthy: successfully committed proposal: took = 29.418321ms
 https://172.30.3.30:2379 is healthy: successfully committed proposal: took = 31.590166ms
 #If there is a problem, check the logs with tail /var/log/messages or journalctl -u etcd

[Figure: etcd endpoint health check output]

etcd configuration options explained

  • ETCD_NAME: node name, unique within the cluster
  • ETCD_DATA_DIR: data directory
  • ETCD_LISTEN_PEER_URLS: listen address for cluster (peer) traffic
  • ETCD_LISTEN_CLIENT_URLS: listen address for client traffic
  • ETCD_INITIAL_ADVERTISE_PEER_URLS: advertised peer address
  • ETCD_ADVERTISE_CLIENT_URLS: advertised client address
  • ETCD_INITIAL_CLUSTER: addresses of the cluster members
  • ETCD_INITIAL_CLUSTER_TOKEN: cluster token
  • ETCD_INITIAL_CLUSTER_STATE: state of the cluster being joined; new for a new cluster, existing to join an existing one (a verification sketch follows this list)
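
Beyond the endpoint health check above, etcdctl can also show the membership and which member currently leads. A hedged sketch using the same certificates and endpoints as section 2.6:

# list the cluster members
ETCDCTL_API=3 /opt/etcd/bin/etcdctl \
  --cacert=/opt/etcd/pki/ca.pem --cert=/opt/etcd/pki/server.pem --key=/opt/etcd/pki/server-key.pem \
  --endpoints="https://172.30.3.20:2379,https://172.30.3.30:2379,https://172.30.3.40:2379" \
  member list --write-out=table
# per-endpoint status, including which member is currently the raft leader
ETCDCTL_API=3 /opt/etcd/bin/etcdctl \
  --cacert=/opt/etcd/pki/ca.pem --cert=/opt/etcd/pki/server.pem --key=/opt/etcd/pki/server-key.pem \
  --endpoints="https://172.30.3.20:2379,https://172.30.3.30:2379,https://172.30.3.40:2379" \
  endpoint status --write-out=table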

2.7. Deploy Kubernetes on the master nodes

This deployment uses Kubernetes v1.18.13.

Download: https://storage.googleapis.com/kubernetes-release/release/v1.18.13/kubernetes-server-linux-amd64.tar.gz

Only the server package needs to be downloaded; it contains the binaries for both the Master and the worker nodes.

 mkdir -p /opt/kubernetes/{bin,cfg,pki,logs} 
 tar zxvf kubernetes-server-linux-amd64.tar.gz
 cd kubernetes/server/bin
 cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin
 cp kubectl /usr/bin
 #Run the following commands on master2 and master3
 scp -r 172.30.3.20:/opt/kubernetes /opt/
 scp 172.30.3.20:/usr/bin/kubectl /usr/bin/

2.7.1. Deploy kube-apiserver

 #1. Copy the kube-apiserver certificates generated in section 2.4.2 into place
 cp ~/TLS/k8s/ca*pem ~/TLS/k8s/server*pem /opt/kubernetes/pki/
 #2. Configure the token file (enables the TLS Bootstrapping mechanism)
 #2.1. Generate a token
 head -c 16 /dev/urandom | od -An -t x | tr -d ' '
 86dea681fd5060a83619352673a21e66
 #2.2. Create the token file
 cat > /opt/kubernetes/cfg/token.csv << EOF
 86dea681fd5060a83619352673a21e66,kubelet-bootstrap,10001,"system:node-bootstrapper"
 EOF
 #Format: token,username,UID,user group
 #3. Create the kube-apiserver configuration file
 cat > /opt/kubernetes/cfg/kube-apiserver.conf << EOF
 KUBE_APISERVER_OPTS="--logtostderr=false \\
 --v=2 \\
 --log-dir=/opt/kubernetes/logs \\
 --etcd-servers=https://172.30.3.20:2379,https://172.30.3.30:2379,https://172.30.3.40:2379 \\
 --bind-address=172.30.3.20 \\
 --secure-port=6443 \\
 --advertise-address=172.30.3.1 \\
 --allow-privileged=true \\
 --service-cluster-ip-range=10.0.0.0/24 \\
 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
 --authorization-mode=RBAC,Node \\
 --enable-bootstrap-token-auth=true \\
 --token-auth-file=/opt/kubernetes/cfg/token.csv \\
 --service-node-port-range=30000-32767 \\
 --kubelet-client-certificate=/opt/kubernetes/pki/server.pem \\
 --kubelet-client-key=/opt/kubernetes/pki/server-key.pem \\
 --tls-cert-file=/opt/kubernetes/pki/server.pem  \\
 --tls-private-key-file=/opt/kubernetes/pki/server-key.pem \\
 --client-ca-file=/opt/kubernetes/pki/ca.pem \\
 --service-account-key-file=/opt/kubernetes/pki/ca-key.pem \\
 --etcd-cafile=/opt/etcd/pki/ca.pem \\
 --etcd-certfile=/opt/etcd/pki/server.pem \\
 --etcd-keyfile=/opt/etcd/pki/server-key.pem \\
 --audit-log-maxage=30 \\
 --audit-log-maxbackup=3 \\
 --audit-log-maxsize=100 \\
 --audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
 EOF
 #Note: in the double backslashes above, the first is an escape character and the second the line-continuation character; the escape is needed so that the EOF heredoc preserves the continuation backslash in the written file.
 #4. Manage the apiserver with systemd
 cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
 [Unit]
 Description=Kubernetes API Server
 Documentation=https://github.com/kubernetes/kubernetes
 ​
 [Service]
 EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
 ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
 Restart=on-failure
 ​
 [Install]
 WantedBy=multi-user.target
 EOF
 #5. Start kube-apiserver and enable it at boot
 systemctl daemon-reload
 systemctl start kube-apiserver
 systemctl enable kube-apiserver
 #6. Authorize the kubelet-bootstrap user to request certificates
 kubectl create clusterrolebinding kubelet-bootstrap \
 --clusterrole=system:node-bootstrapper \
 --user=kubelet-bootstrap
 #7. Copy the files from master1 to master2 and master3. The token stays the same; in each copied config change --bind-address to the local node's IP, while --advertise-address remains the SLB IP (the load balancer is already configured).
 #Run the following on master2 and master3 (first sed on master2, second on master3)
 scp -r 172.30.3.20:/opt/kubernetes /opt/
 sed -i '5,/bind-address/s/172.30.3.20/172.30.3.30/g' /opt/kubernetes/cfg/kube-apiserver.conf
 sed -i '5,/bind-address/s/172.30.3.20/172.30.3.40/g' /opt/kubernetes/cfg/kube-apiserver.conf
 #Change --bind-address to the local node's IP
 scp -r 172.30.3.20:/usr/lib/systemd/system/kube-apiserver.service /usr/lib/systemd/system/
 systemctl daemon-reload
 systemctl start kube-apiserver
 systemctl enable kube-apiserver
 #8. Verify by opening the following URLs in a browser
 https://172.30.3.1:6443/version
 https://172.30.3.20:6443/version
 https://172.30.3.30:6443/version
 https://172.30.3.40:6443/version #each URL returning the Kubernetes version information means everything is working
 #or run on a master node
 curl -k https://172.30.3.1:6443/version
 #or run the following on each master node; the etcd members reporting healthy means everything is working
 kubectl get cs

Enabling the TLS Bootstrapping mechanism

TLS Bootstrapping: once the Master apiserver has TLS authentication enabled, the kubelet and kube-proxy on each Node must present valid CA-signed certificates to talk to kube-apiserver. When there are many Nodes, issuing these client certificates by hand is a lot of work and makes scaling the cluster more complex. To simplify this, Kubernetes introduced TLS bootstrapping to issue client certificates automatically: the kubelet authenticates as a low-privilege user and requests a certificate from the apiserver, which signs the kubelet's certificate dynamically. This approach is strongly recommended on the Nodes; it is currently used mainly for the kubelet, while kube-proxy still uses a certificate that we issue centrally.
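
A quick, hedged way to check that the pieces of the bootstrap chain line up (token file, bootstrap kubeconfig, and pending certificate requests), using the paths from this guide:

# the token handed out in bootstrap.kubeconfig must match token.csv on the masters
cat /opt/kubernetes/cfg/token.csv
grep token /opt/kubernetes/cfg/bootstrap.kubeconfig   # on a worker, after section 2.8.1
# the bootstrap user must be allowed to create CSRs (binding created in 2.7.1 step 6)
kubectl get clusterrolebinding kubelet-bootstrap -o wide
# once the kubelets start, their requests show up here as Pending until approved
kubectl get csr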

kube-apiserver configuration flags explained (a quick health-check sketch follows this list):

  • --logtostderr: enable logging to stderr
  • --v: log verbosity level
  • --log-dir: log directory
  • --etcd-servers: etcd cluster addresses
  • --bind-address: listen address
  • --secure-port: https secure port
  • --advertise-address: address advertised to the cluster
  • --allow-privileged: allow privileged containers
  • --service-cluster-ip-range: Service virtual IP address range
  • --enable-admission-plugins: admission control plugins
  • --authorization-mode: authorization modes; enables RBAC authorization and node self-management
  • --enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
  • --token-auth-file: bootstrap token file
  • --service-node-port-range: default port range for NodePort Services
  • --kubelet-client-xxx: client certificates the apiserver uses to reach the kubelets
  • --tls-xxx-file: apiserver https certificates
  • --etcd-xxxfile: certificates for connecting to the etcd cluster
  • --audit-log-xxx: audit log settings
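
With those flags in place, a hedged way to confirm that every apiserver instance answers on its secure port; /healthz is readable without credentials on default RBAC, so a simple loop over the node and SLB addresses is enough:

# each master should answer on its own IP as well as through the SLB
for ep in 172.30.3.20 172.30.3.30 172.30.3.40 172.30.3.1; do
  echo -n "$ep: "
  curl -sk "https://$ep:6443/healthz"
  echo
done
# expected output: "<ip>: ok" for every endpoint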

2.7.2. Deploy kube-controller-manager

 #1. Create the configuration file
 cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
 KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
 --v=2 \\
 --log-dir=/opt/kubernetes/logs \\
 --leader-elect=true \\
 --master=127.0.0.1:8080 \\
 --bind-address=127.0.0.1 \\
 --allocate-node-cidrs=true \\
 --cluster-cidr=10.244.0.0/16 \\
 --service-cluster-ip-range=10.0.0.0/24 \\
 --cluster-signing-cert-file=/opt/kubernetes/pki/ca.pem \\
 --cluster-signing-key-file=/opt/kubernetes/pki/ca-key.pem  \\
 --root-ca-file=/opt/kubernetes/pki/ca.pem \\
 --service-account-private-key-file=/opt/kubernetes/pki/ca-key.pem \\
 --experimental-cluster-signing-duration=438000h0m0s"
 EOF
 #2. Manage controller-manager with systemd
 cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
 [Unit]
 Description=Kubernetes Controller Manager
 Documentation=https://github.com/kubernetes/kubernetes
 [Service]
 EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
 ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
 Restart=on-failure
 [Install]
 WantedBy=multi-user.target
 EOF
 #3. Start kube-controller-manager and enable it at boot
 systemctl daemon-reload
 systemctl start kube-controller-manager
 systemctl enable kube-controller-manager
 #4. Run the following commands on master2 and master3
 scp 172.30.3.20:/opt/kubernetes/cfg/kube-controller-manager.conf  /opt/kubernetes/cfg/
 scp 172.30.3.20:/usr/lib/systemd/system/kube-controller-manager.service /usr/lib/systemd/system/
 systemctl daemon-reload
 systemctl start kube-controller-manager
 systemctl enable kube-controller-manager
 #5. Run the following on all master nodes; controller-manager reporting Healthy means it is working
 kubectl get cs

  • --master: connect to the apiserver over the local insecure port 8080.
  • --leader-elect: enable automatic leader election (HA) when multiple instances of this component run (a leader-election check sketch follows this list).
  • --cluster-signing-cert-file / --cluster-signing-key-file: the CA used to automatically issue kubelet certificates; must match the CA used by the apiserver.
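
Since --leader-elect is enabled on all three masters, only one controller-manager (and one scheduler) is active at a time. A hedged way to see which instance currently holds the lock; on this Kubernetes version the election record is usually stored as an annotation on an Endpoints object in kube-system, so the check below is an assumption rather than a guarantee:

# holderIdentity in the leader-election annotation names the current leader
kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep holderIdentity
kubectl -n kube-system get endpoints kube-scheduler -o yaml | grep holderIdentity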

2.7.3. Deploy kube-scheduler

 #1. Create the configuration file
 cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF
 KUBE_SCHEDULER_OPTS="--logtostderr=false \
 --v=2 \
 --log-dir=/opt/kubernetes/logs \
 --leader-elect \
 --master=127.0.0.1:8080 \
 --bind-address=127.0.0.1"
 EOF
 #--master: connect to the apiserver over the local insecure port 8080
 #--leader-elect: enable automatic leader election when running multiple instances
 #2. Manage the scheduler with systemd
 cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
 [Unit]
 Description=Kubernetes Scheduler
 Documentation=https://github.com/kubernetes/kubernetes
 [Service]
 EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
 ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
 Restart=on-failure
 [Install]
 WantedBy=multi-user.target
 EOF
 #3. Start kube-scheduler and enable it at boot
 systemctl daemon-reload
 systemctl start kube-scheduler
 systemctl enable kube-scheduler
 #4. Run the following commands on master2 and master3
 scp 172.30.3.20:/opt/kubernetes/cfg/kube-scheduler.conf /opt/kubernetes/cfg/
 scp 172.30.3.20:/usr/lib/systemd/system/kube-scheduler.service /usr/lib/systemd/system/
 systemctl daemon-reload
 systemctl start kube-scheduler
 systemctl enable kube-scheduler
 #5. Run the following on all master nodes; kube-scheduler reporting Healthy means it is working
 kubectl get cs

2.7.4. Verify the master cluster

Run the following command on all master nodes:

kubectl get cs

If controller-manager, scheduler, and etcd-{0,1,2} all show a Healthy status, the master cluster is complete and working, as shown in the figure below.

[Figure: kubectl get cs output with all components Healthy]

2.8. Deploy Kubernetes on the worker nodes

  #1. Create the directories on the worker1 node
 mkdir -p /opt/kubernetes/{bin,cfg,pki,logs}
 #Copy the binaries from the master1 node
 scp 172.30.3.20:~/k8s/kubernetes/server/bin/kubelet /opt/kubernetes/bin/
 scp 172.30.3.20:~/k8s/kubernetes/server/bin/kube-proxy /opt/kubernetes/bin/
 #2. Copy the kubernetes directory to the worker2 and worker3 nodes (run this on worker2/worker3)
 scp -r 172.30.3.21:/opt/kubernetes /opt/

2.8.1. Deploy the kubelet

  #1. Create the configuration file
 ###Change --hostname-override=k8s-worker1 to the worker node's hostname
 cat > /opt/kubernetes/cfg/kubelet.conf << EOF
 KUBELET_OPTS="--logtostderr=false \\
 --v=2 \\
 --log-dir=/opt/kubernetes/logs \\
 --hostname-override=k8s-worker1 \\
 --network-plugin=cni \\
 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
 --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
 --config=/opt/kubernetes/cfg/kubelet-config.yml \\
 --cert-dir=/opt/kubernetes/pki \\
 --pod-infra-container-image=mirrorgooglecontainers/pause-amd64:3.0"
 EOF
 #2. Create the kubelet parameter file
 cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
 kind: KubeletConfiguration
 apiVersion: kubelet.config.k8s.io/v1beta1
 address: 0.0.0.0
 port: 10250
 readOnlyPort: 10255
 cgroupDriver: cgroupfs
 clusterDNS:
 - 10.0.0.2
 clusterDomain: cluster.local 
 failSwapOn: false
 authentication:
  anonymous:
  enabled: false
  webhook:
  cacheTTL: 2m0s
  enabled: true
  x509:
  clientCAFile: /opt/kubernetes/pki/ca.pem 
 authorization:
  mode: Webhook
  webhook:
  cacheAuthorizedTTL: 5m0s
  cacheUnauthorizedTTL: 30s
 evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
 maxOpenFiles: 1000000
 maxPods: 110
 EOF
 #3. Generate the bootstrap.kubeconfig file
 #First create the kubeconfig.sh script on the master1 node to generate it
 cat > ~/TLS/kubeconfig.sh << EOF
 #!/bin/bash
 #Author: Autumner
 #Date: 20201229
 #Contents: bootstrap 
 KUBE_APISERVER="https://172.30.3.1:6443" # apiserver IP:PORT, use the SLB ip
 TOKEN="86dea681fd5060a83619352673a21e66" # must match token.csv

 # Generate the kubelet bootstrap kubeconfig file
 kubectl config set-cluster kubernetes \\
  --certificate-authority=/opt/kubernetes/pki/ca.pem \\
  --embed-certs=true \\
  --server=\${KUBE_APISERVER} \\
  --kubeconfig=bootstrap.kubeconfig
 kubectl config set-credentials "kubelet-bootstrap" \\
  --token=\${TOKEN} \\
  --kubeconfig=bootstrap.kubeconfig
 kubectl config set-context default \\
  --cluster=kubernetes \\
  --user="kubelet-bootstrap" \\
  --kubeconfig=bootstrap.kubeconfig
 kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
 EOF
 #4. Run the script on master1 (cd ~/TLS && bash kubeconfig.sh), then copy bootstrap.kubeconfig and ca.pem to the worker1 node
 scp 172.30.3.20:~/TLS/bootstrap.kubeconfig /opt/kubernetes/cfg/
 scp -r 172.30.3.20:~/TLS/k8s/ca.*pem /opt/kubernetes/pki/
 #5. Manage the kubelet with systemd
 cat > /usr/lib/systemd/system/kubelet.service << EOF
 [Unit]
 Description=Kubernetes Kubelet
 After=docker.service
 ​
 [Service]
 EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
 ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
 Restart=on-failure
 LimitNOFILE=65536
 ​
 [Install]
 WantedBy=multi-user.target
 EOF
 #6. Start the kubelet and enable it at boot
 systemctl daemon-reload
 systemctl start kubelet
 systemctl enable kubelet
 #7. Run the following on worker2 and worker3 (first sed on worker2, second on worker3)
 scp -r 172.30.3.21:/opt/kubernetes/cfg /opt/kubernetes/
 #Change --hostname-override=k8s-worker1 to the worker node's hostname
 sed -i '4,/hostname/s/k8s-worker1/k8s-worker2/g' /opt/kubernetes/cfg/kubelet.conf
 sed -i '4,/hostname/s/k8s-worker1/k8s-worker3/g' /opt/kubernetes/cfg/kubelet.conf
 scp 172.30.3.21:/opt/kubernetes/pki/ca.pem /opt/kubernetes/pki/
 scp 172.30.3.21:/usr/lib/systemd/system/kubelet.service /usr/lib/systemd/system/
 systemctl daemon-reload
 systemctl start kubelet
 systemctl enable kubelet
 #8. Verify the kubelet certificate requests
 #Run the following on a master node; requests in the Pending state mean the node kubelets are running correctly.
 kubectl get csr

  • --hostname-override: the node's display name, unique within the cluster
  • --network-plugin: enable CNI
  • --kubeconfig: empty path; generated automatically and later used to connect to the apiserver
  • --bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
  • --config: the kubelet parameter file
  • --cert-dir: directory where the kubelet certificates are generated
  • --pod-infra-container-image: image of the infrastructure container that manages the Pod network (a few post-start checks follow this list)
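
After the kubelet is up on a worker, a few hedged checks that do not depend on any cluster add-ons:

# service state and recent logs on the worker itself
systemctl status kubelet --no-pager
journalctl -u kubelet --no-pager -n 20
# certificates requested via TLS bootstrapping land in the --cert-dir set above
ls -l /opt/kubernetes/pki/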

2.8.2. Approve the kubelet certificate requests and join the cluster

  #1. On any master node, check the kubelet certificate requests
 kubectl get csr
 #[Figure: pending node-csr requests]
 #2. Approve the requests for the three worker nodes
 kubectl certificate approve node-csr-DKMUkyEHKI4IUDB2d8lHCmr2c0bYJ8dW_lSkW7CS3aE
 kubectl certificate approve node-csr-HPn09iiwkE85htYTDBdv0pBkrWkiB6_ABgcB62aH4IM
 kubectl certificate approve node-csr-peYqvUWJ_G1xL0cTZouelXegZjD3Eqatd7Q1_P708_U
 #3. Check the nodes
 kubectl get node
 NAME          STATUS     ROLES    AGE     VERSION
 k8s-worker1   NotReady   <none>   8m37s   v1.18.13
 k8s-worker2   NotReady   <none>   43s     v1.18.13
 k8s-worker3   NotReady   <none>   50s     v1.18.13
 #Note: because the network plugin has not been deployed yet, the nodes remain NotReady

2.8.3. Deploy kube-proxy

 #1. Create the configuration file on the worker1 node
 cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
 KUBE_PROXY_OPTS="--logtostderr=false \\
 --v=2 \\
 --log-dir=/opt/kubernetes/logs \\
 --config=/opt/kubernetes/cfg/kube-proxy-config.yml"
 EOF
 #2. Generate the kube-proxy.kubeconfig file
 #First create the kube-proxy.sh script on the master1 node
 cat > ~/TLS/kube-proxy.sh << EOF
 #!/bin/bash
 #Author: Autumner
 #Date: 20201229
 #Contents: kube-proxy.kubeconfig
 KUBE_APISERVER="https://172.30.3.1:6443" # apiserver IP:PORT, use the SLB ip
 ​
 kubectl config set-cluster kubernetes \\
  --certificate-authority=/opt/kubernetes/pki/ca.pem \\
  --embed-certs=true \\
  --server=\${KUBE_APISERVER} \\
  --kubeconfig=kube-proxy.kubeconfig
 kubectl config set-credentials kube-proxy \\
  --client-certificate=./k8s/kube-proxy.pem \\
  --client-key=./k8s/kube-proxy-key.pem \\
  --embed-certs=true \\
  --kubeconfig=kube-proxy.kubeconfig
 kubectl config set-context default \\
  --cluster=kubernetes \\
  --user=kube-proxy \\
  --kubeconfig=kube-proxy.kubeconfig
 kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
 EOF
 #Run the script on master1 (cd ~/TLS && bash kube-proxy.sh), then copy kube-proxy.kubeconfig to the path referenced by the configuration file
 scp 172.30.3.20:~/TLS/kube-proxy.kubeconfig /opt/kubernetes/cfg/
 #3. Create the kube-proxy parameter file
 #####Change hostnameOverride: k8s-worker1 to the worker node's hostname
 cat > /opt/kubernetes/cfg/kube-proxy-config.yml << EOF
 kind: KubeProxyConfiguration
 apiVersion: kubeproxy.config.k8s.io/v1alpha1
 bindAddress: 0.0.0.0
 metricsBindAddress: 0.0.0.0:10249
 clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
 hostnameOverride: k8s-worker1
 clusterCIDR: 10.0.0.0/24
 EOF
 #4. Manage kube-proxy with systemd
 cat > /usr/lib/systemd/system/kube-proxy.service << EOF
 [Unit]
 Description=Kubernetes Proxy
 After=network.target
 ​
 [Service]
 EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
 ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
 Restart=on-failure
 LimitNOFILE=65536
 ​
 [Install]
 WantedBy=multi-user.target
 EOF
 #5. Start kube-proxy and enable it at boot
 systemctl daemon-reload
 systemctl start kube-proxy
 systemctl enable kube-proxy
 #6. Check whether kube-proxy reports any errors
 journalctl -u kube-proxy
 #7. Run the following on worker2 and worker3 (first sed on worker2, second on worker3)
 scp 172.30.3.21:/opt/kubernetes/cfg/kube-proxy*    /opt/kubernetes/cfg/
 #Change hostnameOverride: k8s-worker1 to the worker node's hostname
 sed -i '7,/hostnameOverride/s/k8s-worker1/k8s-worker2/g' /opt/kubernetes/cfg/kube-proxy-config.yml
 sed -i '7,/hostnameOverride/s/k8s-worker1/k8s-worker3/g' /opt/kubernetes/cfg/kube-proxy-config.yml
 scp 172.30.3.21:/usr/lib/systemd/system/kube-proxy.service /usr/lib/systemd/system/
 systemctl daemon-reload
 systemctl start kube-proxy
 systemctl enable kube-proxy

2.9. Deploy the CNI network

 #1. Download the CNI plugin binaries on the worker1 node
 #CNI releases: https://github.com/containernetworking/plugins/releases/
 #Version used here
 wget https://github.com/containernetworking/plugins/releases/download/v0.9.0/cni-plugins-linux-amd64-v0.9.0.tgz
 #Create the directory and unpack
 mkdir -p /opt/cni/bin
 tar xzf cni-plugins-linux-amd64-v0.9.0.tgz -C /opt/cni/bin/
 #Run the following on worker2 and worker3
 scp -r 172.30.3.21:/opt/cni /opt/
 #2. Deploy the CNI network
 #Download on the master1 node (some networks cannot reach this URL and need a proxy)
 wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
 #The default image address in the file may be unreachable; change it to an image hosted on Docker Hub.
 sed -i -r "s#quay.io/coreos/flannel:.*-amd64#<an image address reachable via Docker Hub>#g" kube-flannel.yml
 #3. Run on the master1 node
 kubectl apply -f kube-flannel.yml
 #4. Verify that the CNI network deployed successfully
 kubectl -n kube-system get pods
 #the flannel Pods should be Running
 kubectl get node
 #all worker nodes should be Ready

2.10. Authorize the apiserver to access the kubelets

#1. Run the following on the master1 node
 cat > apiserver-to-kubelet-rbac.yaml << EOF
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRole
 metadata:
  annotations:
  rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
  kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
 rules:
  - apiGroups:
  - ""
  resources:
  - nodes/proxy
  - nodes/stats
  - nodes/log
  - nodes/spec
  - nodes/metrics
  - pods/log
  verbs:
  - "*"
 ---
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRoleBinding
 metadata:
  name: system:kube-apiserver
  namespace: ""
 roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
 subjects:
  - apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubernetes
 EOF
 #2. Apply the yaml file
 kubectl apply -f apiserver-to-kubelet-rbac.yaml
 #3. Verify
 kubectl -n kube-system get clusterrole|grep system:kube-apiserver-to-kubelet
 kubectl -n kube-system get clusterrolebinding|grep system:kube-apiserver
 #If both commands return results, the apiserver has been granted access.
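
The practical effect of this ClusterRole is that kubectl logs and kubectl exec (both of which go apiserver -> kubelet) now work. A hedged functional check, assuming the flannel DaemonSet from section 2.9 labels its Pods with app=flannel:

# fetching container logs exercises the nodes/proxy and pods/log permissions
kubectl -n kube-system logs -l app=flannel --tail=5
# exec into one of the flannel Pods (pod name retrieved dynamically)
POD=$(kubectl -n kube-system get pods -l app=flannel -o jsonpath='{.items[0].metadata.name}')
kubectl -n kube-system exec "$POD" -- ls /etc/kube-flannel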

3. Kubernetes add-on deployment

3.1. Deploy the Dashboard

#1. Download the Dashboard yaml file
#Project home page: https://github.com/kubernetes/dashboard
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.1.0/aio/deploy/recommended.yaml
#2. By default the Dashboard is only reachable from inside the cluster; change the Service to the NodePort type to expose it externally:
vim recommended.yaml
...
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort   #added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001   #added
  selector:
    k8s-app: kubernetes-dashboard

---
...

kubectl apply -f recommended.yaml
#3. Verify
kubectl -n kubernetes-dashboard get pod,svc
#the Pod being in the Running state means the deployment succeeded
#4. Access the web UI via any worker node IP
https://NodeIP:30001
#5. Create a service account and bind it to the default cluster-admin role:
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
#6. Log in to the Dashboard with the token printed above
https://172.30.3.31:30001
#the language can be changed in the settings

3.2. Deploy CoreDNS

#1. Kubernetes / CoreDNS version compatibility matrix
https://github.com/coredns/deployment/blob/master/kubernetes/CoreDNS-k8s_version.md
#2. Write the coredns.yaml file (based on https://gitee.com/afei00123/k8s_file/blob/master/coredns.yaml)
vim coredns.yaml
# Warning: This is a file generated from the base underscore template file: coredns.yaml.base

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      serviceAccountName: coredns
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      containers:
      - name: coredns
        image: coredns/coredns:1.6.7
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.0.0.2    #dns ip
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP

#3. Apply the manifest
kubectl apply -f coredns.yaml
#4. Check and verify
kubectl -n kube-system get pods,svc
#view the configmap
kubectl get cm coredns -n kube-system -o yaml
#edit the configmap
kubectl -n kube-system edit  cm coredns
#recreate the pods so the new configuration takes effect
kubectl get pods -n kube-system | grep coredns | awk '{print $1}' | xargs kubectl -n kube-system delete pod
#5. DNS resolution test
kubectl run -it --rm dns-test --image=busybox:1.28.4 sh
/# nslookup kubernetes
/# ping kubernetes
/# nslookup 163.com
/# ping 163.com
#as shown in the figure below
[Figure: DNS resolution test output from the busybox Pod]

4. Kubernetes auxiliary tools

4.1. Installing and using Helm

About Helm: Helm describes itself as the best way to find, share, and use software built for Kubernetes.

Key concepts:

helm: the command-line management tool

chart: a package of YAML manifests

release: a deployed instance of a chart
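
The three concepts map directly onto the basic workflow; a minimal sketch where the chart and release names are examples only:

# chart: scaffold a chart (a directory of packaged YAML templates)
helm create demo-chart
# release: install the chart into the cluster under a release name
helm install demo-release ./demo-chart
# list releases, then remove the example again
helm list
helm uninstall demo-release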

4.1.1. Install Helm

#1. Download Helm
https://helm.sh/zh/
#2. Unpack helm into /usr/bin/ on all master nodes
tar -xzf helm-v3.4.2-linux-amd64.tar.gz
cp linux-amd64/helm /usr/bin/
scp /usr/bin/helm 172.30.3.30:/usr/bin/
scp /usr/bin/helm 172.30.3.40:/usr/bin/

4.1.2. Example: installing weave-scope with Helm

#Helm documentation: https://helm.sh/zh/docs/
#1. Configure a Helm chart repository
#Public repositories: http://mirror.kaiyuanshe.cn/kubernetes/charts/
# http://mirror.azure.cn/kubernetes/charts/
helm repo add stable http://mirror.kaiyuanshe.cn/kubernetes/charts #configure on every master node
#2. View the repositories
helm repo list
helm repo update #refresh the repositories
#3. Search for weave-scope
helm search repo weave-scope
#4. Install weave-scope
helm install k8s-ui stable/weave-scope
#The install returns the following

WARNING: This chart is deprecated
NAME: k8s-ui
LAST DEPLOYED: Tue Jan 5 15:20:29 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
You should now be able to access the Scope frontend in your web browser, by
using kubectl port-forward:

kubectl -n default port-forward $(kubectl -n default get endpoints
k8s-ui-weave-scope -o jsonpath='{.subsets[0].addresses[0].targetRef.name}') 8080:4040

then browsing to http://localhost:8080/.
For more details on using Weave Scope, see the Weave Scope documentation:

https://www.weave.works/docs/scope/latest/introducing/

#5. Expose the service so it can be reached from outside the cluster
kubectl get svc #list the services
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
k8s-ui-weave-scope   NodePort    10.0.0.31    <none>        80/TCP   9m13s
kubernetes           ClusterIP   10.0.0.1     <none>        443/TCP        8d
nginx                NodePort    10.0.0.122   <none>        80:31574/TCP   98m
#6. Edit the service
kubectl edit svc k8s-ui-weave-scope
#change the type from ClusterIP to NodePort, then save and exit
kubectl get svc #list the services again
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
k8s-ui-weave-scope   NodePort    10.0.0.31    <none>        80:32684/TCP   9m13s
kubernetes           ClusterIP   10.0.0.1     <none>        443/TCP        8d
nginx                NodePort    10.0.0.122   <none>        80:31574/TCP   98m
#7. Open in a browser
http://workerIP:32684

4.1.3. Creating a custom chart template with Helm

The point of a Helm chart is that one set of YAML templates can deploy many applications. The values that typically change from one deployment to the next are:

  • resource names
  • image
  • labels
  • replica count
  • port
#1. Create the mychart template
helm create mychart
#this generates the following files and directories
charts  Chart.yaml  templates  values.yaml
#the important parts to edit are the contents of templates/ and the values.yaml file

Edit the values.yaml file

#
vim mychart/values.yaml 
replicas: 1
image: nginx
tag: latest

label: nginx

port: 80

Edit deployment.yaml

vim mychart/templates/deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name}}-deploy  #the name and its suffix must not contain uppercase letters
spec:
  replicas: {{ .Values.replicas }}
  selector:
    matchLabels:
      app: {{ .Values.label }}
  template:
    metadata:
      labels:
        app: {{ .Values.label }}
    spec:
      containers:
      - image: {{ .Values.image }}:{{ .Values.tag }}
        name: {{ .Values.image }}-{{ .Values.tag }}

Edit service.yaml

vim mychart/templates/service.yaml 
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-svc
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: {{ .Values.port }}
  selector:
    app: {{ .Values.label }}
  type: NodePort

#1. Check the mychart template for errors
helm install nginx1 --dry-run mychart/
#or
helm lint mychart/
#2. Deploy the mychart template
helm install nginx1 mychart/
#3. Verify that it is running
helm list #or helm list --all
#or
kubectl get pods,svc
#4. Uninstall the mychart application
helm uninstall nginx1
#5. Package the template
helm package mychart/  #produces mychart-0.1.0.tgz
#install directly from the package
helm install nginx2 mychart-0.1.0.tgz
#6. Upgrade a release (change the replica count, image, etc.)
#scale the replica count
helm upgrade --set replicas=2 nginx1 mychart/
kubectl get pods,svc #shows the newly scaled replicas
#7. Roll back
#first look at the release history
helm history nginx1
helm rollback nginx1 1  #1 is the revision number
#Rollback was a success! Happy Helming!
kubectl get pods #shows that revision 1 has been restored
#or
helm history nginx1 #release history

4.1.4. Basic Helm command usage

helm completion #- generate shell completion scripts for the given shell
helm create #- create a chart with the given name
helm dependency #- manage chart dependencies
helm env #- show Helm client environment information
helm get #- download extended information for a named release
helm history #- fetch release history
helm install #- install a chart
helm lint #- check a chart for problems
helm list #- list releases
helm package #- package a chart directory
helm plugin #- install, list, or uninstall Helm plugins
helm pull #- download a chart from a repository and optionally unpack it locally
helm repo #- add, list, remove, update, and index chart repositories
helm rollback #- roll back a release to a previous revision
helm search #- search Helm for a keyword
helm show #- show chart information
helm status #- show the status of a named release
helm template #- render templates locally
helm test #- run the tests for a release
helm uninstall #- uninstall a release
helm upgrade #- upgrade a release
helm verify #- verify that the chart at the given path has been signed and is valid
helm version #- print the client version information