
K8s Version Upgrade

v1.13.0 --> v1.14.2

A kubeadm upgrade can jump across patch versions, but it cannot skip minor versions: the cluster must be upgraded one minor version at a time.
v1.13.0 --> v1.14.2  correct
v1.13.0 --> v1.17.0  wrong
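If the target release is more than one minor version away, step through each intermediate minor release, repeating the full procedure below for every hop. An illustrative path (not actual commands):

# Example upgrade path from v1.13 to v1.17, one minor version per hop
# v1.13.x --> v1.14.x --> v1.15.x --> v1.16.x --> v1.17.x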



Upgrade the images and packages on the **master node**

1. Upgrade kubeadm, kubelet, and kubectl
# Check the current cluster version
[root@test1 ~]# kubectl get nodes
NAME    STATUS   ROLES    AGE   VERSION
test1   Ready    master   11d   v1.13.0
test2   Ready    <none>   10d   v1.13.0
test3   Ready    <none>   10d   v1.13.0
test4   Ready    <none>   10d   v1.13.0
[root@test1 ~]#

# Check the current kubeadm version
[root@test1 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T21:02:01Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
[root@test1 ~]#

# List the kubeadm versions available in the yum repository
[root@test1 ~]# yum list --showduplicates kubeadm --disableexcludes=kubernetes
[root@test1 ~]#
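The command above lists every kubeadm build published in the repository; piping it through grep is an optional convenience to narrow the output to the target minor release:

# Only show 1.14.x builds of kubeadm (optional filter)
yum list --showduplicates kubeadm --disableexcludes=kubernetes | grep 1.14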

# Upgrade kubeadm and related packages to the target version
[root@test1 ~]# yum install -y kubelet-1.14.2 kubeadm-1.14.2 kubectl-1.14.2 kubernetes-cni-0.7.5
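If the Kubernetes yum repo file excludes kube* packages from routine updates (a common setup from the kubeadm install docs), add --disableexcludes=kubernetes so the pinned versions can still be installed:

# Same install, but bypassing an exclude=kube* line in the repo file (only needed if such an exclude exists)
yum install -y kubelet-1.14.2 kubeadm-1.14.2 kubectl-1.14.2 kubernetes-cni-0.7.5 --disableexcludes=kubernetes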

# Check the kubeadm version after the upgrade
[root@test1 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:51:21Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
[root@test1 ~]#

2. Check which components need to be upgraded
# Check whether the cluster can be upgraded and what version each component will be afterwards
[root@test1 ~]# kubeadm upgrade plan 1.14.2
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.13.0
[upgrade/versions] kubeadm version: v1.14.2

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     4 x v1.13.0   1.14.2

Upgrade to the latest version in the v1.13 series:

# Note: these are the image versions that will be replaced
COMPONENT            CURRENT   AVAILABLE
API Server           v1.13.0   1.14.2
Controller Manager   v1.13.0   1.14.2
Scheduler            v1.13.0   1.14.2
Kube Proxy           v1.13.0   1.14.2
CoreDNS              1.2.6     1.3.1
Etcd                 3.2.24    3.3.10

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply 1.14.2

_____________________________________________________________________

[root@test1 ~]#

3. Pull the images listed in the plan output above
vim download_image.sh

#!/bin/bash
# Array of control-plane images to pull
images=(
    kube-apiserver:v1.14.2
    kube-controller-manager:v1.14.2
    kube-scheduler:v1.14.2
    kube-proxy:v1.14.2
    pause:3.1
    etcd:3.3.10
    coredns:1.3.1
)
# Pull each image from the registry.cn-hangzhou.aliyuncs.com mirror

echo '+----------------------------------------------------------------+'
for img in "${images[@]}"; do
    # Pull the image from the domestic mirror
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$img
    # Re-tag it with the k8s.gcr.io name that kubeadm expects
    docker tag  registry.cn-hangzhou.aliyuncs.com/google_containers/$img k8s.gcr.io/$img
    # Remove the mirror-tagged image
    docker rmi  registry.cn-hangzhou.aliyuncs.com/google_containers/$img
    echo '+----------------------------------------------------------------+'
    echo ''
done

# Pull the network plugin (flannel)
# Project page: https://quay.io/repository/coreos/flannel?tag=latest&tab=tags
docker pull quay.io/coreos/flannel:v0.10.0-amd64
# Run the script to pull the images
[root@test1 ~]# ./download_image.sh
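Note that the call above only works if the script is executable; once it finishes, an optional check confirms the re-tagged images are present locally:

# Make the script executable (needed once) and verify the expected images exist under k8s.gcr.io
chmod +x download_image.sh
docker images | grep -E 'k8s.gcr.io|flannel'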
4. Upgrade the control plane

[root@test1 ~]# kubeadm upgrade apply 1.14.2

...... (output omitted)

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.14.2". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
[root@test1 ~]#

5. Restart kubelet
systemctl daemon-reload && systemctl restart kubelet
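A quick sanity check after the restart; the master should now report v1.14.2 while the workers still show v1.13.0 until they are upgraded:

# Verify the control-plane node and the client/server versions after the upgrade
kubectl get nodes
kubectl version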


Upgrade the other master nodes (not verified)

1. Upgrade the kubeadm package

yum install kubeadm-1.14.2 -y 

2. Upgrade the static pods

kubeadm upgrade node experimental-control-plane

You should see output similar to the following:

[root@node-02 ~]# kubeadm upgrade node experimental-control-plane
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Upgrading your Static Pod-hosted control plane instance to version "v1.14.1"...
Static pod: kube-apiserver-node-02 hash: 26d86add2bfd0fd6825f5507fff1fb5e
Static pod: kube-controller-manager-node-02 hash: 21ea3d3ccb8d8dc00056209ca3da698b
Static pod: kube-scheduler-node-02 hash: a8d928943d47ec793a700ef95c4b6b4a
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-node-02 hash: 1aa55e50528bd0621c5734515b3265fd
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-05-14-10-46/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-node-02 hash: 1aa55e50528bd0621c5734515b3265fd
Static pod: etcd-node-02 hash: 1aa55e50528bd0621c5734515b3265fd
Static pod: etcd-node-02 hash: 4710a34897e7838519a1bf8fe4dccf07
[apiclient] Found 3 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests483113569"
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-05-14-10-46/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-node-02 hash: 26d86add2bfd0fd6825f5507fff1fb5e
Static pod: kube-apiserver-node-02 hash: fe1005f40c3f390280358921c3073223
[apiclient] Found 3 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-05-14-10-46/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-node-02 hash: 21ea3d3ccb8d8dc00056209ca3da698b
Static pod: kube-controller-manager-node-02 hash: ff8be061048a4660a1fbbf72db229d0d
[apiclient] Found 3 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-05-14-10-46/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-node-02 hash: a8d928943d47ec793a700ef95c4b6b4a
Static pod: kube-scheduler-node-02 hash: 959a5cdf1468825401daa8d35329351e
[apiclient] Found 3 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade] The control plane instance for this node was successfully updated!

3. Upgrade kubelet and kubectl

yum install kubectl-1.14.2 kubelet-1.14.2 -y

4. Restart kubelet
systemctl daemon-reload && systemctl restart kubelet


Upgrade the images and packages on **all worker nodes**
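Before touching a worker, it is good practice (per the kubeadm upgrade docs) to drain it from the master and uncordon it again once the upgrade is done; a minimal sketch, using test2 as an example node name:

# Run on the master: evict workloads and mark the node unschedulable before upgrading it
kubectl drain test2 --ignore-daemonsets

The node is uncordoned after step 3 below (see the verification sketch after that step).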

1. Upgrade the base packages
yum install -y  kubelet-1.14.2 kubeadm-1.14.2 kubectl-1.14.2 kubernetes-cni-0.7.5
2. Pull the images
vim download_image.sh

#!/bin/bash
# Array of images needed on worker nodes
images=(
    kube-proxy:v1.14.2
    pause:3.1
)
# Pull each image from the registry.cn-hangzhou.aliyuncs.com mirror

echo '+----------------------------------------------------------------+'
for img in "${images[@]}"; do
    # Pull the image from the domestic mirror
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$img
    # Re-tag it with the k8s.gcr.io name that kubelet expects
    docker tag  registry.cn-hangzhou.aliyuncs.com/google_containers/$img k8s.gcr.io/$img
    # Remove the mirror-tagged image
    docker rmi  registry.cn-hangzhou.aliyuncs.com/google_containers/$img
    echo '+----------------------------------------------------------------+'
    echo ''
done

# Pull the network plugin (flannel)
# Project page: https://quay.io/repository/coreos/flannel?tag=latest&tab=tags
docker pull quay.io/coreos/flannel:v0.10.0-amd64
# Run the script to pull the images
[root@test1 ~]# ./download_image.sh
3. Restart kubelet
systemctl daemon-reload && systemctl restart kubelet
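Once kubelet is back up, uncordon the node from the master (if it was drained earlier) and confirm it reports the new version; again using test2 as the example node:

# Run on the master: make the node schedulable again and verify its kubelet version
kubectl uncordon test2
kubectl get nodes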



Common issues
# Check the current k8s version
[root@test1 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T21:04:45Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T20:56:12Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
[root@test1 ~]#
[root@test1 ~]#
# Check whether the current cluster can be upgraded directly to the specified version (1.17.0)
[root@test1 ~]# kubeadm upgrade plan 1.17.0
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.13.0
[upgrade/versions] kubeadm version: v1.13.0
[upgrade/versions] WARNING: No recommended etcd for requested Kubernetes version (1.17.0)

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     4 x v1.13.0   1.17.0

Upgrade to the latest version in the v1.13 series:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.13.0   1.17.0
Controller Manager   v1.13.0   1.17.0
Scheduler            v1.13.0   1.17.0
Kube Proxy           v1.13.0   1.17.0
CoreDNS              1.2.6     1.2.6
Etcd                 3.2.24    N/A

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply 1.17.0

Note: Before you can perform this upgrade, you have to update kubeadm to 1.17.0.

_____________________________________________________________________

[root@test1 ~]#
[root@test1 ~]#
# Attempt the upgrade as suggested
[root@test1 ~]# kubeadm upgrade apply 1.17.0
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade/apply] Respecting the --cri-socket flag that is set with higher priority than the config file.
[upgrade/version] You have chosen to change the cluster version to "v1.17.0"
[upgrade/versions] Cluster version: v1.13.0
[upgrade/versions] kubeadm version: v1.13.0
[upgrade/version] FATAL: the --version argument is invalid due to these fatal errors:

        - Specified version to upgrade to "v1.17.0" is too high; kubeadm can upgrade only 1 minor version at a time

        - Specified version to upgrade to "v1.17.0" is at least one minor release higher than the kubeadm minor release (17 > 13). Such an upgrade is not supported

Please fix the misalignments highlighted above and try upgrading again
[root@test1 ~]#



kubelet was unexpectedly upgraded to the latest version
[root@test1 ~]# kubectl get nodes
NAME    STATUS     ROLES    AGE   VERSION
test1   NotReady   master   11d   v1.18.2
test2   Ready      <none>   10d   v1.14.2
test3   Ready      <none>   10d   v1.14.2
test4   Ready      <none>   10d   v1.14.2
[root@test1 ~]#

# Remove the existing kubelet
[root@test1 ~]# yum remove -y kubelet
[root@test1 ~]#
[root@test1 ~]#
# Reinstall the pinned versions
[root@test1 ~]# yum install -y kubelet-1.14.2 kubeadm-1.14.2 kubectl-1.14.2
[root@test1 ~]#
[root@test1 ~]#
# Restart, check, and enable kubelet
[root@test1 ~]# systemctl start kubelet && systemctl status kubelet && systemctl enable kubelet
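To keep yum from pulling these packages forward unintentionally during a routine update, the kubeadm install docs suggest excluding them in the repo file; a sketch, assuming the repo file is /etc/yum.repos.d/kubernetes.repo:

# With this exclude in place, kube packages only change when --disableexcludes=kubernetes is passed explicitly
echo 'exclude=kubelet kubeadm kubectl' >> /etc/yum.repos.d/kubernetes.repo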

