CentOS 8 with Kubernetes 1.16: k8s and Kuboard break after the NIC MAC address changes.
CentOS 8 with k8s and Kuboard ran in a Hyper-V virtual machine that originally lived on a mechanical hard disk.
It was painfully slow, so I wanted to migrate it to a directory on an SSD. I shut the VM down, copied its
whole directory to the SSD, then deleted the original VM from Hyper-V, and that is where the trouble started.
When importing from the new directory, Hyper-V reported that it could not find the virtual machine configuration file,
so the only option was to create a new VM and attach the existing disk file to it.
After CentOS 8 booted, the IP and MAC were configured automatically,
but k8s and the Kuboard dashboard were no longer working:
the web UI would not open, and even the nginx test site was unreachable.
1. Check the node info; it looks normal.
[root@k8smaster ~]# kubectl get nodes -A
NAME STATUS ROLES AGE VERSION
k8smaster Ready master 279d v1.16.2
[root@k8smaster ~]#
2. Check all the pods. Disaster: a pile of problems.
[root@k8smaster ~]# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
example cloud-eureka-0 1/1 Running 1 17h
example db-example-655987cd8d-nn5l7 0/1 Pending 0 17h
example gateway-example-7fbc944788-49hf9 1/1 Running 1 27m
example gateway-example-7fbc944788-t49c7 0/1 Evicted 0 17h
example svc-example-66d4bfbb94-dttkw 1/1 Running 2 17h
example web-example-6dddcc9f58-7m74z 0/1 Evicted 0 17h
example web-example-6dddcc9f58-shz9g 0/1 CrashLoopBackOff 11 27m
kube-public web-nginx-86cc8f7b78-jbwxv 1/1 Running 1 17h
kube-system coredns-58cc8c89f4-jfqfn 0/1 ContainerCreating 11 279d
kube-system coredns-58cc8c89f4-wczdd 0/1 ContainerCreating 11 279d
kube-system eip-nfs-nfs-storage-6f84844c78-f5ngc 1/1 Running 13 14d
kube-system eip-nfs-nfs-storage-6f84844c78-zxxq6 0/1 Evicted 0 14d
kube-system etcd-k8smaster 1/1 Running 26 279d
kube-system kube-apiserver-k8smaster 1/1 Running 52 279d
kube-system kube-controller-manager-k8smaster 1/1 Running 63 279d
kube-system kube-proxy-ksgc6 1/1 Running 0 2m44s
kube-system kube-scheduler-k8smaster 1/1 Running 60 279d
kube-system kuboard-57cf4cb9f-9st8s 1/1 Running 2 18h
kube-system metrics-server-56b49c5f5b-lztrl 0/1 CrashLoopBackOff 13 20h
kube-system monitor-blackbox-exporter-57cd874997-mn529 0/1 ContainerCreating 1 18h
kube-system monitor-grafana-55988bb67-7kq2f 1/1 Running 1 27m
kube-system monitor-grafana-55988bb67-jb2pz 0/1 Evicted 0 18h
kube-system monitor-kube-state-metrics-84f9dd79c7-59mrz 2/2 Running 15 18h
kube-system monitor-prometheus-f695d49fb-g594w 0/1 ContainerCreating 1 18h
kube-system monitor-prometheus-node-exporter-fjf7p 1/1 Running 3 18h
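With this many failures, it helps to filter the listing down to just the problem pods. A minimal sketch, assuming the default six-column output of kubectl get pods -A shown above, where STATUS is the fourth field:

```shell
# not_running: keep only the lines whose STATUS column (field 4 of the
# default `kubectl get pods -A --no-headers` output) is not "Running".
not_running() {
  awk '$4 != "Running"'
}

# Against a live cluster:
#   kubectl get pods -A --no-headers | not_running
```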
3. Check the services. The ports are all there, but none of them are reachable.
[root@k8smaster ~]# kubectl get svc -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.1.0.1 <none> 443/TCP 279d
example cloud-eureka ClusterIP 10.1.254.8 <none> 9200/TCP 17h
example db-example ClusterIP 10.1.53.12 <none> 3306/TCP 17h
example gateway-example ClusterIP 10.1.135.216 <none> 9201/TCP 17h
example web-example ClusterIP 10.1.109.148 <none> 80/TCP 17h
kube-public web-nginx NodePort 10.1.26.215 <none> 8001:30262/TCP 19h
kube-system kube-dns ClusterIP 10.1.0.10 <none> 53/UDP,53/TCP,9153/TCP 279d
kube-system kuboard NodePort 10.1.144.203 <none> 80:32567/TCP 278d
kube-system metrics-server ClusterIP 10.1.106.2 <none> 443/TCP 20h
kube-system monitor-blackbox-exporter ClusterIP 10.1.141.105 <none> 9115/TCP 18h
kube-system monitor-grafana ClusterIP 10.1.90.151 <none> 3000/TCP 18h
kube-system monitor-kube-state-metrics ClusterIP 10.1.209.223 <none> 8080/TCP,8081/TCP 18h
kube-system monitor-prometheus ClusterIP 10.1.110.191 <none> 9090/TCP 18h
kube-system monitor-prometheus-node-exporter ClusterIP None <none> 9100/TCP 18h
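Since kuboard and web-nginx are NodePort services, they should answer on the node's own IP. A small helper to pull the host-side port out of the PORT(S) column; this is a sketch assuming the default seven-column kubectl get svc -A layout above, and the node IP in the usage comment is a placeholder:

```shell
# node_port: given a service name, print the host-side port from a
# PORT(S) value such as "80:32567/TCP" (field 6 of `kubectl get svc -A`).
node_port() {
  awk -v name="$1" '$2 == name { split($6, p, "[:/]"); print p[2] }'
}

# Against a live cluster:
#   kubectl get svc -A --no-headers | node_port kuboard
#   curl -I http://<node-ip>:32567/
```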
4. Check the pod status in the kube-system namespace.
[root@k8smaster ~]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-58cc8c89f4-jfqfn 0/1 ContainerCreating 11 279d
coredns-58cc8c89f4-wczdd 0/1 ContainerCreating 11 279d
eip-nfs-nfs-storage-6f84844c78-f5ngc 0/1 CrashLoopBackOff 12 14d
eip-nfs-nfs-storage-6f84844c78-zxxq6 0/1 Evicted 0 14d
etcd-k8smaster 1/1 Running 26 279d
kube-apiserver-k8smaster 1/1 Running 52 279d
kube-controller-manager-k8smaster 1/1 Running 63 279d
kube-proxy-sjvws 0/1 ContainerCreating 11 279d
kube-scheduler-k8smaster 1/1 Running 60 279d
kuboard-57cf4cb9f-9st8s 1/1 Running 2 18h
metrics-server-56b49c5f5b-lztrl 0/1 CrashLoopBackOff 12 20h
monitor-blackbox-exporter-57cd874997-mn529 0/1 ContainerCreating 1 18h
monitor-grafana-55988bb67-7kq2f 1/1 Running 1 24m
monitor-grafana-55988bb67-jb2pz 0/1 Evicted 0 18h
monitor-kube-state-metrics-84f9dd79c7-59mrz 1/2 CrashLoopBackOff 14 18h
monitor-prometheus-f695d49fb-g594w 0/1 ContainerCreating 1 18h
monitor-prometheus-node-exporter-fjf7p 1/1 Running 3 18h
[root@k8smaster ~]#
[root@k8smaster ~]#
Here the kube-proxy-sjvws pod is stuck in ContainerCreating, even though docker images shows all the images are present.
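Before deleting it, kubectl describe usually explains why a pod is stuck in ContainerCreating; the interesting part is the Events section at the bottom. A sketch that isolates it (the pod name is the one from the listing above):

```shell
# events_of: print everything from the "Events:" line to the end of
# `kubectl describe pod` output, where scheduling/mount/CNI errors appear.
events_of() {
  awk '/^Events:/ { show = 1 } show'
}

# Against a live cluster:
#   kubectl describe pod kube-proxy-sjvws -n kube-system | events_of
```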
5. So I deleted the kube-proxy pod here, and k8s rebuilt the kube-proxy instance.
[root@k8smaster ~]# kubectl delete pod kube-proxy-sjvws -n kube-system
pod "kube-proxy-sjvws" deleted
[root@k8smaster ~]#
[root@k8smaster ~]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-58cc8c89f4-jfqfn 0/1 ContainerCreating 11 279d
coredns-58cc8c89f4-wczdd 0/1 ContainerCreating 11 279d
eip-nfs-nfs-storage-6f84844c78-f5ngc 0/1 CrashLoopBackOff 12 14d
eip-nfs-nfs-storage-6f84844c78-zxxq6 0/1 Evicted 0 14d
etcd-k8smaster 1/1 Running 26 279d
kube-apiserver-k8smaster 1/1 Running 52 279d
kube-controller-manager-k8smaster 1/1 Running 63 279d
kube-proxy-ksgc6 1/1 Running 0 4s
kube-scheduler-k8smaster 1/1 Running 60 279d
kuboard-57cf4cb9f-9st8s 1/1 Running 2 18h
metrics-server-56b49c5f5b-lztrl 0/1 CrashLoopBackOff 12 20h
monitor-blackbox-exporter-57cd874997-mn529 0/1 ContainerCreating 1 18h
monitor-grafana-55988bb67-7kq2f 1/1 Running 1 25m
monitor-grafana-55988bb67-jb2pz 0/1 Evicted 0 18h
monitor-kube-state-metrics-84f9dd79c7-59mrz 1/2 CrashLoopBackOff 14 18h
monitor-prometheus-f695d49fb-g594w 0/1 ContainerCreating 1 18h
monitor-prometheus-node-exporter-fjf7p 1/1 Running 3 18h
[root@k8smaster ~]#
6. Finally, I checked the web sites; they all open normally again.
7. After logging into Kuboard and looking at the pods, coredns still had a problem: its status was Waiting.
8. The coredns container reported errors I could not make sense of, and rather than dig into them I simply deleted the pod instances and let k8s recreate them.
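Deleting the coredns pods one by one works, but they can also be picked out by name and removed in one pipeline, so the Deployment's ReplicaSet recreates them. A sketch (the coredns prefix matches the pod names in the listings above):

```shell
# pods_named: print the pod names (field 1) that start with the given
# prefix, from `kubectl get pods --no-headers` style output.
pods_named() {
  awk -v pre="$1" 'index($1, pre) == 1 { print $1 }'
}

# Against a live cluster:
#   kubectl get pods -n kube-system --no-headers | pods_named coredns \
#     | xargs -r kubectl delete pod -n kube-system
```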
9. A final check of the pods shows everything running normally.