Kubernetes – How to install a single master cluster with kubeadm?
kubeadm is a very handy tool for bootstrapping a minimal Kubernetes (K8s) cluster for testing purposes. The goal of this blog post is not to explain the K8s architecture or its components, but only the deployment part using kubeadm.
The final architecture of our cluster will be the following:
(Architecture diagram; source: https://elastisys.com/2018/01/25/setting-highly-available-kubernetes-clusters/)
Docker Installation
The first step is to install Docker on each cluster node:
yum install yum-utils device-mapper-persistent-data lvm2
yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

yum update && yum install docker-ce-18.06.2.ce
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
mkdir -p /etc/systemd/system/docker.service.d
systemctl daemon-reload
systemctl restart docker
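Optionally, you can also start Docker at boot and confirm that the systemd cgroup driver is now active (a quick check, not part of the original steps):

systemctl enable docker          # start Docker automatically at boot
docker info | grep -i cgroup     # should report "Cgroup Driver: systemd"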
Kubeadm Installation
Before installing the Kubernetes components, a few prerequisites need to be configured on each cluster node.
Disable SWAP devices
[docker@docker-manager000 ~]$ cat /proc/swaps
Filename      Type        Size     Used   Priority
/dev/dm-1     partition   5242876  0      -1

[docker@docker-manager000 ~]$ sudo swapoff -a
Remove any matching reference from /etc/fstab and reboot the system for the change to take effect.
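As an illustrative one-liner (assuming the fstab entry contains the word "swap"; review the file before and after running it):

sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab   # comment out the swap entry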
Set SELinux in permissive mode (effectively disabling it)
[docker@docker-manager000 ~]$ sudo setenforce 0
[docker@docker-manager000 ~]$ sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
[docker@docker-manager000 ~]$ sudo sed -i 's/^SELINUX=permissive$/SELINUX=disabled/' /etc/selinux/config
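You can verify the current runtime mode with getenforce; it should print Permissive until the next reboot:

[docker@docker-manager000 ~]$ getenforce
Permissive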
Hostname configuration
[docker@docker-manager000 ~]$ hostnamectl set-hostname docker-manager000
[docker@docker-manager000 ~]$ hostname
docker-manager000
Network connectivity
Modify the /etc/hosts file of each server as follows:
[docker@docker-manager000 ~]$ cat /etc/hosts
127.0.0.1    localhost localhost.localdomain localhost4 localhost4.localdomain4
::1          localhost6 localhost6.localdomain6 localhost6.localdomain
xx.xx.x.xx   docker-manager000
xx.xx.x.xx   docker-worker000
xx.xx.x.xx   docker-worker001
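A quick way to validate name resolution and connectivity between the nodes (a minimal check, assuming ICMP is allowed on your network):

[docker@docker-manager000 ~]$ ping -c 2 docker-worker000
[docker@docker-manager000 ~]$ ping -c 2 docker-worker001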
Installing kubeadm, kubelet and kubectl
First, configure the Kubernetes repository for downloading the latest stable version of Kubernetes:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
Then install packages:
[docker@docker-manager000 ~]$ sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
[docker@docker-manager000 ~]$ sudo systemctl enable --now kubelet
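At this point the kubelet restarts in a crash loop every few seconds; that is expected, as it is waiting for instructions from kubeadm. You can still verify the installed versions (output will vary with the release you pulled):

[docker@docker-manager000 ~]$ kubeadm version -o short
[docker@docker-manager000 ~]$ kubectl version --client --short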
Install NTP service
[root@docker-manager000 containerd]$ yum install ntp
[root@docker-manager000 containerd]$ systemctl start ntpd
[root@docker-manager000 containerd]$ systemctl enable ntpd
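Time synchronization matters for TLS certificate validation between nodes; you can check that ntpd is actually syncing against its peers with:

[root@docker-manager000 ~]$ ntpq -p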
Ensure that net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl configuration, so that traffic crossing Linux bridges is properly seen by iptables:
[root@docker-manager000 ~]$ cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@docker-manager000 ~]$ sysctl --system

Alternatively, set the values manually:

[docker@docker-manager000 ~]$ sudo sysctl net.bridge.bridge-nf-call-iptables=1
[docker@docker-manager000 ~]$ sudo sysctl net.bridge.bridge-nf-call-ip6tables=1
Make sure that the br_netfilter module is loaded
[root@docker-manager000 ~]$ lsmod | grep br_netfilter   # check whether the module is loaded
[root@docker-manager000 ~]$ modprobe br_netfilter       # load it if it is missing
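To make the module load persist across reboots, you can drop a file into /etc/modules-load.d (a small sketch; the filename k8s.conf is arbitrary):

cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF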
Initialize your cluster using Kubeadm
The kubeadm init command installs all the control plane components for you (API server, etcd, scheduler and controller-manager). The 192.168.0.0/16 pod network CIDR used below is the default pool expected by Calico, the network add-on installed in a later step.
[docker@docker-manager000 ~]$ sudo kubeadm init --pod-network-cidr=192.168.0.0/16
At the end of the output, copy the kubeadm join command (with its token) and keep it for the next steps.
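If you lose the join command or the token expires (tokens are valid for 24 hours by default), you can regenerate it later on the master:

[docker@docker-manager000 ~]$ sudo kubeadm token create --print-join-command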
Import the kubeconfig into your home directory before you start using your cluster (kubectl needs it to reach the API server):

[docker@docker-manager000 ~]$ mkdir -p $HOME/.kube
[docker@docker-manager000 ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[docker@docker-manager000 ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
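Alternatively, if you are running as root, you can point kubectl at the admin kubeconfig for the current session only:

export KUBECONFIG=/etc/kubernetes/admin.conf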
Install a pod network add-on: Calico

kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml
Join the cluster from the worker side
From each worker node, you can now execute the join command saved earlier:
sudo kubeadm join xx.xx.x.xx:6443 --token vv2iva.yt241t7epiu7i4iz \
    --discovery-token-ca-cert-hash sha256:558ec4a25cd39185fd753aba81748569f033af9c15e48dcdf3d10206eecdf177
On the master, check the cluster status:
[docker@docker-manager000 ~]$ kubectl get nodes -o wide
NAME                STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
docker-manager000   Ready    master   20d   v1.14.2   xx.xx.x.xx    <none>        CentOS Linux 7 (Core)   3.10.0-957.12.2.el7.x86_64   docker://18.9.6
docker-worker000    Ready    <none>   20d   v1.14.1   xx.xx.x.xx    <none>        CentOS Linux 7 (Core)   3.10.0-957.10.1.el7.x86_64   docker://18.9.5
docker-worker001    Ready    <none>   20d   v1.14.1   xx.xx.x.xx    <none>        CentOS Linux 7 (Core)   3.10.0-957.10.1.el7.x86_64   docker://18.9.5
List all pods within the kube-system namespace:
[docker@docker-manager000 ~]$ kubectl get pods -n kube-system -o wide
NAME                                        READY   STATUS    RESTARTS   AGE   IP               NODE                NOMINATED NODE   READINESS GATES
calico-kube-controllers-6894d6f4f4-jtwnn    1/1     Running   5          20d   192.168.123.39   docker-manager000   <none>           <none>
calico-node-9586j                           1/1     Running   6          20d   xx.xx.x.xx       docker-manager000   <none>           <none>
calico-node-lc6vh                           1/1     Running   3          20d   xx.xx.x.xx       docker-worker000    <none>           <none>
calico-node-ph9j7                           1/1     Running   3          20d   xx.xx.x.xx       docker-worker001    <none>           <none>
coredns-fb8b8dccf-cxfcq                     1/1     Running   4          11d   192.168.123.41   docker-manager000   <none>           <none>
coredns-fb8b8dccf-pf879                     1/1     Running   5          11d   192.168.27.145   docker-worker001    <none>           <none>
etcd-docker-manager000                      1/1     Running   6          20d   xx.xx.x.xx       docker-manager000   <none>           <none>
kube-apiserver-docker-manager000            1/1     Running   2          11d   xx.xx.x.xx       docker-manager000   <none>           <none>
kube-controller-manager-docker-manager000   1/1     Running   2          11d   xx.xx.x.xx       docker-manager000   <none>           <none>
kube-proxy-j8v2b                            1/1     Running   2          11d   xx.xx.x.xx       docker-worker000    <none>           <none>
kube-proxy-jp5dr                            1/1     Running   2          11d   xx.xx.x.xx       docker-worker001    <none>           <none>
kube-proxy-z57xx                            1/1     Running   2          11d   xx.xx.x.xx       docker-manager000   <none>           <none>
kube-scheduler-docker-manager000            1/1     Running   2          11d   xx.xx.x.xx       docker-manager000   <none>           <none>
Your cluster is now up and running! You can start your first deployment.
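For instance, a minimal first deployment could look like this (nginx is just an illustrative image, and a NodePort service is one simple way to expose it):

[docker@docker-manager000 ~]$ kubectl create deployment nginx --image=nginx
[docker@docker-manager000 ~]$ kubectl expose deployment nginx --port=80 --type=NodePort
[docker@docker-manager000 ~]$ kubectl get pods -o wide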