RKE2 Install With Cilium
In this post I will show you how to install RKE2 with Cilium's encrypted VXLAN.
Parts of the K8S Security series
- Part1: Best Practices to keeping Kubernetes Clusters Secure
- Part2: Kubernetes Hardening Guide with CIS 1.6 Benchmark
- Part3: RKE2 The Secure Kubernetes Engine
- Part4: RKE2 Install With cilium
- Part5: Kubernetes Certificate Rotation
- Part6: Hardening Kubernetes with seccomp
- Part7: RKE2 Pod Security Policy
- Part8: Kubernetes Network Policy
- Part9: Kubernetes Cluster Policy
- Part10: Using Admission Controllers
- Part11a: Image security Admission Controller
- Part11b: Image security Admission Controller V2
- Part11c: Image security Admission Controller V3
- Part12: Continuous Image security
- Part13: K8S Logging And Monitoring
- Part14: Kubernetes audit logs and Falco
- Part15a: Image Signature Verification with Connaisseur
- Part15b: Image Signature Verification with Connaisseur 2.0
- Part15c: Image Signature Verification with Kyverno
- Part16a: Backup your Kubernetes Cluster
- Part16b: How to Backup Kubernetes to git?
- Part17a: Kubernetes and Vault integration
- Part17b: Kubernetes External Vault integration
- Part18a: ArgoCD and kubeseal to encrypt secrets
- Part18b: Flux2 and kubeseal to encrypt secrets
- Part18c: Flux2 and Mozilla SOPS to encrypt secrets
- Part19: ArgoCD auto image updater
- Part20: Secure k3s with gVisor
- Part21: How to use imagePullSecrets cluster-wide?
- Part22: Automatically change registry in pod definition
What is Cilium?
Cilium is open source software for transparently securing the network connectivity between application services deployed using Linux container management platforms like Docker and Kubernetes.
At the foundation of Cilium is a new Linux kernel technology called eBPF, which enables the dynamic insertion of powerful security visibility and control logic within Linux itself. Because eBPF runs inside the Linux kernel, Cilium security policies can be applied and updated without any changes to the application code or container configuration. (Source: cilium.io)
What is Hubble?
Hubble is a fully distributed networking and security observability platform. It is built on top of Cilium and eBPF to enable deep visibility into the communication and behavior of services as well as the networking infrastructure in a completely transparent manner.
By building on top of Cilium, Hubble can leverage eBPF for visibility. By relying on eBPF, all visibility is programmable and allows for a dynamic approach that minimizes overhead while providing deep and detailed visibility as required by users. Hubble has been created and specifically designed to make best use of these new eBPF powers. (Source: cilium.io)
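Once the cluster and the Hubble Relay are up (both are deployed later in this post), you can watch flows from the command line. A minimal sketch, assuming the hubble CLI is installed on your workstation and the hubble-relay service uses the chart's default port 80:
# Forward the Hubble Relay API to localhost (4245 is the hubble CLI's default server port)
kubectl -n kube-system port-forward svc/hubble-relay 4245:80 &
# Stream live flows from the default namespace
hubble observe --namespace default --follow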
RKE2 Setup
Project Longhorn Prerequisites
yum install -y epel-release
yum install -y nano curl wget git tmux jq vim-common
yum install -y iscsi-initiator-utils
modprobe iscsi_tcp
echo "iscsi_tcp" >/etc/modules-load.d/iscsi-tcp.conf
systemctl enable iscsid
systemctl start iscsid
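A quick sanity check that the iSCSI prerequisites are in place:
# iscsi_tcp should be loaded and iscsid should report "active"
lsmod | grep iscsi_tcp
systemctl is-active iscsid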
cat <<EOF >> /etc/NetworkManager/conf.d/rke2-canal.conf
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:flannel*
EOF
systemctl reload NetworkManager
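Once the cluster is running, you can verify that NetworkManager leaves the CNI interfaces alone:
# interfaces matched by the config above should show up as "unmanaged"
nmcli device status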
RKE2 rpm Install
cat << EOF > /etc/yum.repos.d/rancher-rke2-1-20-latest.repo
[rancher-rke2-common-latest]
name=Rancher RKE2 Common Latest
baseurl=https://rpm.rancher.io/rke2/latest/common/centos/8/noarch
enabled=1
gpgcheck=1
gpgkey=https://rpm.rancher.io/public.key
[rancher-rke2-1-20-latest]
name=Rancher RKE2 1.20 Latest
baseurl=https://rpm.rancher.io/rke2/latest/1.20/centos/8/x86_64
enabled=1
gpgcheck=1
gpgkey=https://rpm.rancher.io/public.key
EOF
yum -y install rke2-server
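To see which RKE2 version the repository resolved to:
# Show the installed package and all versions the repo offers
rpm -q rke2-server
dnf --showduplicates list rke2-server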
Kubectl, Helm & RKE2
Install kubectl, helm and a few handy tools on the host system (kubectl ships with RKE2 under /var/lib/rancher/rke2/bin, so extending the PATH is enough):
sudo git clone https://github.com/ahmetb/kubectx /opt/kubectx
sudo ln -s /opt/kubectx/kubectx /usr/local/bin/kubectx
sudo ln -s /opt/kubectx/kubens /usr/local/bin/kubens
echo 'PATH=$PATH:/usr/local/bin' >> /etc/profile
echo 'PATH=$PATH:/var/lib/rancher/rke2/bin' >> /etc/profile
source /etc/profile
sudo dnf copr -y enable cerenit/helm
sudo dnf install -y helm
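Quick verification of the tooling (kubectl only appears under /var/lib/rancher/rke2/bin once RKE2 has been started):
helm version --short
kubectx --help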
RKE2 specific ports
sudo firewall-cmd --add-port=9345/tcp --permanent
sudo firewall-cmd --add-port=6443/tcp --permanent
sudo firewall-cmd --add-port=10250/tcp --permanent
sudo firewall-cmd --add-port=2379/tcp --permanent
sudo firewall-cmd --add-port=2380/tcp --permanent
sudo firewall-cmd --add-port=30000-32767/tcp --permanent
# Used for the Rancher Monitoring
sudo firewall-cmd --add-port=9796/tcp --permanent
sudo firewall-cmd --add-port=19090/tcp --permanent
sudo firewall-cmd --add-port=6942/tcp --permanent
sudo firewall-cmd --add-port=9091/tcp --permanent
### CNI specific ports
# 4244/TCP is required when the Hubble Relay is enabled and therefore needs to connect to all agents to collect the flows
sudo firewall-cmd --add-port=4244/tcp --permanent
# Cilium healthcheck related permits:
sudo firewall-cmd --add-port=4240/tcp --permanent
sudo firewall-cmd --remove-icmp-block=echo-request --permanent
sudo firewall-cmd --remove-icmp-block=echo-reply --permanent
# Since we are using Cilium with GENEVE as overlay, we need the following port too:
sudo firewall-cmd --add-port=6081/udp --permanent
### Ingress Controller specific ports
sudo firewall-cmd --add-port=80/tcp --permanent
sudo firewall-cmd --add-port=443/tcp --permanent
### To get DNS resolution working, simply enable Masquerading.
sudo firewall-cmd --zone=public --add-masquerade --permanent
### Finally apply all the firewall changes
sudo firewall-cmd --reload
Verification:
sudo firewall-cmd --list-all
public (active)
target: default
icmp-block-inversion: no
interfaces: eno1
sources:
services: cockpit dhcpv6-client ssh wireguard
ports: 9345/tcp 6443/tcp 10250/tcp 2379/tcp 2380/tcp 30000-32767/tcp 4240/tcp 6081/udp 80/tcp 443/tcp 4244/tcp 9796/tcp 19090/tcp 6942/tcp 9091/tcp
protocols:
masquerade: yes
forward-ports:
source-ports:
icmp-blocks:
rich rules:
Basic Configuration
mkdir -p /etc/rancher/rke2
cat << EOF > /etc/rancher/rke2/config.yaml
write-kubeconfig-mode: "0644"
profile: "cis-1.5"
selinux: true
# add ips/hostnames of hosts and loadbalancer
tls-san:
  - "k8s.mydomain.intra"
  - "172.17.9.10"
# Take an etcd snapshot every 6 hours
etcd-snapshot-schedule-cron: "0 */6 * * *"
# Keep 56 etcd snapshots (equals 2 weeks with 4 a day)
etcd-snapshot-retention: 56
cni:
  - cilium
disable:
  - rke2-canal
  - rke2-kube-proxy
EOF
Note: I disabled rke2-canal and rke2-kube-proxy since I plan to install Cilium as CNI in “kube-proxy less mode” (kubeProxyReplacement: "strict"). Do not disable rke2-kube-proxy if you use another CNI - it will not work afterwards!
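The config above also schedules periodic etcd snapshots. Once the server is running, you can take one on demand; a sketch, assuming the default snapshot directory (the exact subcommand syntax varies between RKE2 versions):
# Take an ad-hoc snapshot and list what is stored locally
rke2 etcd-snapshot save --name pre-change
ls -lh /var/lib/rancher/rke2/server/db/snapshots/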
sudo cp -f /usr/share/rke2/rke2-cis-sysctl.conf /etc/sysctl.d/60-rke2-cis.conf
sysctl -p /etc/sysctl.d/60-rke2-cis.conf
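To spot-check that the CIS sysctl values are active (the exact keys come from the shipped file and may differ between RKE2 versions):
sysctl vm.overcommit_memory kernel.panic_on_oops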
useradd -r -c "etcd user" -s /sbin/nologin -M etcd
mkdir -p /var/lib/rancher/rke2/server/manifests/
cat << EOF > /var/lib/rancher/rke2/server/manifests/rke2-ingress-nginx-config.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-ingress-nginx
  namespace: kube-system
spec:
  valuesContent: |-
    controller:
      metrics:
        service:
          annotations:
            prometheus.io/scrape: "true"
            prometheus.io/port: "10254"
EOF
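After the cluster is up, you can confirm that the HelmChartConfig was picked up and that the metrics annotations landed on the service; a sketch (the label value may differ between RKE2 releases):
kubectl -n kube-system get helmchartconfig rke2-ingress-nginx
kubectl -n kube-system get svc -l app.kubernetes.io/name=rke2-ingress-nginx -o yaml | grep prometheus.io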
Prevent RKE2 Package Updates
In order to provide more stability, I chose to DNF/YUM “mark/hold” the RKE2 related packages so a dnf update/yum update does not mess around with them. Add the following line to /etc/dnf/dnf.conf and/or /etc/yum.conf:
exclude=rke2-*
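When you later want to upgrade RKE2 deliberately, you can bypass the exclude for a single transaction:
# Temporarily ignore the excludes defined in the [main] section
sudo dnf update rke2-server --disableexcludes=main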
Cilium Prerequisites
Ensure the eBPF filesystem is mounted (which should already be the case on RHEL 8.3):
mount | grep /sys/fs/bpf
# if mounted, this should output something like "none on /sys/fs/bpf type bpf"...
If that’s not the case, mount it with the following commands:
sudo mount bpffs -t bpf /sys/fs/bpf
sudo bash -c 'cat <<EOF >> /etc/fstab
none /sys/fs/bpf bpf rw,relatime 0 0
EOF'
Deploy Cilium
Note: Cilium’s eBPF kube-proxy replacement currently cannot be used together with Transparent Encryption, which is why encryption stays disabled in the configuration below.
cat << EOF > /var/lib/rancher/rke2/server/manifests/rke2-cilium-config.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-cilium
  namespace: kube-system
spec:
  valuesContent: |-
    cilium:
      kubeProxyReplacement: "strict"
      k8sServiceHost: 10.0.2.15
      k8sServicePort: 6443
      operator:
        replicas: 1
      encryption:
        enabled: false
        type: wireguard
      l7Proxy: false
      hubble:
        metrics:
          enabled:
            - dns:query;ignoreAAAA
            - drop
            - tcp
            - flow
            - icmp
            - http
        relay:
          enabled: true
        ui:
          enabled: true
          replicas: 1
          ingress:
            enabled: true
            hosts:
              - hubble.k8s.intra
            annotations:
              cert-manager.io/cluster-issuer: ca-issuer
            tls:
              - secretName: ingress-hubble-ui-tls
                hosts:
                  - hubble.k8s.intra
      prometheus:
        enabled: true
        # Default port value (9090) needs to be changed since the RHEL cockpit also listens on this port.
        port: 19090
        # Configure this serviceMonitor section AFTER Rancher Monitoring is enabled!
        #serviceMonitor:
        #  enabled: true
EOF
Starting RKE2
Enable the rke2-server service and start it:
sudo systemctl enable rke2-server --now
Verification:
sudo systemctl status rke2-server
sudo journalctl -u rke2-server -f
Configure Kubectl (on RKE2 Host)
mkdir ~/.kube
ln -s /etc/rancher/rke2/rke2.yaml ~/.kube/config
chmod 600 /root/.kube/config
ln -s /var/lib/rancher/rke2/agent/etc/crictl.yaml /etc/crictl.yaml
kubectl get node
crictl ps
crictl images
Verification:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s.mydomain.intra NotReady etcd,master 2m4s v1.18.16+rke2r1
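The node stays NotReady until the Cilium pods are running. Once they are, a quick health check (a sketch; cilium status is run inside the agent pod):
kubectl -n kube-system get pods -l k8s-app=cilium
kubectl -n kube-system exec ds/cilium -- cilium status --brief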
Deploy demo app
kubens default
kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/v1.9/examples/minikube/http-sw-app.yaml
kubectl apply -f k8s_sec_lab/manifest/cilium_demo_rb.yaml
kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
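Both landing requests succeed because no network policy is applied yet. As a teaser for the network policy part of this series, here is the L3/L4 policy from Cilium's Star Wars demo, which only lets org=empire pods reach the deathstar on TCP port 80:
kubectl apply -f - <<EOF
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "rule1"
spec:
  endpointSelector:
    matchLabels:
      org: empire
      class: deathstar
  ingress:
  - fromEndpoints:
    - matchLabels:
        org: empire
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
EOF
# With the policy in place the xwing (org=alliance) is dropped, while the tiefighter still lands
kubectl exec xwing -- curl -s --max-time 5 -XPOST deathstar.default.svc.cluster.local/v1/request-landing
kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing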