
Kubernetes Ceph RBD for dynamic provisioning

22 Nov
Author: admin | Category: Container Virtualization


In this post I will show you how you can use Ceph RBD for persistent storage on Kubernetes.

Parts of the Kubernetes series

Environment

# kubernetes cluster
192.168.1.41  kubernetes01 # master node
192.168.1.42  kubernetes02 # frontend node
192.168.1.43  kubernetes03 # worker node
192.168.1.44  kubernetes04 # worker node
192.168.1.45  kubernetes05 # worker node

# ceph cluster
192.168.1.31    ceph01
192.168.1.32    ceph02
192.168.1.33    ceph03

Prerequisites

The RBD volume provisioner needs the Ceph admin key to provision storage. To get the admin key from the Ceph cluster, use this command:

sudo ceph --cluster ceph auth get-key client.admin

nano ceph-admin-secret.yaml
apiVersion: v1
data:
  key: QVFBOFF2SlZheUJQRVJBQWgvS2cwT1laQUhPQno3akZwekxxdGc9PQ==
kind: Secret
metadata:
  name: ceph-admin-secret
  namespace: kube-system
type: kubernetes.io/rbd
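The `key` field in the Secret is the base64 encoding of the key returned by `ceph auth get-key` above. A quick sketch of producing that value in the shell (the key below is the example value from this post, not a real credential):

```shell
# Stand-in admin key; replace with the output of
# `ceph auth get-key client.admin` from your own cluster.
KEY='AQA8QvJVayBPERAAh/Kg0OYZAHOBz7jFpzLqtg=='

# base64-encode it for the Secret's `key` field.
# -n is important: a trailing newline would corrupt the encoded key.
echo -n "$KEY" | base64
```

Alternatively, `kubectl` can base64-encode for you if you create the Secret with `--from-literal` instead of writing the YAML by hand.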

I will also create a separate Ceph pool for Kubernetes volumes, together with a dedicated client key that is restricted to that pool:

sudo ceph --cluster ceph osd pool create k8s 1024 1024
sudo ceph --cluster ceph auth get-or-create client.k8s mon 'allow r' osd 'allow rwx pool=k8s'
sudo ceph --cluster ceph auth get-key client.k8s

nano ceph-secret-k8s.yaml
apiVersion: v1
data:
  key: QVFBOFF2SlZheUJQRVJBQWgvS2ctS2htOFNSZnRvclJPRk1jdXc9PQ==
kind: Secret
metadata:
  name: ceph-secret-k8s
  namespace: kube-system
type: kubernetes.io/rbd

# on all kubernetes nodes
wget http://download.proxmox.com/debian/proxmox-ve-release-5.x.gpg \
-O /etc/apt/trusted.gpg.d/proxmox-ve-release-5.x.gpg
chmod +r /etc/apt/trusted.gpg.d/proxmox-ve-release-5.x.gpg

echo deb http://download.proxmox.com/debian/ceph-luminous $(lsb_release -sc) main \
> /etc/apt/sources.list.d/ceph.list

apt-get update
apt-get install ceph-common -y
cat <<EOF > rbd-provisioner.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
#    resourceNames: ["kube-dns"]
    verbs: ["list", "get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: rbd-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: rbd-provisioner
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rbd-provisioner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rbd-provisioner
subjects:
- kind: ServiceAccount
  name: rbd-provisioner
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-provisioner
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: rbd-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      containers:
      - name: rbd-provisioner
        image: "quay.io/external_storage/rbd-provisioner:v1.0.0-k8s1.10"
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/rbd
      serviceAccount: rbd-provisioner
EOF

kubectl create -n kube-system -f rbd-provisioner.yaml
kubectl get pods -l app=rbd-provisioner -n kube-system

Please check that the quay.io/external_storage/rbd-provisioner image tag you use ships the same Ceph version as your Ceph cluster:

docker pull quay.io/external_storage/rbd-provisioner:latest
docker history quay.io/external_storage/rbd-provisioner:latest | grep CEPH_VERSION

# Proxmox Ceph runs Luminous, so I will use v1.0.0-k8s1.10
docker history quay.io/external_storage/rbd-provisioner:v1.0.0-k8s1.10 | grep CEPH_VERSION
<missing>           13 months ago       /bin/sh -c #(nop)  ENV CEPH_VERSION=luminous    0B
# on one kubernetes master node
nano k8s-storage.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: k8s
parameters:
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: kube-system
  imageFeatures: layering
  imageFormat: "2"
  monitors: 192.168.1.31.xip.io:6789,192.168.1.32.xip.io:6789,192.168.1.33.xip.io:6789
  pool: k8s
  userId: k8s
  userSecretName: ceph-secret-k8s
provisioner: ceph.com/rbd
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true


kubectl apply -f ceph-admin-secret.yaml
kubectl apply -f ceph-secret-k8s.yaml
kubectl apply -f k8s-storage.yaml
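To verify dynamic provisioning end to end, you can create a small test claim against the new class (a minimal sketch; the claim name is illustrative):

```yaml
# test-pvc.yaml -- illustrative claim against the "k8s" StorageClass
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-test-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: k8s
  resources:
    requests:
      storage: 1Gi
```

After `kubectl apply -f test-pvc.yaml`, `kubectl get pvc rbd-test-claim` should move to `Bound` once the provisioner has created the backing RBD image; `rbd ls k8s` on a Ceph node should list the newly created image.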