Kubernetes Resource Quotas (ResourceQuota)
ResourceQuota
As you might already know, you can specify CPU and memory requests and limits on your pods, and because the scheduler knows what is already running on each node, it can place pods onto nodes where their requests can be satisfied. When every container in a pod has requests equal to limits for both CPU and memory, the pod's QoS (Quality of Service) class is Guaranteed; when requests are set but the limits are higher (or only some values are set), the class is Burstable. In other words, your pod gets at least the resources it asks for, provided there is room. However, limiting the total requests per namespace is useful when many namespaces are shared by many projects or people, so that each namespace gets its fair share. This is where ResourceQuota helps, and it can be defined as a simple YAML file as follows:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: my-cheap-namespace
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "16"
    limits.memory: 16Gi
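As a side note on the QoS classes mentioned above, here is a minimal sketch, assuming an nginx image and arbitrary resource values: a pod whose requests equal its limits for every container lands in the Guaranteed class, while a pod whose limits exceed its requests is Burstable.

# Guaranteed: requests equal limits for every container (values are illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: qos-guaranteed-demo      # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: 500m
        memory: 256Mi
      limits:
        cpu: 500m
        memory: 256Mi
---
# Burstable: requests are set, but the limits are higher
apiVersion: v1
kind: Pod
metadata:
  name: qos-burstable-demo       # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: 250m
        memory: 128Mi
      limits:
        cpu: 1
        memory: 512Mi

You can confirm the assigned class with kubectl get pod qos-guaranteed-demo -o jsonpath='{.status.qosClass}'.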
You can also limit the number of Kubernetes objects a namespace can create. For example, you can cap the total number of Pods to avoid scheduling overhead, the number of LoadBalancer Services (each of which may map to a real, billed load balancer at a cloud provider, such as an AWS Network Load Balancer), or the number of PersistentVolumeClaims, as those can be costly as well. An example configuration is as follows:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-quota-demo
spec:
  hard:
    persistentvolumeclaims: "4"
    services.loadbalancers: "3"
    services.nodeports: "1"
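To see how much of such a quota has been consumed, describing it in its namespace shows the used and hard values side by side (the namespace name below is a placeholder):

kubectl describe quota object-quota-demo -n <your-namespace>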
Resource Quotas (ResourceQuota)
When multiple teams and users share a K8s cluster, resource usage becomes uneven; by default it is first come, first served. ResourceQuota solves this by capping the total amount of resources a namespace can use.
Workflow: the cluster administrator creates one or more ResourceQuota objects in each namespace to define its total resource budget. Kubernetes tracks the namespace's resource usage, and any request that would exceed the defined quota is rejected.
You can also control the total storage requested by PVCs on a per-StorageClass basis.
Compute Resource Quotas
limits.cpu / limits.memory — the sum of CPU/memory limits configured on all Pods (in non-terminal states) must not exceed this value
requests.cpu / requests.memory — the sum of CPU/memory requests configured on all Pods (in non-terminal states) must not exceed this value
One caps requested resources, the other caps upper-limit resources.
Suppose there are several development teams and you want to isolate them with resource limits; ResourceQuota is namespace-scoped. Assume team A may use 2 CPU cores and 4Gi of memory in the cluster. (Keep in mind that a ResourceQuota sets a quota for the namespace, not for an individual pod: it caps the total consumed by all running pods.)
[root@master ~]# kubectl create ns dev1
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
  namespace: dev1
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 4Gi
    limits.cpu: "2.5"
    limits.memory: 4.5Gi
[root@master test]# kubectl get quota -n dev1
NAME AGE REQUEST LIMIT
compute-resources 27s requests.cpu: 0/2, requests.memory: 0/4Gi limits.cpu: 0/2500m, limits.memory: 0/4608Mi
[root@master resourcequota]# cat podcpumem.yaml
apiVersion: v1
kind: Pod
metadata:
  name: compute-resources
  namespace: dev1
spec:
  containers:
  - image: nginx
    name: nginx
    resources:
      requests:
        cpu: 1
        memory: 1Gi
      limits:
        cpu: 1
        memory: 1Gi
[root@master resourcequota]# kubectl apply -f podcpumem.yaml
pod/compute-resources created
# You can see the resources already in use; k8s tracks the namespace's resource consumption for you
[root@master resourcequota]# kubectl get quota -n dev1
NAME AGE REQUEST LIMIT
compute-resources 8m49s requests.cpu: 1/2, requests.memory: 1Gi/4Gi limits.cpu: 1/2500m, limits.memory: 1Gi/4608Mi
[root@master resourcequota]# cat podcpumem1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: compute-resources1
  namespace: dev1
spec:
  containers:
  - image: nginx
    name: nginx
    resources:
      requests:
        cpu: 1.5
        memory: 1Gi
      limits:
        cpu: 2
        memory: 1Gi
[root@master resourcequota]# kubectl apply -f podcpumem1.yaml
Error from server (Forbidden): error when creating "podcpumem1.yaml": pods "compute-resources1" is forbidden: exceeded quota: compute-resources, requested: limits.cpu=2,requests.cpu=1500m, used: limits.cpu=1,requests.cpu=1, limited: limits.cpu=2500m,requests.cpu=2
requested: limits.cpu=2,requests.cpu=1500m — what this pod asked for in its spec
used: limits.cpu=1,requests.cpu=1 — what the namespace is already using
limited: limits.cpu=2500m,requests.cpu=2 — the quota limits
Note that the quota is enforced in both directions: requests and limits are each checked, and exceeding either one causes pod creation to fail; the error message reports whichever values exceeded the quota.
If a pod has no resource constraints at all, that is, no resources field, while a compute ResourceQuota is configured in the namespace, creating the pod fails: with no resources specified, the pod could by default consume unlimited resources, so the quota check rejects it. Supplying namespace defaults with a LimitRange, as sketched below, is the usual way around this.
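A minimal LimitRange sketch, assuming the dev1 namespace from above and illustrative default values: containers that omit the resources block receive these defaults, which the quota can then account for.

apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits           # hypothetical name
  namespace: dev1
spec:
  limits:
  - type: Container
    defaultRequest:              # injected when a container sets no requests
      cpu: 100m
      memory: 128Mi
    default:                     # injected when a container sets no limits
      cpu: 200m
      memory: 256Mi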
Storage Resource Quotas
<storage-class-name>.storageclass.storage.k8s.io/requests.storage
The sum of storage capacity requested by all PVCs in the namespace that use <storage-class-name> must not exceed this value.
This limits the total amount of storage that PVCs may request, and it can be scoped to a specific StorageClass.
No matter how many PVCs there are, once the total requested capacity would exceed 10G, further PVC creation is refused:
[root@master volume]# cat storage-resource.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-resources
  namespace: dev1
spec:
  hard:
    requests.storage: "10G"
[root@master volume]# kubectl apply -f storage-resource.yaml
resourcequota/storage-resources created
[root@master volume]# kubectl get quota -n dev1
NAME AGE REQUEST LIMIT
storage-resources 12s requests.storage: 0/10G
[root@master volume]# cat pvc-nfs-dy.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
  namespace: dev1
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 11Gi
  storageClassName: managed-nfs-storage
[root@master volume]# kubectl apply -f pvc-nfs-dy.yaml
Error from server (Forbidden): error when creating "pvc-nfs-dy.yaml": persistentvolumeclaims "nfs-pvc" is forbidden: exceeded quota: storage-resources, requested: requests.storage=11Gi, used: requests.storage=0, limited: requests.storage=10G
hard:
  requests.storage: "10G"
  managed-nfs-storage.storageclass.storage.k8s.io/requests.storage: "5G"
The first entry is counted across all PVCs in the namespace, while the second counts only the capacity requested through that StorageClass, so storage can be quota-limited along multiple dimensions; a complete example combining both follows below.
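Combining both keys in a single ResourceQuota might look like the following sketch; the object name and the 5G per-class cap are illustrative, and managed-nfs-storage is the StorageClass used in the PVC example above.

apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-resources         # hypothetical name
  namespace: dev1
spec:
  hard:
    requests.storage: "10G"                                                   # total capacity requested by all PVCs
    managed-nfs-storage.storageclass.storage.k8s.io/requests.storage: "5G"    # capacity requested via this StorageClass only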
Object Count Quotas
You can limit how many Pods and how many Services can be created:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts
  namespace: test
spec:
  hard:
    pods: "10"
    count/deployments.apps: "3"
    count/services: "3"
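The count/<resource>.<group> syntax works for most namespaced resource types, so entries like the following (illustrative values) could be added to the same hard section:

hard:
  count/configmaps: "20"
  count/secrets: "20"
  count/replicasets.apps: "10"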
Final notes
Limiting the storage volume resources a namespace may use
Use case:
Storage is a finite resource that needs to be managed, to prevent overuse from exhausting it.
Steps:
Note 1: Resource Quotas (quota for short) are a policy that allocates resources to a namespace and limits its resource usage. K8S is a multi-tenant architecture; when multiple users or teams share a K8S cluster, the administrator uses quotas to prevent users from grabbing resources (on a per-namespace basis) and to define the resource allocation policy.
Note 2: quotas apply at the Namespace level. By default a namespace has no Resource Quota, so one must be created explicitly; a namespace may also hold more than one ResourceQuota object, in which case every quota is enforced and the most restrictive value effectively applies.
Resource Quotas example:
Create the ResourceQuota resource, adjusting it to your actual situation:
kubectl create -f quota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test01
  namespace: test01
spec:
  hard:
    persistentvolumeclaims: "10"
    requests.storage: 110Gi
1. List the Resource Quotas
root@i-o8fh83hu:~# kubectl get resourcequotas -A
NAMESPACE NAME CREATED AT
test01 test01 2021-06-03T06:21:02Z
2. Modify the Resource Quota
The test01 namespace is sized at 110Gi of storage; the field to change is spec.hard.requests.storage: 110Gi
kubectl -n test01 edit resourcequotas test01 -oyaml
3. Verify that the change took effect
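One way to check, for example, is to describe the quota again and confirm that the hard value reflects the new size:

kubectl -n test01 describe resourcequota test01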