Enterprise Operations

Kubernetes monitoring

      • k8s container resource limits
          • Memory limits
          • CPU limits
      • k8s container resource monitoring
          • metrics-server deployment
          • Dashboard deployment
      • HPA
      • Helm
        • Deployment
          • Adding a third-party Chart repository to Helm
          • Deploying nfs-client-provisioner with Helm
          • Deploying metrics-server monitoring with Helm


k8s container resource limits

Kubernetes uses two kinds of constraints, request and limit, to allocate resources.
request (resource demand): a node must be able to satisfy a Pod's requests before the Pod can be scheduled onto it.
limit (resource cap): while a Pod runs, its memory usage may grow; the limit is the maximum amount of the resource it is allowed to consume.

Resource types:
CPU is measured in cores, memory in bytes.
A container requesting 0.5 CPU is asking for half of one CPU. You can also use the suffix m, meaning one thousandth of a core: 100m CPU, 100 millicores, and 0.1 CPU all mean the same amount.
Memory units:
K, M, G, T, P, E        # decimal units, based on powers of 1000.
Ki, Mi, Gi, Ti, Pi, Ei  # binary units, based on powers of 1024.
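As a sketch of how these units appear in practice, a container's resources stanza might combine both constraint types and both unit styles like this (the values are illustrative only, not taken from this walkthrough):

```yaml
resources:
  requests:
    cpu: 100m       # 100 millicores = 0.1 CPU
    memory: 128Mi   # binary unit: 128 * 1024^2 bytes
  limits:
    cpu: "0.5"      # half a core, equivalent to 500m
    memory: 200M    # decimal unit: 200 * 1000^2 bytes
```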

Required image: stress

Memory limits

Edit the resource manifest. The stress container tries to allocate 200M while the memory limit is 100Mi, so the pod cannot keep running.

[root@server2 ~]# mkdir limit
[root@server2 ~]# cd limit/
[root@server2 limit]# vim pod.yaml
[root@server2 limit]# cat pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo
spec:
  containers:
  - name: memory-demo
    image: stress
    args:
    - --vm
    - "1"
    - --vm-bytes
    - 200M
    resources:
      requests:
        memory: 50Mi
      limits:
        memory: 100Mi
[root@server2 limit]# kubectl apply -f pod.yaml 
pod/memory-demo created
[root@server2 limit]# kubectl get pod
NAME          READY   STATUS              RESTARTS   AGE
memory-demo   0/1     ContainerCreating   0          8s
mypod         1/1     Running             0          5h18m
[root@server2 limit]# kubectl get pod
NAME          READY   STATUS             RESTARTS   AGE
memory-demo   0/1     CrashLoopBackOff   2          52s
mypod         1/1     Terminating        0          5h19m
[root@server2 limit]# kubectl get pod
NAME          READY   STATUS             RESTARTS   AGE
memory-demo   0/1     CrashLoopBackOff   3          81s
mypod         0/1     Terminating        0          5h19m

After raising limits to 300Mi, the pod can run.

[root@server2 limit]# vim pod.yaml
[root@server2 limit]# cat pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo
spec:
  containers:
  - name: memory-demo
    image: stress
    args:
    - --vm
    - "1"
    - --vm-bytes
    - 200M
    resources:
      requests:
        memory: 50Mi
      limits:
        memory: 300Mi
[root@server2 limit]# kubectl delete -f pod.yaml 
pod "memory-demo" deleted
[root@server2 limit]# kubectl apply -f pod.yaml 
pod/memory-demo created
[root@server2 limit]# kubectl get pod
NAME          READY   STATUS    RESTARTS   AGE
memory-demo   1/1     Running   0          9s
CPU limits

The pod stays Pending because its CPU request (5 cores) exceeds what any node can provide.

[root@server2 limit]# kubectl delete -f pod.yaml 
pod "memory-demo" deleted
[root@server2 limit]# vim pod1.yaml
[root@server2 limit]# cat pod1.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo
spec:
  containers:
  - name: cpu-demo
    image: stress
    resources:
      limits:
        cpu: "10"
      requests:
        cpu: "5"
    args:
    - -c
    - "2"
[root@server2 limit]# kubectl apply -f pod1.yaml 
pod/cpu-demo created
[root@server2 limit]# kubectl get pod
NAME       READY   STATUS    RESTARTS   AGE
cpu-demo   0/1     Pending   0          9s
[root@server2 limit]# kubectl describe pod 
Name:         cpu-demo
Namespace:    default
Priority:     0
Node:         <none>
Labels:       <none>
Annotations:  <none>
Status:       Pending
IP:           
IPs:          <none>
Containers:
  cpu-demo:
    Image:      stress
    Port:       <none>
    Host Port:  <none>
    Args:
      -c
      2
    Limits:
      cpu:  10
    Requests:
      cpu:        5
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kr2rc (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kube-api-access-kr2rc:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  25s (x3 over 108s)  default-scheduler  0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.

After the change, it is Running.

[root@server2 limit]# vim pod1.yaml 
[root@server2 limit]# cat pod1.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo
spec:
  containers:
  - name: cpu-demo
    image: stress
    resources:
      limits:
        cpu: "2"
      requests:
        cpu: "0.1"
    args:
    - -c
    - "2"
[root@server2 limit]# kubectl delete -f pod1.yaml 
pod "cpu-demo" deleted
[root@server2 limit]# kubectl apply -f pod1.yaml 
pod/cpu-demo created
[root@server2 limit]# kubectl get pod
NAME       READY   STATUS    RESTARTS   AGE
cpu-demo   1/1     Running   0          12s

Running successfully.

Setting resource limits for a namespace

[root@server2 limit]# cat pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo
spec:
  containers:
  - name: memory-demo
    image: nginx
#    resources:
#      requests:
#        memory: 50Mi
#      limits:
#        memory: 300Mi
[root@server2 limit]# vim limitrange.yaml
[root@server2 limit]# cat limitrange.yaml 
apiVersion: v1
kind: LimitRange
metadata:
  name: limitrange-memory
spec:
  limits:
  - default:
      cpu: 0.5
      memory: 512Mi
    defaultRequest:
      cpu: 0.1
      memory: 256Mi
    max:
      cpu: 1
      memory: 1Gi
    min:
      cpu: 0.1
      memory: 100Mi
    type: Container
[root@server2 limit]# kubectl apply -f limitrange.yaml 
limitrange/limitrange-memory created
[root@server2 limit]# kubectl describe limitranges 
Name:       limitrange-memory
Namespace:  default
Type        Resource  Min    Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---    ---  ---------------  -------------  -----------------------
Container   cpu       100m   1    100m             500m           -
Container   memory    100Mi  1Gi  256Mi            512Mi          -
[root@server2 limit]# kubectl delete -f pod1.yaml 
pod "cpu-demo" deleted
[root@server2 limit]# kubectl apply -f pod.yaml 
pod/memory-demo created
[root@server2 limit]# kubectl get pod
NAME          READY   STATUS    RESTARTS   AGE
memory-demo   1/1     Running   0          4s

Running successfully: even though the pod spec declares no resources, the LimitRange injects the default request and limit for it.

[root@server2 limit]# vim limitrange.yaml 
[root@server2 limit]# cat limitrange.yaml 
apiVersion: v1
kind: LimitRange
metadata:
  name: limitrange-memory
spec:
  limits:
  - default:
      cpu: 0.5
      memory: 512Mi
    defaultRequest:
      cpu: 0.1
      memory: 256Mi
    max:
      cpu: 1
      memory: 1Gi
    min:
      cpu: 0.1
      memory: 100Mi
    type: Container
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-demo
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
[root@server2 limit]# kubectl apply -f limitrange.yaml 
limitrange/limitrange-memory configured
resourcequota/mem-cpu-demo created

If the LimitRange is deleted:

[root@server2 limit]# kubectl delete limitranges limitrange-memory
limitrange "limitrange-memory" deleted
[root@server2 limit]# kubectl get limitranges 
No resources found in default namespace.
[root@server2 limit]# kubectl delete -f pod.yaml 
pod "memory-demo" deleted
[root@server2 limit]# kubectl apply -f pod.yaml 
Error from server (Forbidden): error when creating "pod.yaml": pods "memory-demo" is forbidden: failed quota: mem-cpu-demo: must specify limits.,requests.

Error: with the LimitRange gone, no defaults are injected, but the ResourceQuota still requires every pod to declare memory and CPU requests and limits.
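With the ResourceQuota still active, the pod would only be admitted if its manifest declared both requests and limits itself, along these lines (a sketch; the values are illustrative and just need to fit inside the quota):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo
spec:
  containers:
  - name: memory-demo
    image: nginx
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 500m
        memory: 256Mi
```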

k8s container resource monitoring

metrics-server deployment

Metrics-Server is the aggregator of the cluster's core monitoring data, replacing the earlier Heapster.

Container metrics come mainly from the cAdvisor service built into the kubelet; with Metrics-Server in place, users can access this monitoring data through the standard Kubernetes API.
The Metrics API only serves current measurements and does not store history.
The Metrics API URI is /apis/metrics.k8s.io/, maintained under k8s.io/metrics.
metrics-server must be deployed before this API can be used; it collects its data by calling the kubelet Summary API.

Required image: metrics-server

[root@server2 ~]# mkdir metrics-server
[root@server2 ~]# cd metrics-server/
[root@server2 metrics-server]# wget .yaml

Edit components.yaml:
133         - --secure-port=4443
137         image: metrics-server:v0.5.0
148         - containerPort: 4443

Enable TLS Bootstrap certificate signing.
On server2, server3 and server4, append serverTLSBootstrap: true to the end of /var/lib/kubelet/config.yaml.

[root@server2 metrics-server]# vim /var/lib/kubelet/config.yaml 
[root@server3 ~]# vim /var/lib/kubelet/config.yaml 
[root@server4 ~]# vim /var/lib/kubelet/config.yaml 
serverTLSBootstrap: true
[root@server2 metrics-server]# systemctl restart kubelet
[root@server3 ~]# systemctl restart kubelet
[root@server4 ~]# systemctl restart kubelet
[root@server2 metrics-server]# kubectl get csr
NAME        AGE   SIGNERNAME                      REQUESTOR             CONDITION
csr-jtps7   48s   kubernetes.io/kubelet-serving   system:node:server3   Pending
csr-pp7hd   45s   kubernetes.io/kubelet-serving   system:node:server4   Pending
csr-z82vb   75s   kubernetes.io/kubelet-serving   system:node:server2   Pending
[root@server2 metrics-server]# kubectl certificate approve csr-jtps7
certificatesigningrequest.certificates.k8s.io/csr-jtps7 approved
[root@server2 metrics-server]# kubectl certificate approve csr-pp7hd
certificatesigningrequest.certificates.k8s.io/csr-pp7hd approved
[root@server2 metrics-server]# kubectl certificate approve csr-z82vb
certificatesigningrequest.certificates.k8s.io/csr-z82vb approved

Because there is no internal DNS server, metrics-server cannot resolve the node names. A direct fix is to edit the coredns ConfigMap and add each node's hostname to a hosts block:

[root@server2 metrics-server]# kubectl -n kube-system edit cm coredns
 14         hosts {
 15            172.25.12.2 server2
 16            172.25.12.3 server3
 17            172.25.12.4 server4
 18            fallthrough
 19        }
configmap/coredns edited

Apply the yaml:

[root@server2 metrics-server]# kubectl apply -f components.yaml 
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
[root@server2 metrics-server]# kubectl -n kube-system get pod
NAME                              READY   STATUS    RESTARTS   AGE
metrics-server-86d6b8bbcc-75g7n   1/1     Running   0          116s
[root@server2 metrics-server]# kubectl -n kube-system describe svc metrics-server
Name:              metrics-server
Namespace:         kube-system
Labels:            k8s-app=metrics-server
Annotations:       <none>
Selector:          k8s-app=metrics-server
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.106.131.25
IPs:               10.106.131.25
Port:              https  443/TCP
TargetPort:        https/TCP
Endpoints:         10.244.5.84:4443
Session Affinity:  None
Events:            <none>

Deployment succeeded.

[root@server2 metrics-server]# kubectl top node
W0803 21:00:00.624096   16123 :119] Using json format to get metrics. Next release will switch to protocol-buffers, switch early by passing --use-protocol-buffers flag
NAME      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
server2   193m         9%     1143Mi          60%       
server3   57m          5%     539Mi           60%       
server4   44m          4%     393Mi           44%       
Dashboard deployment

Image preparation

[root@server1 ~

Published: 2024-02-02 18:46:03
Link: https://www.4u4v.net/it/170687076145725.html