Prometheus itself is beyond doubt as a monitoring component, but after running it for a while you hit plenty of performance problems: memory pressure, large-scale scraping, large-scale storage, and so on. This article explains how we built large-scale metrics collection for Kubernetes cluster monitoring on top of cloud-native Prometheus.
Architecture diagram
The diagram above shows our current monitoring platform architecture. As it shows, the platform combines several mature open source components to cover data collection, metrics, and visualization for our clusters.
We currently monitor multiple Kubernetes clusters serving different functions and business lines, covering business, infrastructure, and alerting data.
Monitoring the Kubernetes clusters
We use one of the two common monitoring architectures:
Prometheus Operator
Standalone Prometheus configuration (the architecture we chose)
Tip: Prometheus Operator is indeed easy to deploy, and its ServiceMonitor objects save a lot of effort. For our many private clusters, however, its maintenance cost is a bit high. We chose the second option mainly to skip maintaining ServiceMonitor objects and to lean directly on service discovery and service registration.
Data scraping
We adjusted the scraping setup to deal with two problems: the heavy load that many nodes and metrics put on the apiserver, and Prometheus running out of memory (OOM) when scraping at scale.
Use Kubernetes only for service discovery; Prometheus scrapes the targets directly instead of going through the apiserver proxy, which lowers the load on the apiserver
Use hashmod sharding to split the scrape load across several Prometheus instances and relieve memory pressure (a minimal sketch follows)
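Before the full manifests, here is a minimal sketch of the hashmod idea, using the same 10-shard setup and the ID_NUM placeholder that the deployment below substitutes per instance: every Prometheus shard hashes each target into one of 10 buckets and keeps only the targets that land in its own bucket.

relabel_configs:
- source_labels: [__address__]
  modulus: 10              # total number of Prometheus shards
  target_label: __tmp_hash
  action: hashmod          # __tmp_hash = hash(__address__) % 10
- source_labels: [__tmp_hash]
  regex: ID_NUM            # replaced with this instance's shard number (0-9) at startup
  action: keep             # drop every target that belongs to another shard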
RBAC changes:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: prometheus
  namespace: monitoring
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy
  - nodes/metrics          # added so the path can be scraped from outside the cluster
  - nodes/metrics/cadvisor # added so the path can be scraped from outside the cluster
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: monitoring
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: prometheus
  namespace: monitoring
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: monitoring
The ClusterRole must include the additional permissions for the nodes' /metrics and /metrics/cadvisor paths shown above.
The full scrape configuration example below covers:
Thanos data written to object storage, shown here with an Alibaba Cloud OSS bucket
node_exporter targets registered and discovered through Consul, which we use for everything outside Kubernetes (a minimal consul_sd_configs sketch follows this list)
Custom business metrics discovered and scraped through Kubernetes service discovery
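As a minimal sketch of the Consul side of this, assuming a Consul server reachable at consul_server:8500 and node_exporter instances registered by the script below, a scrape job only needs a consul_sd_configs block; the production job with hashmod sharding and tag relabeling appears in the full configuration further down. The services filter is our addition for illustration; omit it to discover every registered service.

scrape_configs:
- job_name: 'node_exporter'
  consul_sd_configs:
  - server: 'consul_server:8500'   # Consul HTTP API address
    services: ['node_exporter']    # optional filter (illustrative); omit to match all services
  relabel_configs:
  - source_labels: [__meta_consul_tags]
    regex: .*test.*
    action: drop                   # skip hosts registered with a test tag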
Host naming convention
datacenter-businessline-role-sequence (for example: bja-athena-etcd-001)
Consul auto-registration script:
#!/bin/bash
#ip=$(ip addr show eth0 | grep inet | awk '{ print $2; }' | sed 's/\/.*$//')
# Pick the first private address on the expected subnets
ip=$(ip addr | egrep -o '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' | egrep "^192\.168|^172\.21|^10\.101|^10\.100" | egrep -v "\.255$" | awk -F. '{print $1"."$2"."$3"."$4}' | head -n 1)
ahost=$(echo $HOSTNAME)
# Hostname convention: idc-app-group-seq, e.g. bja-athena-etcd-001
idc=$(echo $ahost | awk -F "-" '{print $1}')
app=$(echo $ahost | awk -F "-" '{print $2}')
group=$(echo $ahost | awk -F "-" '{print $3}')
if [ "$app" != "test" ]
then
  echo "success"
  curl -X PUT -d "{\"ID\": \"${ahost}_${ip}_node\", \"Name\": \"node_exporter\", \"Address\": \"${ip}\", \"tags\": [\"idc=${idc}\",\"group=${group}\",\"app=${app}\",\"server=${ahost}\"], \"Port\": 9100, \"checks\": [{\"tcp\": \"${ip}:9100\", \"interval\": \"60s\"}]}" http://consul_server:8500/v1/agent/service/register
fi
Full configuration file example
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: monitoring
data:
  bucket.yaml: |
    type: S3
    config:
      bucket: "gcl-download"
      endpoint: "gcl-download.oss-cn-beijing.aliyuncs.com"
      access_key: "xxxxxxxxxxxxxx"
      insecure: false
      signature_version2: false
      secret_key: "xxxxxxxxxxxxxxxxxx"
      http_config:
        idle_conn_timeout: 90s  # assumed default; the original value was not preserved
  prometheus.yml: |
    global:
      scrape_interval: 15s
      evaluation_interval: 15s
      external_labels:
        monitor: 'k8s-sh-prod'
        service: 'k8s-all'
        ID: 'ID_NUM'
    remote_write:
    - url: "http://vmstorage:8400/insert/0/prometheus/"
    remote_read:
    - url: "http://vmstorage:8401/select/0/prometheus"
    scrape_configs:
    - job_name: 'kubernetes-apiservers'
      kubernetes_sd_configs:
      - role: endpoints
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https
    - job_name: 'kubernetes-cadvisor'
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        #ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        insecure_skip_verify: true
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      #bearer_token: monitoring
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - source_labels: [__meta_kubernetes_node_address_InternalIP]
        regex: (.+)
        target_label: __address__
        replacement: ${1}:10250
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /metrics/cadvisor
      - source_labels: [__meta_kubernetes_node_name]
        modulus: 10
        target_label: __tmp_hash
        action: hashmod
      - source_labels: [__tmp_hash]
        regex: ID_NUM
        action: keep
      metric_relabel_configs:
      - source_labels: [container]
        regex: (.+)
        target_label: container_name
        replacement: $1
        action: replace
      - source_labels: [pod]
        regex: (.+)
        target_label: pod_name
        replacement: $1
        action: replace
    - job_name: 'kubernetes-nodes'
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        #ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        insecure_skip_verify: true
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      #bearer_token: monitoring
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - source_labels: [__meta_kubernetes_node_address_InternalIP]
        regex: (.+)
        target_label: __address__
        replacement: ${1}:10250
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /metrics
      - source_labels: [__meta_kubernetes_node_name]
        modulus: 10
        target_label: __tmp_hash
        action: hashmod
      - source_labels: [__tmp_hash]
        regex: ID_NUM
        action: keep
      metric_relabel_configs:
      - source_labels: [container]
        regex: (.+)
        target_label: container_name
        replacement: $1
        action: replace
      - source_labels: [pod]
        regex: (.+)
        target_label: pod_name
        replacement: $1
        action: replace
    - job_name: 'kubernetes-service-endpoints'
      kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names:
          - monitoring
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name
    - job_name: 'kubernetes-pods'
      kubernetes_sd_configs:
      - role: pod
        namespaces:
          names:
          - default
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: kubernetes_pod_name
    - job_name: 'ingress-nginx-endpoints'
      kubernetes_sd_configs:
      - role: pod
        namespaces:
          names:
          - nginx-ingress
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
    - job_name: 'node_exporter'
      consul_sd_configs:
      - server: 'consul_server:8500'
      relabel_configs:
      - source_labels: [__address__]
        modulus: 10
        target_label: __tmp_hash
        action: hashmod
      - source_labels: [__tmp_hash]
        regex: ID_NUM
        action: keep
      - source_labels: [__tmp_hash]
        regex: '(.*)'
        replacement: '${1}'
        target_label: hash_num
      - source_labels: [__meta_consul_tags]
        regex: .*test.*
        action: drop
      # turn the first eight key=value Consul tags into labels
      - source_labels: [__meta_consul_tags]
        regex: ',(?:[^,]+,){0}([^=]+)=([^,]+),.*'
        replacement: '${2}'
        target_label: '${1}'
      - source_labels: [__meta_consul_tags]
        regex: ',(?:[^,]+,){1}([^=]+)=([^,]+),.*'
        replacement: '${2}'
        target_label: '${1}'
      - source_labels: [__meta_consul_tags]
        regex: ',(?:[^,]+,){2}([^=]+)=([^,]+),.*'
        replacement: '${2}'
        target_label: '${1}'
      - source_labels: [__meta_consul_tags]
        regex: ',(?:[^,]+,){3}([^=]+)=([^,]+),.*'
        replacement: '${2}'
        target_label: '${1}'
      - source_labels: [__meta_consul_tags]
        regex: ',(?:[^,]+,){4}([^=]+)=([^,]+),.*'
        replacement: '${2}'
        target_label: '${1}'
      - source_labels: [__meta_consul_tags]
        regex: ',(?:[^,]+,){5}([^=]+)=([^,]+),.*'
        replacement: '${2}'
        target_label: '${1}'
      - source_labels: [__meta_consul_tags]
        regex: ',(?:[^,]+,){6}([^=]+)=([^,]+),.*'
        replacement: '${2}'
        target_label: '${1}'
      - source_labels: [__meta_consul_tags]
        regex: ',(?:[^,]+,){7}([^=]+)=([^,]+),.*'
        replacement: '${2}'
        target_label: '${1}'
    - job_name: 'custom-business-monitoring'
      proxy_url: http://127.0.0.1:8888  # depends on the business
      scrape_interval: 5s
      metrics_path: '/'                 # path provided by the business
      params:                           # only if the business endpoint needs parameters
        method: ['get']
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_name_label]
        action: keep
        regex: monitor  # business-defined label
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_pod_name]
        action: keep
        regex: (.*)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
Custom business scrape annotations (can be injected by CI/CD)
template:
  metadata:
    annotations:
      prometheus.io/port: "port"      # business port
      prometheus.io/scrape: "true"
      prometheus.name/label: monitor  # custom label
Hashmod configuration
1. Build the hashmod shard ID into the official image
Dockerfile:
FROM prometheus/prometheus:2.20.0
MAINTAINER gecailong
COPY ./entrypoint.sh /bin
ENTRYPOINT ["/bin/entrypoint.sh"]
entrypoint.sh:
#!/bin/sh
# The StatefulSet pod ordinal (the suffix of the pod name) becomes this shard's ID
ID=${POD_NAME##*-}
# The ConfigMap mount is read-only, so copy the config and substitute ID_NUM in the copy
cp /etc/prometheus/prometheus.yml /prometheus.yml
sed -i "s/ID_NUM/$ID/g" /prometheus.yml
/bin/prometheus --config.file=/prometheus.yml \
  --query.max-concurrency=20 \
  --storage.tsdb.path=/prometheus \
  --storage.tsdb.max-block-duration=2h \
  --storage.tsdb.min-block-duration=2h \
  --storage.tsdb.retention=2h \
  --web.listen-address=:9090 \
  --web.enable-lifecycle \
  --web.enable-admin-api
ID_NUM is the placeholder the configuration below depends on.
2. Prometheus deployment
Prometheus configuration file:
prometheus.yml: |
  external_labels:
    monitor: 'k8s-sh-prod'
    service: 'k8s-all'
    ID: 'ID_NUM'
  ...
This ID label lets us tell the instances apart at query time, and it is also the value each shard matches against in the hashmod relabeling.
Deployment manifest:
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: prometheus
  name: prometheus-sts
  namespace: monitoring
spec:
  serviceName: "prometheus"
  replicas: 10  # total number of hashmod shards
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
      - image: gecailong/prometheus-hash:0.0.1
        name: prometheus
        securityContext:
          runAsUser: 0
        command:
        - "/bin/entrypoint.sh"
        env:
        - name: POD_NAME  # the StatefulSet pod name carries the ordinal used as the shard ID
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        ports:
        - name: http
          containerPort: 9090
          protocol: TCP
        volumeMounts:
        - mountPath: "/etc/prometheus"
          name: config-volume
        - mountPath: "/prometheus"
          name: data
        resources:
          requests:
            cpu: 500m
            memory: 1000Mi
          limits:
            memory: 2000Mi
      - image: gecailong/prometheus-thanos:v0.17.1
        name: sidecar
        imagePullPolicy: IfNotPresent
        args:
        - "sidecar"
        - "--grpc-address=0.0.0.0:10901"
        - "--grpc-grace-period=1s"
        - "--http-address=0.0.0.0:10902"
        - "--http-grace-period=1s"
        - "--prometheus.url=http://127.0.0.1:9090"
        - "--tsdb.path=/prometheus"
        - "--log.level=info"
        - "--objstore.config-file=/etc/prometheus/bucket.yaml"
        ports:
        - name: http-sidecar
          containerPort: 10902
        - name: grpc-sidecar
          containerPort: 10901
        volumeMounts:
        - mountPath: "/etc/prometheus"
          name: config-volume
        - mountPath: "/prometheus"
          name: data
      serviceAccountName: prometheus
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      imagePullSecrets:
      - name: regsecret
      volumes:
      - name: config-volume
        configMap:
          name: prometheus-config
      - name: data
        hostPath:
          path: /data/prometheus
Data aggregation
We have been using Thanos since 2018. The early releases had plenty of bugs that caused us real trouble, and we filed quite a few issues; once it stabilized we ran v0.2.1 in production for a long time. The latest releases have removed the older cluster-based service discovery and the UI is much richer, so we rebuilt our monitoring platform architecture around it.
We use Thanos to aggregate query results (VictoriaMetrics, the storage component described later, can also aggregate data). Of the Thanos components we mainly use query, sidecar, and rule; we do not use compact, store, bucket, and the others because our workload does not need them.
The Thanos + Prometheus architecture diagram was shown at the beginning; below are the deployments and the points to watch.
Thanos component deployment:
sidecar (runs in the same Pod as Prometheus):
- image: gecailong/prometheus-thanos:v0.17.1
  name: thanos
  imagePullPolicy: IfNotPresent
  args:
  - "sidecar"
  - "--grpc-address=0.0.0.0:10901"
  - "--grpc-grace-period=1s"
  - "--http-address=0.0.0.0:10902"
  - "--http-grace-period=1s"
  - "--prometheus.url=http://127.0.0.1:9090"
  - "--tsdb.path=/prometheus"
  - "--log.level=info"
  - "--objstore.config-file=/etc/prometheus/bucket.yaml"
  ports:
  - name: http-sidecar
    containerPort: 10902
  - name: grpc-sidecar
    containerPort: 10901
  env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  volumeMounts:
  - mountPath: "/etc/prometheus"
    name: config-volume
  - mountPath: "/prometheus"
    name: data
query deployment:
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: query
  name: thanos-query
  namespace: monitoring
spec:
  replicas: 3
  selector:
    matchLabels:
      app: query
  template:
    metadata:
      labels:
        app: query
    spec:
      containers:
      - image: gecailong/prometheus-thanos:v0.17.1
        name: query
        imagePullPolicy: IfNotPresent
        args:
        - "query"
        - "--http-address=0.0.0.0:19090"
        - "--grpc-address=0.0.0.0:10903"
        - "--store=dnssrv+_grpc._tcp.prometheus-sidecar-svc.monitoring.svc.cluster.local"
        - "--store=dnssrv+_grpc._tcp.sidecar-rule.monitoring.svc.cluster.local"
        ports:
        - name: http-query
          containerPort: 19090
        - name: grpc-query
          containerPort: 10903
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
rule deployment:
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: rule
  name: thanos-rule
  namespace: monitoring
spec:
  replicas: 2
  serviceName: "sidecar-rule"
  selector:
    matchLabels:
      app: rule
  template:
    metadata:
      labels:
        app: rule
    spec:
      containers:
      - image: gecailong/prometheus-thanos:v0.17.1
        name: rule
        imagePullPolicy: IfNotPresent
        args:
        - "rule"
        - "--http-address=0.0.0.0:10902"
        - "--grpc-address=0.0.0.0:10901"
        - "--data-dir=/data"
        - "--rule-file=/prometheus-rules/*.yaml"
        - "--alert.query-url=http://sidecar-query:19090"
        - "--alertmanagers.url=http://alertmanager:9093"
        - "--query=sidecar-query:19090"
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        volumeMounts:
        - mountPath: "/prometheus-rules"
          name: config-volume
        - mountPath: "/data"
          name: data
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            memory: 1500Mi
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      volumes:
      - name: config-volume
        configMap:
          name: prometheus-rule
      - name: data
        hostPath:
          path: /data/prometheus
Shared alerting rules and configuration for the rule component:
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-rule
  namespace: monitoring
data:
  k8s_cluster_rule.yaml: |+
    groups:
    - name: pod_etcd_monitor
      rules:
      - alert: pod_etcd_num_is_changing
        expr: sum(kube_pod_info{pod=~"etcd.*"})by(monitor) < 3
        for: 1m
        labels:
          level: high
          service: etcd
        annotations:
          summary: "Cluster {{ $labels.monitor }}: etcd pod count is below the expected total"
          description: "Expected 3, current value is {{ $value }}"
    - name: pod_scheduler_monitor
      rules:
      - alert: pod_scheduler_num_is_changing
        expr: sum(kube_pod_info{pod=~"kube-scheduler.*"})by(monitor) < 3
        for: 1m
        labels:
          level: high
          service: scheduler
        annotations:
          summary: "Cluster {{ $labels.monitor }}: scheduler pod count is below the expected total"
          description: "Expected 3, current value is {{ $value }}"
    - name: pod_controller_monitor
      rules:
      - alert: pod_controller_num_is_changing
        expr: sum(kube_pod_info{pod=~"kube-controller-manager.*"})by(monitor) < 3
        for: 1m
        labels:
          level: high
          service: controller
        annotations:
          summary: "Cluster {{ $labels.monitor }}: controller-manager pod count is below the expected total"
          description: "Expected 3, current value is {{ $value }}"
    - name: pod_apiserver_monitor
      rules:
      - alert: pod_apiserver_num_is_changing
        expr: sum(kube_pod_info{pod=~"kube-apiserver.*"})by(monitor) < 3
        for: 1m
        labels:
          level: high
          service: controller
        annotations:
          summary: "Cluster {{ $labels.monitor }}: apiserver pod count is below the expected total"
          description: "Expected 3, current value is {{ $value }}"
  k8s_master_resource_rules.yaml: |+
    groups:
    - name: node_cpu_resource_monitor
      rules:
      - alert: NodeCPURequestsHigh
        expr: sum(kube_pod_container_resource_requests_cpu_cores{node=~".*"})by(node)/sum(kube_node_status_capacity_cpu_cores{node=~".*"})by(node)>0.7
        for: 1m
        labels:
          level: disaster
          service: node
        annotations:
          summary: "Requested CPU cores on a worker node exceed 70% of capacity"
          description: "Cluster {{ $labels.monitor }}, node {{ $labels.node }}, current value {{ $value }}!"
    - name: node_memory_resource_monitor
      rules:
      - alert: NodeMemoryLimitsHigh
        expr: sum(kube_pod_container_resource_limits_memory_bytes{node=~".*"})by(node)/sum(kube_node_status_capacity_memory_bytes{node=~".*"})by(node)>0.7
        for: 1m
        labels:
          level: disaster
          service: node
        annotations:
          summary: "Memory limits on a worker node exceed 70% of capacity"
          description: "Cluster {{ $labels.monitor }}, node {{ $labels.node }}, current value {{ $value }}!"
    - name: node_pod_usage_monitor
      rules:
      - alert: NodePodUsageHigh
        expr: sum by(node,monitor) (kube_pod_info{node=~".*"}) / sum by(node,monitor) (kube_node_status_capacity_pods{node=~".*"}) > 0.9
        for: 1m
        labels:
          level: disaster
          service: node
        annotations:
          summary: "Pod count on a worker node exceeds 90% of capacity"
          description: "Cluster {{ $labels.monitor }}, node {{ $labels.node }}, current value {{ $value }}!"
    - name: master_cpu_used
      rules:
      - alert: MasterCPULimitsHigh
        expr: sum(kube_pod_container_resource_limits_cpu_cores{node=~'master.*'})by(node)/sum(kube_node_status_capacity_cpu_cores{node=~'master.*'})by(node)>0.7
        for: 1m
        labels:
          level: disaster
          service: node
        annotations:
          summary: "CPU limits on a master node exceed 70% of capacity, current value {{ $value }}!"
          description: "Cluster {{ $labels.monitor }}, node {{ $labels.node }}, current value {{ $value }}!"
    - name: master_memory_resource_monitor
      rules:
      - alert: MasterMemoryLimitsHigh
        expr: sum(kube_pod_container_resource_limits_memory_bytes{node=~'master.*'})by(node)/sum(kube_node_status_capacity_memory_bytes{node=~'master.*'})by(node)>0.7
        for: 1m
        labels:
          level: disaster
          service: node
        annotations:
          summary: "Memory limits on a master node exceed 70% of capacity"
          description: "Cluster {{ $labels.monitor }}, node {{ $labels.node }}, current value {{ $value }}!"
    - name: master_pod_resource_monitor
      rules:
      - alert: MasterPodUsageHigh
        expr: sum(kube_pod_info{node=~"master.*"}) by (node) / sum(kube_node_status_capacity_pods{node=~"master.*"}) by (node)>0.7
        for: 1m
        labels:
          level: disaster
          service: node
        annotations:
          summary: "Pod count on a master node exceeds 70% of capacity"
          description: "Cluster {{ $labels.monitor }}, node {{ $labels.node }}, current value {{ $value }}!"
  k8s_node_rule.yaml: |+
    groups:
    - name: K8sNodeMonitor
      rules:
      - alert: NodeResourcePressure
        expr: kube_node_status_condition{condition=~"OutOfDisk|MemoryPressure|DiskPressure",status!="false"} ==1
        for: 1m
        labels:
          level: disaster
          service: node
        annotations:
          summary: "Cluster node is short on memory or disk"
          description: "Node {{ $labels.node }}, cluster {{ $labels.monitor }}, reason {{ $labels.condition }}"
      - alert: NodeNotReady
        expr: sum(kube_node_status_condition{condition="Ready",status!="true"})by(node) == 1
        for: 2m
        labels:
          level: disaster
          service: node
        annotations:
          summary: "Cluster node is not Ready"
          description: "Node {{ $labels.node }}, cluster {{ $labels.monitor }}"
      - alert: PodTerminatedAbnormally
        expr: sum (kube_pod_container_status_terminated_reason{reason!~"Completed|Error"}) by (pod,reason) ==1
        for: 1m
        labels:
          level: high
          service: pod
        annotations:
          summary: "A pod in the cluster is in an error state"
          description: "Cluster {{ $labels.monitor }}, name {{ $labels.pod }}, reason {{ $labels.reason }}"
      - alert: NodeLoadHigh
        expr: sum(node_load1) BY (instance) / sum(rate(node_cpu_seconds_total[1m])) BY (instance) > 2
        for: 5m
        labels:
          level: disaster
          service: node
        annotations:
          summary: "Machine load average is too high"
          description: "Node {{ $labels.instance }}: load per core is above 2"
      - alert: NodeMemoryOver80Percent
        expr: (1 - avg by (instance)(node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes))* 100 >85
        for: 1m
        labels:
          level: disaster
          service: node
        annotations:
          summary: "Machine memory usage is above 85%"
          description: "Node {{ $labels.instance }}"
  k8s_pod_rule.yaml: |+
    groups:
    - name: pod_status_monitor
      rules:
      - alert: PodFailed
        expr: changes(kube_pod_status_phase{phase=~"Failed"}[5m]) >0
        for: 1m
        labels:
          level: high
          service: pod-failed
        annotations:
          summary: "Cluster {{ $labels.monitor }} has pods in the Failed state"
          description: "Pod {{ $labels.pod }}, phase {{ $labels.phase }}"
      - alert: PodPending
        expr: sum(kube_pod_status_phase{phase="Pending"})by(namespace,pod,phase)>0
        for: 3m
        labels:
          level: high
          service: pod-pending
        annotations:
          summary: "Cluster {{ $labels.monitor }} has pods stuck in Pending"
          description: "Pod {{ $labels.pod }}, phase {{ $labels.phase }}"
      - alert: PodWaiting
        expr: sum(kube_pod_container_status_waiting_reason{reason!="ContainerCreating"})by(namespace,pod,reason)>0
        for: 1m
        labels:
          level: high
          service: pod-wait
        annotations:
          summary: "Cluster {{ $labels.monitor }} has pods stuck in a Waiting state"
          description: "Pod {{ $labels.pod }}, reason {{ $labels.reason }}"
      - alert: PodTerminated
        expr: sum(kube_pod_container_status_terminated_reason)by(namespace,pod,reason)>0
        for: 1m
        labels:
          level: high
          service: pod-nocom
        annotations:
          summary: "Cluster {{ $labels.monitor }} has pods in a Terminated state"
          description: "Pod {{ $labels.pod }}, reason {{ $labels.reason }}"
      - alert: PodRestarting
        expr: changes(kube_pod_container_status_restarts_total[20m])>3
        for: 3m
        labels:
          level: high
          service: pod-restart
        annotations:
          summary: "Cluster {{ $labels.monitor }} has pods restarted more than 3 times within 20 minutes!"
          description: "Pod {{ $labels.pod }}"
    - name: deployment_replicas_monitor
      rules:
      - alert: DeploymentReplicasUnavailable
        expr: sum(kube_deployment_status_replicas_unavailable)by(namespace,deployment) >2
        for: 3m
        labels:
          level: high
          service: deployment-replicas
        annotations:
          summary: "Cluster {{ $labels.monitor }}, deployment {{ $labels.deployment }} has not reached the desired replica count!"
          description: "Namespace {{ $labels.namespace }}, currently unavailable replicas: {{ $value }}, please check"
    - name: daemonset_replicas_monitor
      rules:
      - alert: DaemonsetPodsMissing
        expr: sum(kube_daemonset_status_desired_number_scheduled - kube_daemonset_status_current_number_scheduled)by(daemonset,namespace) >2
        for: 3m
        labels:
          level: high
          service: daemonset
        annotations:
          summary: "Cluster {{ $labels.monitor }}, daemonset {{ $labels.daemonset }} has not reached the desired number of pods!"
          description: "Namespace {{ $labels.namespace }}, currently unavailable replicas: {{ $value }}, please check"
    - name: statefulset_replicas_monitor
      rules:
      - alert: StatefulsetReplicasNotReady
        expr: (kube_statefulset_replicas - kube_statefulset_status_replicas_ready) >2
        for: 3m
        labels:
          level: high
          service: statefulset
        annotations:
          summary: "Cluster {{ $labels.monitor }}, statefulset {{ $labels.statefulset }} has not reached the desired replica count!"
          description: "Namespace {{ $labels.namespace }}, currently unavailable replicas: {{ $value }}, please check"
    - name: pvc_replicas_monitor
      rules:
      - alert: PVCNotBound
        expr: kube_persistentvolumeclaim_status_phase{phase!="Bound"} == 1
        for: 5m
        labels:
          level: high
          service: pvc
        annotations:
          summary: "Cluster {{ $labels.monitor }}, PVC {{ $labels.persistentvolumeclaim }} failed to bind!"
          description: "PVC is in an abnormal state"
    - name: K8sClusterJob
      rules:
      - alert: JobFailed
        expr: sum(kube_job_status_failed{job="kubernetes-service-endpoints",k8s_app="kube-state-metrics"})by(job_name) ==1
        for: 1m
        labels:
          level: disaster
          service: job
        annotations:
          summary: "A Job in the cluster has failed"
          description: "Cluster {{ $labels.monitor }}, name {{ $labels.job_name }}"
    - name: pod_container_cpu_resource_monitor
      rules:
      - alert: ContainerCPUUsageHigh
        expr: namespace:container_cpu_usage_seconds_total:sum_rate / sum(kube_pod_container_resource_limits_cpu_cores) by (monitor,namespace,pod_name)> 0.8
        for: 1m
        labels:
          level: high
          service: container_cpu
        annotations:
          summary: "Cluster {{ $labels.monitor }}: a pod is using more than 80% of its CPU limit"
          description: "Pod {{ $labels.pod }} in namespace {{ $labels.namespace }}, current value {{ $value }}"
      - alert: ContainerMemoryUsageHigh
        expr: namespace:container_memory_usage_bytes:sum / sum(kube_pod_container_resource_limits_memory_bytes)by(monitor,namespace,pod_name) > 0.8
        for: 2m
        labels:
          level: high
          service: container_mem
        annotations:
          summary: "Cluster {{ $labels.monitor }}: a pod is using more than 80% of its memory limit"
          description: "Pod {{ $labels.pod }} in namespace {{ $labels.namespace }}, current value {{ $value }}"
  redis_rules.yaml: |+
    groups:
    - name: k8s_container_rule
      rules:
      - expr: sum(rate(container_cpu_usage_seconds_total[5m])) by (monitor,namespace,pod_name)
        record: namespace:container_cpu_usage_seconds_total:sum_rate
      - expr: sum(container_memory_usage_bytes{container_name="POD"}) by (monitor,namespace,pod_name)
        record: namespace:container_memory_usage_bytes:sum
Note: since all the components run in the same cluster, we use DNS SRV records to discover the other components. Inside the cluster this is very convenient: just create the headless Service you need (set clusterIP: None) and its gRPC endpoints can be resolved through DNS SRV.
thanos-query-svc:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: query
  name: sidecar-query
spec:
  ports:
  - name: web
    port: 19090
    protocol: TCP
    targetPort: 19090
  selector:
    app: query
thanos-rule-svc:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: rule
  name: sidecar-rule
spec:
  clusterIP: None
  ports:
  - name: web
    port: 10902
    protocol: TCP
    targetPort: 10902
  - name: grpc
    port: 10901
    protocol: TCP
    targetPort: 10901
  selector:
    app: rule
Prometheus+sidecar:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: prometheus
  name: prometheus-sidecar-svc
spec:
  clusterIP: None
  ports:
  - name: web
    port: 9090
    protocol: TCP
    targetPort: 9090
  - name: grpc
    port: 10901
    protocol: TCP
    targetPort: 10901
  selector:
    app: prometheus
Screenshots:
Multi-cluster pod metrics example:
Alerting rule example:
Thanos home page:
Data storage
We also took quite a few detours with Prometheus long-term storage.
We started with InfluxDB but dropped it over the clustered-version problem. We tried rewriting prometheus-adapter to write into OpenTSDB, and gave that up because some wildcard queries were hard to maintain (in truth it was tcollector's collection problems that killed it). We pushed data through Thanos Store into S3 on Ceph, but the replica overhead made it too expensive; we also wrote into Alibaba Cloud OSS, which stored plenty but made reading the data back a problem. Then we adopted VictoriaMetrics, which solves most of our main problems.
Architecture:
VictoriaMetrics is itself a time series database; besides acting as remote storage it can also be queried directly as a Prometheus-compatible data source.
Advantages:
High compression ratio and high performance
Can serve dashboards as a drop-in, Prometheus-compatible data source
Supports MetricsQL and can aggregate identical metrics at query time
The cluster version is open source (a huge plus)
In a simple comparison against the same data held in Prometheus, VictoriaMetrics used roughly 50% less memory, more than 40% less CPU, and about 40% less disk. It also let us separate the write path from the read path, avoiding the large memory footprint and OOMs caused by keeping old and new data in memory together, and it gives us an affordable way to keep data long term.
VictoriaMetrics deployment:
vminsert deployment:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: monitor-vminsert
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      vminsert: online
  template:
    metadata:
      labels:
        vminsert: online
    spec:
      containers:
      - args:
        - -storageNode=vmstorage:8400
        image: victoriametrics/vminsert:v1.39.4-cluster
        imagePullPolicy: IfNotPresent
        name: vminsert
        ports:
        - containerPort: 8480
          name: vminsert
          protocol: TCP
      dnsPolicy: ClusterFirst
      hostNetwork: true
      nodeSelector:
        vminsert: online
      restartPolicy: Always
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
vmselect deployment:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: monitor-vmselect
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      vmselect: online
  template:
    metadata:
      labels:
        vmselect: online
    spec:
      containers:
      - args:
        - -storageNode=vmstorage:8400
        image: victoriametrics/vmselect:v1.39.4-cluster
        imagePullPolicy: IfNotPresent
        name: vmselect
        ports:
        - containerPort: 8481
          name: vmselect
          protocol: TCP
      dnsPolicy: ClusterFirst
      hostNetwork: true
      nodeSelector:
        vmselect: online
      restartPolicy: Always
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
vmstorage deployment:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: monitor-vmstorage
spec:
  replicas: 10
  serviceName: vmstorage
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      vmstorage: online
  template:
    metadata:
      labels:
        vmstorage: online
    spec:
      containers:
      - args:
        - --retentionPeriod=1
        - --storageDataPath=/storage
        image: victoriametrics/vmstorage:v1.39.4-cluster
        imagePullPolicy: IfNotPresent
        name: vmstorage
        ports:
        - containerPort: 8482
          name: http
          protocol: TCP
        - containerPort: 8400
          name: vminsert
          protocol: TCP
        - containerPort: 8401
          name: vmselect
          protocol: TCP
        volumeMounts:
        - mountPath: /data
          name: data
      hostNetwork: true
      nodeSelector:
        vmstorage: online
      restartPolicy: Always
      volumes:
      - hostPath:
          path: /data/vmstorage
          type: ""
        name: data
vmstorage-svc (exposes the insert and select endpoints):
apiVersion: v1
kind: Service
metadata:
  labels:
    vmstorage: staging
  name: vmstorage
spec:
  ports:
  - name: http
    port: 8482
    protocol: TCP
    targetPort: http
  - name: vmselect
    port: 8401
    protocol: TCP
    targetPort: vmselect
  - name: vminsert
    port: 8400
    protocol: TCP
    targetPort: vminsert
  selector:
    vmstorage: staging
  type: NodePort
vminsert-svc:
apiVersion: v1
kind: Service
metadata:
  labels:
    vminsert: online
  name: monitor-vminsert
spec:
  ports:
  - name: vminsert
    port: 8480
    protocol: TCP
    targetPort: vminsert
  selector:
    vminsert: online
  type: NodePort
vmselect-svc:
apiVersion: v1
kind: Service
metadata:
  labels:
    vmselect: online
  name: monitor-vmselect
spec:
  ports:
  - name: vmselect
    port: 8481
    protocol: TCP
    targetPort: vmselect
  selector:
    vmselect: online
  type: NodePort
Once everything is deployed, point the Prometheus configuration at the cluster for writes and reads:
remote_write:
- url: "http://vmstorage:8400/insert/0/prometheus/"
remote_read:
- url: "http://vmstorage:8401/select/0/prometheus"
Grafana data source configuration (a provisioning sketch follows):
Data source type: Prometheus
URL: http://vmstorage:8401/select/0/prometheus
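If Grafana is managed declaratively, the same data source can also be added through a provisioning file. This is a minimal sketch; the file name and the data source name are our own illustrative choices, and the URL should be whatever select endpoint you actually expose:

# /etc/grafana/provisioning/datasources/victoriametrics.yaml (hypothetical file name)
apiVersion: 1
datasources:
- name: VictoriaMetrics            # illustrative name shown in Grafana's data source list
  type: prometheus                 # vmselect speaks the Prometheus query API
  access: proxy
  url: http://vmstorage:8401/select/0/prometheus
  isDefault: false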
Screenshots:
Alerting
Alert rules are evaluated by Thanos Rule, which pushes alerts to Alertmanager.
Alertmanager handles the alerting itself, paired with our own alerting platform for distributing notifications.
In the configuration we group by alertname and monitor, so every alert with the same alert name in the same cluster lands in one group and is aggregated on the Prometheus label level. Because we run a large number of pods, a large-scale pod incident would make the aggregated groups huge, so pod alerts are additionally grouped by pod. The effect is shown in the screenshots at the end.
Silencing: every alert carries a level label (warning, high, disaster). Alerts at the lowest level do not go to the alerting platform by default, and we create silences based on the alert level and rule.
Examples:
Within the same monitor (cluster), silence a given alertname per instance
For a flood of pod alerts, silence by pod alert type
When an alert group fires for the first time, all alerts in the group are pushed together according to the grouping.
Alertmanager configuration:
global:
  smtp_smarthost: 'xxxxxxx:25'
  smtp_from: 'xxxxxxx@xxxxxxx'
  smtp_auth_username: 'xxxxxxx@xxxxxxx'
  smtp_auth_password: 'xxxxxxx'
  smtp_require_tls: false
route:
  group_by: ['alertname','pod','monitor']
  group_wait: 10s
  group_interval: 10s
  repeat_interval: 6h
  receiver: 'webhook'
  routes:
  - receiver: 'mail'
    match:
      level: warning
receivers:
- name: 'mail'
  email_configs:
  - to: 'amend@xxxxx,amend2@xxxxx'
    send_resolved: true
- name: 'webhook'
  webhook_configs:
  - url: ''
    send_resolved: true
inhibit_rules:
- source_match:
    level: 'disaster'
  target_match_re:
    level: 'high|disaster'
  equal: ['alertname','instance','monitor']
- source_match:
    level: 'high'
  target_match_re:
    level: 'high'
  equal: ['alertname','instance','monitor']
Alert aggregation code example (Python):
try:
    # Alertmanager webhook payload: one JSON document per notification group
    payload = json.loads(self.request.body)
except json.decoder.JSONDecodeError:
    raise web.HTTPError(400)
alert_row = payload['alerts']
try:
    if len(alert_row) < 2:
        description = alert_row[0]['annotations']['description']
        summary = alert_row[0]['annotations']['summary']
    else:
        # concatenate the descriptions of every alert in the group into one message
        description = ''
        for alert in alert_row:
            description += alert['annotations']['description'] + '\n'
        summary = '[Aggregated alerts] ' + alert_row[0]['annotations']['summary']
except Exception:
    pass
try:
    namespace = alert_row[0]['labels']['namespace']
except Exception:
    pass
Results:
Pod-level alerts:
Instance-level alerts:
Business-level alerts: