Heapster is a collector: it aggregates the cAdvisor data from every Node and then forwards it to a third-party backend (such as InfluxDB).
Heapster obtains the cAdvisor metrics by calling the kubelet's HTTP API.
Because the kubelet only accepts HTTPS requests on port 10250, the heapster deployment configuration has to be modified accordingly. In addition, the kube-system:heapster ServiceAccount must be granted permission to call the kubelet API.
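As an optional sanity check, you can confirm that the kubelet only serves its stats endpoints over HTTPS on port 10250. The sketch below assumes an admin client certificate under /etc/kubernetes/cert/ and uses the master's IP as a node address; adjust both to your environment.
# cert/key paths and node IP are assumptions; use whatever client certificate your cluster trusts
$ curl -s --cacert /etc/kubernetes/cert/ca.pem \
    --cert /etc/kubernetes/cert/admin.pem \
    --key /etc/kubernetes/cert/admin-key.pem \
    https://192.168.1.106:10250/stats/summary | head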
Download the latest heapster release from the heapster release page:
wget https://github.com/kubernetes/heapster/archive/v1.5.3.tar.gz
tar -xzvf v1.5.3.tar.gz
mv v1.5.3.tar.gz heapster-1.5.3.tar.gz
Official manifest directory: heapster-1.5.3/deploy/kube-config/influxdb
$ cd heapster-1.5.3/deploy/kube-config/influxdb
$ cp grafana.yaml{,.orig}
$ diff grafana.yaml.orig grafana.yaml
16c16
< image: gcr.io/google_containers/heapster-grafana-amd64:v4.4.3
---
> image: wanghkkk/heapster-grafana-amd64-v4.4.3:v4.4.3
67c67
< # type: NodePort
---
> type: NodePort
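If you prefer to script these edits instead of changing grafana.yaml by hand, something like the following would apply them. This is a sketch: it assumes GNU sed (for -i) and that the commented line in the stock manifest reads exactly "# type: NodePort".
$ sed -i 's|gcr.io/google_containers/heapster-grafana-amd64:v4.4.3|wanghkkk/heapster-grafana-amd64-v4.4.3:v4.4.3|' grafana.yaml
$ sed -i 's/# type: NodePort/type: NodePort/' grafana.yaml   # uncomment the NodePort service type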
$ cp heapster.yaml{,.orig}
$ diff heapster.yaml.orig heapster.yaml
23c23
< image: gcr.io/google_containers/heapster-amd64:v1.5.3
---
> image: fishchen/heapster-amd64:v1.5.3
27c27
< - --source=kubernetes:https://kubernetes.default
---
> - --source=kubernetes:https://kubernetes.default?kubeletHttps=true&kubeletPort=10250
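The kubeletHttps=true and kubeletPort=10250 parameters tell heapster to scrape each kubelet over its secure port rather than the old read-only HTTP port. After editing, a quick grep should show the full parameterized source URL (a convenience check, assuming the default file layout):
$ grep -e '--source=' heapster.yaml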
$ cp influxdb.yaml{,.orig}
$ diff influxdb.yaml.orig influxdb.yaml
16c16
< image: gcr.io/google_containers/heapster-influxdb-amd64:v1.3.3
---
> image: fishchen/heapster-influxdb-amd64:v1.3.3
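Since the point of swapping the images is to avoid pulling from gcr.io, it is worth confirming which images the three manifests now reference before deploying (a quick check, not part of the original walkthrough; no gcr.io entries should remain):
$ grep 'image:' *.yaml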
$ pwd
/opt/k8s/heapster-1.5.3/deploy/kube-config/influxdb
$ ls *.yaml
grafana.yaml heapster.yaml influxdb.yaml
$ kubectl create -f .
$ cd ../rbac/
$ pwd
/opt/k8s/heapster-1.5.3/deploy/kube-config/rbac
$ ls
heapster-rbac.yaml
$ cp heapster-rbac.yaml{,.orig}
$ diff heapster-rbac.yaml.orig heapster-rbac.yaml
12a13,26
> ---
> kind: ClusterRoleBinding
> apiVersion: rbac.authorization.k8s.io/v1beta1
> metadata:
>   name: heapster-kubelet-api
> roleRef:
>   apiGroup: rbac.authorization.k8s.io
>   kind: ClusterRole
>   name: system:kubelet-api-admin
> subjects:
> - kind: ServiceAccount
>   name: heapster
>   namespace: kube-system
>
$ kubectl create -f heapster-rbac.yaml
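The added ClusterRoleBinding binds the built-in system:kubelet-api-admin ClusterRole to the heapster ServiceAccount, which is what allows heapster to call the kubelet API. If you would rather not patch the upstream file, an equivalent sketch is to apply the same binding directly:
$ cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: heapster-kubelet-api
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kubelet-api-admin
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system
EOF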
$ kubectl get pods -n kube-system | grep -E 'heapster|monitoring'
heapster-ddb6c4994-vnnrn               1/1       Running   0          1m
monitoring-grafana-779bd4dd7b-xqkgk    1/1       Running   0          1m
monitoring-influxdb-f75847d48-2lnz6    1/1       Running   0          1m
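Once the heapster pod has been Running for a minute or two, kubectl top (which is backed by heapster in clusters of this vintage) should start returning node and pod metrics; this is a convenient extra check:
$ kubectl top node
$ kubectl top pod -n kube-system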
Open the Kubernetes dashboard UI; it should now correctly display CPU, memory, load, and other statistics and charts for each Node and Pod.
Access via kube-apiserver:
Get the monitoring-grafana service URL:
$ kubectl cluster-info
Kubernetes master is running at https://192.168.1.106:6443
CoreDNS is running at https://192.168.1.106:6443/api/v1/namespaces/kube-system/services/coredns:dns/proxy
Heapster is running at https://192.168.1.106:6443/api/v1/namespaces/kube-system/services/heapster/proxy
kubernetes-dashboard is running at https://192.168.1.106:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
monitoring-grafana is running at https://192.168.1.106:6443/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
monitoring-influxdb is running at https://192.168.1.106:6443/api/v1/namespaces/kube-system/services/monitoring-influxdb/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Browse to the URL: https://192.168.1.106:6443/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
With VirtualBox port forwarding in place: https://127.0.0.1:8080/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
Access via kubectl proxy:
Create the proxy:
$ kubectl proxy --address='192.168.1.106' --port=8086 --accept-hosts='^*$'
Starting to serve on 192.168.1.106:8086
Browse to the URL: http://192.168.1.106:8086/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy/?orgId=1
With VirtualBox port forwarding in place: http://127.0.0.1:8086/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy/?orgId=1
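Because kubectl proxy authenticates to the apiserver on your behalf, plain HTTP works through it. A quick curl against the proxy address started above (a sketch; it should print a 2xx or 3xx status code if Grafana is reachable):
$ curl -s -o /dev/null -w '%{http_code}\n' http://192.168.1.106:8086/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy/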
Access via NodePort:
$ kubectl get svc -n kube-system | grep -E 'monitoring|heapster'
heapster              ClusterIP   10.254.58.136    <none>   80/TCP        47m
monitoring-grafana    NodePort    10.254.28.196    <none>   80:8452/TCP   47m
monitoring-influxdb   ClusterIP   10.254.138.164   <none>   8086/TCP      47m
Grafana is exposed on NodePort 8452.
Browse to the URL: http://192.168.1.106:8452/?orgId=1
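The NodePort is assigned randomly at creation time, so instead of reading it off the table above you can query it directly (a small convenience, not part of the original walkthrough):
$ kubectl -n kube-system get svc monitoring-grafana -o jsonpath='{.spec.ports[0].nodePort}'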