Helm, the Package Manager for Kubernetes Applications: Installation, Deployment, Building a Helm Chart, and Deploying the kubeapps Web UI


Table of Contents

  • 1. Helm Introduction and Installation
    • 1.1 Helm Introduction
    • 1.2 Installing Helm
      • 1. Install Helm
      • 2. Enable helm command completion
      • 3. Add third-party chart repositories
  • 2. Deploying with Helm
  • 3. Building a Helm Chart
      • 1. Scaffold a Helm chart
      • 2. Edit the chart's application description and deployment settings
      • 3. Check dependencies and template configuration
      • 4. Package the application
      • 5. Push the package to a local repository
      • 6. Deploy the application from the repository
  • 4. Other Deployment Examples
    • 4.1 Deploying metrics-server with Helm
    • 4.2 Deploying nfs-client-provisioner with Helm
    • 4.3 Deploying kubeapps as a web UI for Helm
      • 1. Deploy kubeapps
      • 2. Log in to kubeapps
      • 3. Using kubeapps
    • 4.4 Deploying ingress-nginx from a chart

1. Helm Introduction and Installation

1.1 Helm Introduction

Helm is the package manager for Kubernetes applications. It manages charts, much as yum manages packages on a Linux system.
A Helm chart packages a Kubernetes-native application as a set of YAML files. Application metadata can be customized at deployment time, which makes the application easy to distribute (see the Chart.yaml sketch after the list below).

• For publishers, Helm provides a way to package an application, manage its dependencies and versions, and publish it to a chart repository.
• For users, Helm removes the need to write complex deployment manifests: applications can be found, installed, upgraded, rolled back, and uninstalled on Kubernetes with simple commands.
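
As a rough illustration of that metadata (a generic sketch, not this article's chart; every value below is a placeholder), a chart describes itself in Chart.yaml:

# Chart.yaml: minimal chart metadata (placeholder values)
apiVersion: v2        # chart API version used by Helm 3
name: demo-app        # chart name, shown in helm search
description: A sample chart packaging a web application
version: 0.1.0        # chart version; bump on every chart change
appVersion: "1.0"     # version of the packaged application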

The biggest difference between Helm v3 and v2 is that v3 removes Tiller: the helm client talks to the Kubernetes API server directly (architecture diagram omitted).

1.2 Helm安装

Official site: https://helm.sh
Download the release tarball: helm-v3.1.

1. Install Helm

[kubeadm@server1 ~]$ mkdir helm
[kubeadm@server1 ~]$ cd helm/
[kubeadm@server1 helm]$ ls
helm-v3.2.
[kubeadm@server1 helm]$ tar zxf helm-v3.2.
[kubeadm@server1 helm]$ ls
helm-v3.2.  linux-amd64
[kubeadm@server1 helm]$ cd linux-amd64/
[kubeadm@server1 linux-amd64]$ ls
helm  LICENSE  README.md
[kubeadm@server1 linux-amd64]$ mv helm /usr/local/bin/
mv: cannot move ‘helm’ to ‘/usr/local/bin/helm’: Permission denied
[kubeadm@server1 linux-amd64]$ sudo mv helm /usr/local/bin/
[kubeadm@server1 linux-amd64]$ helm 
LICENSE    README.md  


2. Enable helm command completion

echo "source <(helm completion bash)" >> ~/.bashrc

[kubeadm@server1 ~]$ echo "source <(helm completion bash)" >> ~/.bashrc
[kubeadm@server1 ~]$ source ~/.bashrc
[kubeadm@server1 ~]$ helm 
completion  env         install     package     repo        show        test        verify      
create      get         lint        plugin      rollback    status      uninstall   version     
dependency  history     list        pull        search      template    upgrade     


3. Add third-party chart repositories

helm repo add stable http://mirror.azure.cn/kubernetes/charts/
helm repo add aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts

[kubeadm@server1 ~]$ helm search repo
Error: no repositories configured
[kubeadm@server1 ~]$ helm repo add stable http://mirror.azure.cn/kubernetes/charts/
"stable" has been added to your repositories
"stable" has been added to your repositories
[kubeadm@server1 ~]$ helm repo list
NAME   URL                                      
stable http://mirror.azure.cn/kubernetes/charts/
[kubeadm@server1 ~]$ helm search repo redis
NAME                             CHART VERSION APP VERSION DESCRIPTION                             
stable/prometheus-redis-exporter 3.4.1         1.3.4       Prometheus exporter for Redis metrics 
stable/redis                     10.5.7        5.0.7       DEPRECATED Open source, advanced 
stable/redis-ha                  4.4.4         5.0.6       Highly available Kubernetes implementation 
stable/sensu                     0.2.3         0.28        Sensu monitoring framework backed by the Redis ...

2. Deploying with Helm

helm search repo redis             ## search the repositories
helm show values stable/redis-ha   ## inspect the configurable values

Several install sources are supported (by default helm reads ~/.kube/config to connect to the Kubernetes cluster):
helm install redis-ha stable/redis-ha   ## from a repository
helm install redis-ha redis-ha-4.       ## from a packaged chart archive
helm install redis-ha path/redis-ha     ## from an unpacked chart directory
helm install redis-ha .                 ## from the current directory

helm pull stable/redis-ha   ## download a chart to the local machine
helm status redis-ha        ## check release status
helm uninstall redis-ha     ## uninstall a release
(A sketch of overriding chart values at install time follows below.)
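
Installing with the chart defaults is not the only option; values can be overridden per install. A hedged sketch (replicas is one real key in the redis-ha chart, but confirm key names against the helm show values output for your chart version):

## dump the defaults, then edit selectively
helm show values stable/redis-ha > my-values.yaml

## override a single value on the command line
helm install redis-ha stable/redis-ha --set replicas=3

## or install with the edited values file
helm install redis-ha stable/redis-ha -f my-values.yaml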

[kubeadm@server1 ~]$ cd helm/
[kubeadm@server1 helm]$ helm pull stable/redis-ha 
[kubeadm@server1 helm]$ ls
helm-v3.2.  linux-amd64  redis-ha-4.
[kubeadm@server1 helm]$ tar zxf redis-ha-4.
[kubeadm@server1 redis-ha]$ helm install redis-ha .
NAME: redis-ha
LAST DEPLOYED: Thu Jul  9 17:15:36 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
Redis can be accessed via port 6379 and Sentinel can be accessed via port 26379 on the following DNS name from within your cluster:
redis-ha.default.svc.cluster.local
To connect to your Redis server:
1. Run a Redis pod that you can use as a client:
   kubectl exec -it redis-ha-server-0 sh -n default
2. Connect using the Redis CLI:
   redis-cli -h redis-ha.default.svc.cluster.local
[kubeadm@server1 redis-ha]$ kubectl get pod
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-7cc765bb86-24bq5   1/1     Running   0          34m
php-apache-59b89c65c6-rf4s9               1/1     Running   0          51m
redis-ha-server-0                         2/2     Running   0          32s
redis-ha-server-1                         2/2     Running   0          25s
redis-ha-server-2                         2/2     Running   0          21s
[kubeadm@server1 redis-ha]$ kubectl get pvc
NAME                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
data-redis-ha-server-0   Bound    pvc-8db44c56-8957-4750-be81-ceee8d043c3d   10Gi       RWO            managed-nfs-storage   36s
data-redis-ha-server-1   Bound    pvc-3228e07f-86b1-4e37-95ca-4ce172fdced6   10Gi       RWO            managed-nfs-storage   29s
data-redis-ha-server-2   Bound    pvc-629f15bf-11ab-43c0-ada7-6dc1a6c41129   10Gi       RWO            managed-nfs-storage   25s

(Screenshot omitted: the service DNS name resolves to the Redis pods in round-robin fashion.)

Master-slave replication in action: a key set once can be read back from each new connection:

kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
Defaulting container name to redis.
Use 'kubectl describe pod/redis-ha-server-0 -n default' to see all of the containers in this pod.
$ redis-cli -h redis-ha
redis-ha:6379> set name redhat
OK
redis-ha:6379> get name
"redhat"
redis-ha:6379> 
$ redis-cli -h redis-ha
redis-ha:6379> get name
"redhat"
redis-ha:6379> 
$ redis-cli -h redis-ha
redis-ha:6379> get name
"redhat"


High availability in action (screenshot omitted).

3. Building a Helm Chart

1. Scaffold a Helm chart

[kubeadm@server1 helm]$ helm create mycharm
Creating mycharm
[kubeadm@server1 helm]$ ls
helm-v3.2.  linux-amd64  mycharm  redis-ha  redis-ha-4.
[kubeadm@server1 helm]$ cd mycharm/
[kubeadm@server1 mycharm]$ ls
charts  Chart.yaml  templates  values.yaml
[kubeadm@server1 mycharm]$ tree .
.
├── charts
├── Chart.yaml
├── templates
│   ├── deployment.yaml
│   ├── _helpers.tpl
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── NOTES.txt
│   ├── serviceaccount.yaml
│   ├── service.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml


2. Edit the chart's application description and deployment settings

Application description: Chart.yaml (screenshot omitted).

Deployment settings: values.yaml (screenshot omitted).
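
The two edits themselves were lost with the screenshots. Below is a hedged reconstruction: the Chart.yaml fields match the local/mycharm 0.1.0 / 1.16.0 search output seen later, while the image settings are only an assumption inferred from the "Hello MyApp | Version: v1" curl response:

# Chart.yaml (excerpt): the application description
apiVersion: v2
name: mycharm
description: A Helm chart for Kubernetes
version: 0.1.0
appVersion: 1.16.0

# values.yaml (excerpt): deployment settings; the image is an assumption
image:
  repository: myapp        # assumed image serving "Hello MyApp"
  tag: v1
  pullPolicy: IfNotPresent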


3. Check dependencies and template configuration

[kubeadm@server1 helm]$ ls
helm-v3.2.  linux-amd64  mycharm  redis-ha  redis-ha-4.
[kubeadm@server1 helm]$ helm  lint mycharm/
==> Linting mycharm/
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, 0 chart(s) failed


4. Package the application

[kubeadm@server1 helm]$ helm package mycharm/
Successfully packaged chart and saved it to: /home/kubeadm/helm/mycharm-0.
[kubeadm@server1 helm]$ ls
helm-v3.2.  linux-amd64  mycharm  mycharm-0.  redis-ha  redis-ha-4.
[kubeadm@server1 helm]$ du -h mycharm-0. 
4.0K mycharm-0.

5. Push the package to a local repository

[root@server1 ~]# cp /etc/docker/certs. /etc/pki/ca-trust/source/anchors/
[root@server1 ~]# update-ca-trust
[kubeadm@server1 ~]$ helm repo add local   ## add the local chart repository
"local" has been added to your repositories
[kubeadm@server1 ~]$ helm repo list 
NAME   URL                                      
stable /
local       



[kubeadm@server1 ~]$ helm env 
HELM_BIN="helm"
HELM_DEBUG="false"
HELM_KUBEAPISERVER=""
HELM_KUBECONTEXT=""
HELM_KUBETOKEN=""
HELM_NAMESPACE="default"
HELM_PLUGINS="/home/kubeadm/.local/share/helm/plugins"
HELM_REGISTRY_CONFIG="/home/kubeadm/.config/helm/registry.json"
HELM_REPOSITORY_CACHE="/home/kubeadm/.cache/helm/repository"
HELM_REPOSITORY_CONFIG="/home/kubeadm/.config/helm/repositories.yaml"
[kubeadm@server1 ~]$ cd /home/kubeadm/.local/share/helm/plugins
-bash: cd: /home/kubeadm/.local/share/helm/plugins: No such file or directory
[kubeadm@server1 ~]$ mkdir -p /home/kubeadm/.local/share/helm/plugins   ## create the plugin directory
[kubeadm@server1 ~]$ cd /home/kubeadm/.local/share/helm/plugins
[kubeadm@server1 plugins]$ ls
[kubeadm@server1 plugins]$ mkdir push   ## directory for the push plugin
[kubeadm@server1 plugins]$ cd push/
[kubeadm@server1 ~]$ cd helm/
[kubeadm@server1 helm]$ ls
helm-push_0.8.1_linux_  linux-amd64  mycharm-0.  redis-ha-4.
helm-v3.2.      mycharm      redis-ha
[kubeadm@server1 helm]$  tar zxf helm-push_0.8.1_linux_ -C /root/.local/share/helm/plugins/push
tar: /root/.local/share/helm/plugins/push: Cannot open: Permission denied
tar: Error is not recoverable: exiting now
[kubeadm@server1 helm]$ ls
helm-push_0.8.1_linux_  linux-amd64  mycharm-0.  redis-ha-4.
helm-v3.2.      mycharm      redis-ha
[kubeadm@server1 helm]$ tar zxf helm-push_0.8.1_linux_ -C /home/kubeadm/.local/share/helm/plugins/push
[kubeadm@server1 helm]$ ls /home/kubeadm/.local/share/helm/plugins/push/
bin  LICENSE  plugin.yaml
[kubeadm@server1 helm]$ helm push mycharm-0. local -u admin -p redhat
Pushing mycharm-0. 
Done.





[kubeadm@server1 mycharm]$ helm push . local -u admin -p redhat
Pushing mycharm-0. 
Done.
[kubeadm@server1 mycharm]$ helm repo update   ## refresh the repository index
Hang tight while we grab the latest from your chart repositories!
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈ 
[kubeadm@server1 mycharm]$ helm search repo mycharm
NAME          CHART VERSION APP VERSION DESCRIPTION                
local/mycharm 0.2.0         1.16.0      A Helm chart for Kubernetes
[kubeadm@server1 mycharm]$ helm search repo mycharm -l
NAME          CHART VERSION APP VERSION DESCRIPTION                
local/mycharm 0.2.0         1.16.0      A Helm chart for Kubernetes
local/mycharm 0.1.0         1.16.0      A Helm chart for Kubernetes
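
The 0.2.0 entry above comes from bumping the chart before the second push. A sketch of that step (the image tag change to v2 is an assumption based on the curl outputs in the next section):

## in the mycharm/ directory: bump the chart version and image tag, then push
sed -i 's/^version: 0.1.0/version: 0.2.0/' Chart.yaml
sed -i 's/tag: v1/tag: v2/' values.yaml
helm push . local -u admin -p redhat   ## the push plugin packages the directory itself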



6. Deploy the application from the repository

[kubeadm@server1 mycharm]$ helm install demo local/mycharm
NAME: demo
LAST DEPLOYED: Thu Jul  9 19:33:35 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=mycharm,app.kubernetes.io/instance=demo" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace default port-forward $POD_NAME 8080:80
[kubeadm@server1 mycharm]$ kubectl get pod -o wide
NAME                                      READY   STATUS    RESTARTS   AGE    IP            NODE      NOMINATED NODE   READINESS GATES
demo-mycharm-856bb56c6b-bs5t7             1/1     Running   0          14s    10.244.1.36   server2   <none>           <none>
nfs-client-provisioner-7cc765bb86-24bq5   1/1     Running   0          172m   10.244.2.23   server3   <none>           <none>
php-apache-59b89c65c6-rf4s9               1/1     Running   0          3h9m   10.244.1.25   server2   <none>           <none>
[kubeadm@server1 mycharm]$ curl 10.244.1.36
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>


[kubeadm@server1 mycharm]$ helm install demo local/mycharm --version 0.1.0   ## pin the chart version
NAME: demo
LAST DEPLOYED: Thu Jul  9 19:38:44 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=mycharm,app.kubernetes.io/instance=demo" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace default port-forward $POD_NAME 8080:80
[kubeadm@server1 mycharm]$ kubectl get pod -o wide
NAME                                      READY   STATUS    RESTARTS   AGE     IP            NODE      NOMINATED NODE   READINESS GATES
demo-mycharm-6c8779d4f9-zqxdb             1/1     Running   0          11s     10.244.1.37   server2   <none>           <none>
nfs-client-provisioner-7cc765bb86-24bq5   1/1     Running   0          177m    10.244.2.23   server3   <none>           <none>
php-apache-59b89c65c6-rf4s9               1/1     Running   0          3h14m   10.244.1.25   server2   <none>           <none>
[kubeadm@server1 mycharm]$ curl 10.244.1.37
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>


[kubeadm@server1 mycharm]$ helm upgrade demo local/mycharm   ## upgrade to the latest chart version
Release "demo" has been upgraded. Happy Helming!
NAME: demo
LAST DEPLOYED: Thu Jul  9 19:40:18 2020
NAMESPACE: default
STATUS: deployed
REVISION: 2
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=mycharm,app.kubernetes.io/instance=demo" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace default port-forward $POD_NAME 8080:80



[kubeadm@server1 mycharm]$ helm history demo
REVISION UPDATED                  STATUS     CHART         APP VERSION DESCRIPTION     
1        Thu Jul  9 19:38:44 2020 superseded mycharm-0.1.0 1.16.0      Install complete
2        Thu Jul  9 19:40:18 2020 deployed   mycharm-0.2.0 1.16.0      Upgrade complete
[kubeadm@server1 mycharm]$ helm rollback demo 1
Rollback was a success! Happy Helming!
[kubeadm@server1 mycharm]$ kubectl get pod -o wide
NAME                                      READY   STATUS        RESTARTS   AGE     IP            NODE      NOMINATED NODE   READINESS GATES
demo-mycharm-6c8779d4f9-8rlx4             1/1     Running       0          6s      10.244.1.38   server2   <none>           <none>
demo-mycharm-856bb56c6b-5x9kv             0/1     Terminating   0          4m12s   10.244.2.27   server3   <none>           <none>
nfs-client-provisioner-7cc765bb86-24bq5   1/1     Running       0          3h3m    10.244.2.23   server3   <none>           <none>
php-apache-59b89c65c6-rf4s9               1/1     Running       0          3h19m   10.244.1.25   server2   <none>           <none>
[kubeadm@server1 mycharm]$ curl 10.244.1.38
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[kubeadm@server1 mycharm]$ helm uninstall demo
release "demo" uninstalled


4. Other Deployment Examples

4.1 Deploying metrics-server with Helm

[kubeadm@server1 ~]$ cd ms/
[kubeadm@server1 ms]$ ls
components.yaml
[kubeadm@server1 ms]$ kubectl delete -f components.yaml   ## remove the previously deployed metrics-server
clusterrole.rbac.authorization.k8s.io "system:aggregated-metrics-reader" deleted
clusterrolebinding.rbac.authorization.k8s.io "metrics-server:system:auth-delegator" deleted
rolebinding.rbac.authorization.k8s.io "metrics-server-auth-reader" deleted
apiservice.apiregistration.k8s.io "v1beta1.metrics.k8s.io" deleted
serviceaccount "metrics-server" deleted
deployment.apps "metrics-server" deleted
service "metrics-server" deleted
clusterrole.rbac.authorization.k8s.io "system:metrics-server" deleted
clusterrolebinding.rbac.authorization.k8s.io "system:metrics-server" deleted

[kubeadm@server1 ~]$ cd helm/
[kubeadm@server1 helm]$ helm search repo metrics-server
NAME                  CHART VERSION APP VERSION DESCRIPTION                                       
stable/metrics-server 2.11.1        0.3.6       Metrics Server is a cluster-wide aggregator of ...
[kubeadm@server1 helm]$ helm pull stable/metrics-server
Error: Get "http://mirror.azure.cn/kubernetes/charts/metrics-server-2.11.1.tgz": dial tcp: lookup mirror.azure.cn on 114.114.114.114:53: read udp 192.168.43.11:35208->114.114.114.114:53: i/o timeout
[kubeadm@server1 helm]$ helm pull stable/metrics-server
[kubeadm@server1 helm]$ ls
helm-push_0.8.1_linux_  linux-amd64                mycharm            redis-ha
helm-v3.2.      metrics-server-2.  mycharm-0.  redis-ha-4.
[kubeadm@server1 helm]$ tar zxf metrics-server-2.
[kubeadm@server1 metrics-server]$ ls
Chart.yaml  ci  README.md  templates  values.yaml
[kubeadm@server1 metrics-server]$ vim values.yaml 
[kubeadm@server1 metrics-server]$  kubectl create namespace metrics-server
namespace/metrics-server created
[kubeadm@server1 metrics-server]$ helm install metrics-server . -n metrics-server
NAME: metrics-server
LAST DEPLOYED: Thu Jul  9 23:48:04 2020
NAMESPACE: metrics-server
STATUS: deployed
REVISION: 1
NOTES:
The metric server has been deployed. 
In a few minutes you should be able to list metrics using the following
command:kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
[kubeadm@server1 metrics-server]$ kubectl get pod -n metrics-server 
NAME                              READY   STATUS    RESTARTS   AGE
metrics-server-778fccc67f-vdkcn   1/1     Running   0          50s
[kubeadm@server1 metrics-server]$ kubectl get all -n metrics-server 
NAME                                  READY   STATUS    RESTARTS   AGE
pod/metrics-server-778fccc67f-vdkcn   1/1     Running   0          62s
NAME                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/metrics-server   ClusterIP   10.99.155.243   <none>        443/TCP   62s
NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/metrics-server   1/1     1            1           62s
NAME                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/metrics-server-778fccc67f   1         1         1       62s
[kubeadm@server1 metrics-server]$ kubectl top pod -n metrics-server 
NAME                              CPU(cores)   MEMORY(bytes)   
metrics-server-778fccc67f-vdkcn   2m           11Mi         
[kubeadm@server1 metrics-server]$ kubectl top node -n metrics-server 
NAME      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
server1   85m          4%     1054Mi          28%       
server2   29m          1%     430Mi           25%       
server3   29m          1%     415Mi           24%       
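
The values.yaml edit made before helm install was not shown. On a kubeadm cluster the changes are typically along these lines (every value here is an assumption, not taken from the original):

# values.yaml (excerpt): assumed edits for a kubeadm cluster
image:
  repository: metrics-server-amd64   # assumed private-registry mirror
  tag: v0.3.6
args:
- --kubelet-insecure-tls                         # skip kubelet certificate verification
- --kubelet-preferred-address-types=InternalIP   # reach kubelets by node IP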





4.2 Deploying nfs-client-provisioner with Helm

[kubeadm@server1 ~]$ kubectl delete pvc --all
persistentvolumeclaim "data-redis-ha-server-0" deleted
persistentvolumeclaim "data-redis-ha-server-1" deleted
persistentvolumeclaim "data-redis-ha-server-2" deleted
[kubeadm@server1 ~]$ cd vol/nfs-client/
[kubeadm@server1 nfs-client]$ ls
l  l  l  l  l
[kubeadm@server1 nfs-client]$ kubectl delete -f .
storageclass.storage.k8s.io "managed-nfs-storage" deleted
deployment.apps "nfs-client-provisioner" deleted
serviceaccount "nfs-client-provisioner" deleted
clusterrole.rbac.authorization.k8s.io "nfs-client-provisioner-runner" deleted
clusterrolebinding.rbac.authorization.k8s.io "run-nfs-client-provisioner" deleted
role.rbac.authorization.k8s.io "leader-locking-nfs-client-provisioner" deleted
rolebinding.rbac.authorization.k8s.io "leader-locking-nfs-client-provisioner" deleted
Error from server (NotFound): error when deleting: pods "test-pod" not found
Error from server (NotFound): error when deleting: persistentvolumeclaims "pvc1" not found

[kubeadm@server1 helm]$ kubectl create namespace nfs-client-provisioner
namespace/nfs-client-provisioner created
[kubeadm@server1 helm]$ kubectl get namespaces 
NAME                     STATUS   AGE
default                  Active   2d21h
kube-node-lease          Active   2d21h
kube-public              Active   2d21h
kube-system              Active   2d21h
kubernetes-dashboard     Active   38h
metrics-server           Active   17m
nfs-client-provisioner   Active   2s
[kubeadm@server1 helm]$ kubectl get storageclasses.storage.k8s.io
No resources found in default namespace.
[kubeadm@server1 helm]$ kubectl get pvc
No resources found in default namespace.
[kubeadm@server1 helm]$ kubectl get pv
No resources found in default namespace.
[kubeadm@server1 helm]$ helm search repo nfs-client
NAME                          CHART VERSION APP VERSION DESCRIPTION                                       
stable/nfs-client-provisioner 1.2.8         3.1.0       nfs-client is an automatic provisioner 
[kubeadm@server1 helm]$ helm pull stable/nfs-client-provisioner   ## pull the chart
[kubeadm@server1 helm]$ ls
helm-push_0.8.1_linux_  linux-amd64     metrics-server-2.  mycharm-0.                 redis-ha
helm-v3.2.      metrics-server  mycharm                    nfs-client-provisioner-1.  redis-ha-4.
[kubeadm@server1 helm]$ tar zxf nfs-client-provisioner-1.   ## unpack the chart
[kubeadm@server1 helm]$ ls
helm-push_0.8.1_linux_  metrics-server             mycharm-0.                 redis-ha
helm-v3.2.      metrics-server-2.  nfs-client-provisioner            redis-ha-4.
linux-amd64                         mycharm                    nfs-client-provisioner-1.
[kubeadm@server1 nfs-client-provisioner]$ vim values.yaml 
[kubeadm@server1 nfs-client-provisioner]$ helm install nfs-client-provisioner . -n nfs-client-provisioner
NAME: nfs-client-provisioner
LAST DEPLOYED: Fri Jul 10 00:14:20 2020
NAMESPACE: nfs-client-provisioner
STATUS: deployed
REVISION: 1
TEST SUITE: None
[kubeadm@server1 nfs-client-provisioner]$ helm list -n nfs-client-provisioner 
NAME                   NAMESPACE              REVISION UPDATED                                 STATUS   CHART                        APP VERSION
nfs-client-provisioner nfs-client-provisioner 1        2020-07-10 00:14:20.975587512 +0800 CST deployed nfs-client-provisioner-1.2.8 3.1.0      
[kubeadm@server1 nfs-client-provisioner]$ helm status nfs-client-provisioner -n nfs-client-provisioner
NAME: nfs-client-provisioner
LAST DEPLOYED: Fri Jul 10 00:14:20 2020
NAMESPACE: nfs-client-provisioner
STATUS: deployed
REVISION: 1
TEST SUITE: None
[kubeadm@server1 nfs-client-provisioner]$ kubectl get all -n nfs-client-provisioner 
NAME                                          READY   STATUS    RESTARTS   AGE
pod/nfs-client-provisioner-6d694ddcbb-knlpv   1/1     Running   0          61s
NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nfs-client-provisioner   1/1     1            1           61s
NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/nfs-client-provisioner-6d694ddcbb   1         1         1       61s
[kubeadm@server1 nfs-client-provisioner]$ kubectl get storageclasses.storage.k8s.io 
NAME                   PROVISIONER                            RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client (default)   cluster.local/nfs-client-provisioner   Delete          Immediate           true                   73s
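
The values.yaml edit was again not shown. The storageClass keys below match the kubectl get storageclasses output above; the NFS server address and export path are assumptions:

# values.yaml (excerpt)
nfs:
  server: 192.168.43.1   # assumed NFS server address
  path: /nfsdata         # assumed exported directory
storageClass:
  name: nfs-client
  defaultClass: true           # shown as "(default)" above
  reclaimPolicy: Delete
  allowVolumeExpansion: true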

[kubeadm@server1 nfs-client]$ l 
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc1
  annotations:
    volume.beta.kubernetes.io/storage-class: "nfs-client"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
[kubeadm@server1 nfs-client]$ kubectl apply -l 
persistentvolumeclaim/pvc1 created
[kubeadm@server1 nfs-client]$ kubectl get pvc
NAME   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc1   Bound    pvc-cab6970f-a1f7-433d-a3d5-f2b50a394341   10Gi       RWX            nfs-client     3s
[kubeadm@server1 nfs-client]$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM          STORAGECLASS   REASON   AGE
pvc-cab6970f-a1f7-433d-a3d5-f2b50a394341   10Gi       RWX            Delete           Bound    default/pvc1   nfs-client              28s
[kubeadm@server1 nfs-client]$ kubectl delete -l 
persistentvolumeclaim "pvc1" deleted


4.3 Deploying kubeapps as a web UI for Helm

Reference: https://github.com/kubeapps/kubeapps/blob/master/docs/user/getting-started.md

1. Deploy kubeapps

1. Unpack the chart

[kubeadm@server1 helm]$ ls
helm-push_0.8.1_linux_  linux-amd64     metrics-server-2.  nfs-client-provisioner            redis-ha-4.
helm-v3.2.      manifest.json   mycharm                    nfs-client-provisioner-1.  repositories
kubeapps-3.                  metrics-server  mycharm-0.          redis-ha
[kubeadm@server1 helm]$ tar zxf kubeapps-3.
[kubeadm@server1 helm]$ ls
helm-push_0.8.1_linux_  kubeapps-3.  metrics-server             mycharm-0.                 redis-ha
helm-v3.2.      linux-amd64         metrics-server-2.  nfs-client-provisioner            redis-ha-4.
kubeapps                            manifest.json       mycharm                    nfs-client-provisioner-1.  repositories
[kubeadm@server1 helm]$ cd kubeapps/
[kubeadm@server1 kubeapps]$ ls
charts  Chart.yaml  crds  README.md  requirements.lock  requirements.yaml  templates  values.schema.json  values.yaml
[kubeadm@server1 kubeapps]$ cat requirements.yaml 
dependencies:
- name: mongodb
  version: "> 7.10.2"
  repository: 
  condition: mongodb.enabled
- name: postgresql
  version: ">= 0"
  repository: 
  condition: postgresql.enabled
[kubeadm@server1 kubeapps]$ cd charts/
[kubeadm@server1 charts]$ ls
mongodb  postgresql


2. Install the application

[kubeadm@server1 kubeapps]$ ls
charts  Chart.yaml  crds  README.md  requirements.lock  requirements.yaml  templates  values.schema.json  values.yaml
[kubeadm@server1 kubeapps]$ vim values.yaml 
[kubeadm@server1 kubeapps]$ kubectl create namespace kubeapps
namespace/kubeapps created
[kubeadm@server1 kubeapps]$ cd charts/
[kubeadm@server1 charts]$ ls
mongodb  postgresql
[kubeadm@server1 charts]$ cd postgresql/
[kubeadm@server1 postgresql]$ ls
Chart.yaml  ci  files  README.md  templates  values-production.yaml  values.schema.json  values.yaml
[kubeadm@server1 postgresql]$ vim values.yaml 
[kubeadm@server1 postgresql]$ cd /home/kubeadm/helm/
[kubeadm@server1 helm]$ cd kubeapps/
[kubeadm@server1 kubeapps]$ ls
charts  Chart.yaml  crds  README.md  requirements.lock  requirements.yaml  templates  values.schema.json  values.yaml
[kubeadm@server1 kubeapps]$ helm install kubeapps . -n kubeapps 
NAME: kubeapps
LAST DEPLOYED: Fri Jul 10 02:10:02 2020
NAMESPACE: kubeapps
STATUS: deployed
REVISION: 1
NOTES:
** Please be patient while the chart is being deployed **
Tip:
  Watch the deployment status using the command: kubectl get pods -w --namespace kubeapps
Kubeapps can be accessed via port 80 on the following DNS name from within your cluster:
  kubeapps.kubeapps.svc.cluster.local
To access Kubeapps from outside your K8s cluster, follow the steps below:
1. Get the Kubeapps URL and associate Kubeapps hostname to your cluster external IP:
   export CLUSTER_IP=$(minikube ip) # On Minikube. Use: `kubectl cluster-info` on others K8s clusters
   echo "Kubeapps URL: /"
   echo "$CLUSTER_IP  d" | sudo tee -a /etc/hosts
2. Open a browser and access Kubeapps using the obtained URL.
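
The values.yaml edits made before installing were not shown. Judging by the hostname-based NOTES output, ingress was enabled; a hedged sketch (key names follow the kubeapps chart of that era, and the hostname is a placeholder):

# values.yaml (excerpt): assumed edits
useHelm3: true       # the cluster runs Helm 3
ingress:
  enabled: true      # matches the hostname-based NOTES above
  hosts:
  - name: kubeapps.local   # placeholder hostname
    path: /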


2. Log in to kubeapps

Logging in requires a token, so create a ServiceAccount and bind it to the cluster-admin role:

[kubeadm@server1 kubeapps]$ kubectl create serviceaccount kubeapps-operator -n kubeapps 
serviceaccount/kubeapps-operator created
[kubeadm@server1 kubeapps]$ kubectl create clusterrolebinding kubeapps-operator --clusterrole=cluster-admin --serviceaccount=kubeapps:kubeapps-operator
clusterrolebinding.rbac.authorization.k8s.io/kubeapps-operator created
[kubeadm@server1 kubeapps]$ kubectl get clusterrolebindings.rbac.authorization.k8s.io | grep kubeapps
kubeapps-operator                                      ClusterRole/cluster-admin                                                          32s
kubeapps:controller:kubeapps:apprepositories-read      ClusterRole/kubeapps:kubeapps:apprepositories-read                                 168m
kubeapps:controller:kubeops-ns-discovery-kubeapps      ClusterRole/kubeapps:controller:kubeops-ns-discovery-kubeapps                      168m
[kubeadm@server1 kubeapps]$ kubectl get secrets -n kubeapps 
NAME                                                          TYPE                                  DATA   AGE
default-token-zjckh                                           kubernetes.io/service-account-token   3      173m
kubeapps-db                                                   Opaque                                2      168m
kubeapps-internal-apprepository-controller-token-6hq2r        kubernetes.io/service-account-token   3      168m
kubeapps-internal-apprepository-job-postupgrade-token-svxbg   kubernetes.io/service-account-token   3      168m
kubeapps-internal-kubeops-token-x867c                         kubernetes.io/service-account-token   3      168m
kubeapps-operator-token-xhl77                                 kubernetes.io/service-account-token   3      4m32s
sh.helm.release.v1.kubeapps.v1                                helm.sh/release.v1                    1      168m
[kubeadm@server1 kubeapps]$ kubectl -n kubeapps describe secrets kubeapps-operator-token-xhl77 
Name:         kubeapps-operator-token-xhl77
Namespace:    kubeapps
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubeapps-operator
              kubernetes.io/service-account.uid: c110c87e-3b35-41f1-bc49-9c07f20718e0
Type:  kubernetes.io/service-account-token
Data
====
ca.crt:     1025 bytes
namespace:  8 bytes
token:      Jpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlYXBwcyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlYXBwcy1vcGVyYXRvci10b2tlbi14aGw3NyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlYXBwcy1vcGVyYXRvciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImMxMTBjODdlLTNiMzUtNDFmMS1iYzQ5LTljMDdmMjA3MThlMCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlYXBwczprdWJlYXBwcy1vcGVyYXRvciJ9.oF6NaGQdTEDppA7ud8QYgyghFgeLsRlUeMEFMwIw79bmBC7yyQjRGwublDTQgblkMv1WDdMAyl7dLZr5Gqx6RL3EeQHldFsIpzUBnAXTzjXClhJIjNzXHb-2S7e91U4MVKb5QhEkhp5NeyuA-0vxEleSqc6jflcPb4mBxIfYE68_Nvi87b1-iGBb3sGHc9YlVy34QSxrUe07Q_GHZidwIt8H_-ZDf4BxMAlp1LGd1mIH8bCta8Y1Jm2_RyMHqvkt4K80VbVdpkIGHIEk7wsce_p53Xb1iBBeP-bf4pRWkOy6k3qRZrGgu1tW4HVhoO9-5ou8zI4JtMlir45in8ZBjw
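
Instead of copying the token out of kubectl describe, it can be extracted and decoded in one go (a convenience one-liner, not part of the original session):

kubectl -n kubeapps get secret \
  $(kubectl -n kubeapps get serviceaccount kubeapps-operator -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 --decode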





3. Using kubeapps



(Screenshots omitted: browsing and deploying charts from the kubeapps dashboard.)

Manually add the local chart repository in the UI (screenshot omitted). For kubeapps to sync it, the repository's hostname must resolve inside the cluster, which is why the CoreDNS ConfigMap is edited below:

[kubeadm@server1 kubeapps]$ kubectl -n kube-system get cm
NAME                                 DATA   AGE
coredns                              1      3d3h
extension-apiserver-authentication   6      3d3h
kube-flannel-cfg                     2      2d7h
kube-proxy                           2      3d3h
kubeadm-config                       2      3d3h
kubelet-config-1.18                  1      3d3h
[kubeadm@server1 kubeapps]$ kubectl -n kube-system edit cm coredns 
configmap/coredns edited
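
The edit itself was not shown. Assuming the goal is to let in-cluster pods resolve the local chart repository's hostname, the likely shape is a hosts block added inside the .:53 { } section of the Corefile (IP and hostname are placeholders):

# Corefile (excerpt): assumed addition inside .:53 { }
hosts {
    192.168.43.11 reg.example.com   # placeholder registry IP and hostname
    fallthrough                     # let other names fall through to the next plugin
}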

4.4 Deploying ingress-nginx from a chart
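
The original walkthrough of this section consisted of screenshots that were lost. A minimal sketch of deploying ingress-nginx from its upstream chart (the repository URL is the ingress-nginx project's official one; the namespace and release name are this sketch's choices):

## add the upstream ingress-nginx chart repository and install it
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
kubectl create namespace ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx

## verify
kubectl -n ingress-nginx get pod,svc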



