Installing Kubernetes 1.16.0 on CentOS 7.6 with kubeadm


Versions used:

Kubernetes v1.16.0

CentOS 7.6

docker-ce 19.03.2

 

I. Server setup

System environment
OS: CentOS 7.6
Kernel: 5.2.11-1.el7.elrepo.x86_64

1. Check the current kernel version

uname -a

Output

Linux iZwz9d75c59ll4waz4y8ctZ 3.10.0-693.2.2.el7.x86_64 #1 SMP Tue Sep 12 22:26:13 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux


2. Check the OS version information

cat /proc/version

Output

Linux version 3.10.0-693.2.2.el7.x86_64 (builder@s) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-16) (GCC) ) #1 SMP Tue Sep 12 22:26:13 UTC 2017


3. Check the distribution release information

cat /etc/redhat-release

Output

CentOS Linux release 7.4.1708 (Core)


Upgrade the kernel

# Update yum and upgrade the kernel. Ideally the kernel should be 4.10+ so the
# overlay2 storage driver can be used.
# On my own CentOS 7.6 VM, the overlay2 driver worked after upgrading the kernel.
# On one server the kernel could not be upgraded for various reasons; deployment
# still succeeded on 3.10, just with the overlay storage driver instead.
yum update -y
# Install tools
yum install -y wget curl vim
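A quick way to check whether the running kernel meets the 4.10 threshold (a small sketch of mine, not from the original post; the version parsing assumes the standard uname output format):

# Print the running kernel release and flag whether it is new enough for overlay2.
KERNEL=$(uname -r)                      # e.g. 5.2.11-1.el7.elrepo.x86_64
MAJOR=${KERNEL%%.*}                     # first dotted field
MINOR=$(echo "$KERNEL" | cut -d. -f2)   # second dotted field
if [ "$MAJOR" -gt 4 ] || { [ "$MAJOR" -eq 4 ] && [ "$MINOR" -ge 10 ]; }; then
    echo "kernel $KERNEL: overlay2 should be usable"
else
    echo "kernel $KERNEL: too old for overlay2, Docker will fall back to overlay"
fi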

System configuration

Set the hostname

hostnamectl set-hostname master


Name resolution must be configured (on both master and node):

cat <<EOF >> /etc/hosts
192.168.43.88 k8s-master
192.168.43.39 k8s-node1
EOF


Disable the firewall, SELinux, and swap

systemctl disable firewalld --now
setenforce 0
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
swapoff -a
echo "vm.swappiness = 0" >> /etc/sysctl.conf
sed -i 's/.*swap.*/#&/' /etc/fstab
sysctl -p


Configure kernel parameters so that bridged IPv4 traffic is passed to iptables chains

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system
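If the net.bridge.* keys are reported as missing when sysctl --system runs, the br_netfilter module is probably not loaded yet; loading it first is a common extra step (my addition, not in the original post; the modules-load.d file name is my choice):

modprobe br_netfilter                                # load the bridge netfilter module now
echo "br_netfilter" > /etc/modules-load.d/k8s.conf   # and on every boot
sysctl --system                                      # re-apply, then verify:
sysctl net.bridge.bridge-nf-call-iptables            # should print "... = 1"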

II. Configure package repositories

1. Change the base yum repo to the Aliyun mirror

cd /etc/yum.repos.d
mkdir repo.bak
mv *.repo repo.bak
curl https://mirrors.aliyun.com/repo/Centos-7.repo -o CentOS-Base.repo
sed -i 's/gpgcheck=1/gpgcheck=0/g' /etc/yum.repos.d/CentOS-Base.repo
curl https://mirrors.aliyun.com/repo/epel-7.repo -o epel.repo

gpgcheck=0 means RPM packages downloaded from this repo are not signature-checked.

2. Configure the Docker repo

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

3. Configure the Kubernetes repo to use the Aliyun mirror

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

baseurl: points the Kubernetes repo at the Aliyun mirror.

gpgcheck=0 means RPM packages downloaded from this repo are not signature-checked.

repo_gpgcheck=0: some security-hardened configurations enable repo_gpgcheck globally in /etc/yum.conf so that the cryptographic signature on the repository metadata can be verified.

If gpgcheck is set to 1, verification is performed and fails with an error, so it is set to 0 here.

4. Update the yum cache

yum clean all && yum makecache && yum repolist

III. Install Docker

List the installable versions

yum list docker-ce --showduplicates | sort -r

Install

yum install -y docker-ce

Or install a specific version

yum install -y docker-ce-19.03.5-3.el7

Start Docker

systemctl enable docker && systemctl start docker

Check the Docker version

docker version

Configure Docker's registry mirror; here I point it at a mirror inside China

vim /etc/docker/daemon.json

{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-file": "3",
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "registry-mirrors": [""]
}

Note: "exec-opts": ["native.cgroupdriver=systemd"] is the officially recommended setting.

Reload the daemon and restart Docker

systemctl daemon-reload && systemctl restart docker

If Docker fails to restart with "start request repeated too quickly for docker.service", the CentOS kernel is probably a 3.x kernel that does not support the overlay2 filesystem; remove the following settings from daemon.json:

"storage-driver": "overlay2",
"storage-opts": ["overlay2.override_kernel_check=true"]
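To confirm Docker actually picked up the systemd cgroup driver and the intended storage driver (a quick check of mine, not part of the original post):

docker info | grep -i "cgroup driver"    # expect: Cgroup Driver: systemd
docker info | grep -i "storage driver"   # expect: Storage Driver: overlay2 (or overlay on 3.x kernels)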

IV. Install kubeadm, kubelet, and kubectl

kubeadm does not install or manage kubelet and kubectl, so when we install kubeadm we must install kubelet and kubectl as well.

List the available kubelet versions

yum list kubelet --showduplicates | sort -r

Install kubelet, kubeadm, and kubectl

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
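The command above installs the latest version in the repo. Since this walkthrough targets v1.16.0, pinning the versions explicitly is safer (my suggestion, not from the original post):

yum install -y kubelet-1.16.0 kubeadm-1.16.0 kubectl-1.16.0 --disableexcludes=kubernetes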

kubelet communicates with the rest of the cluster and manages the lifecycle of the Pods and containers on its node.

kubeadm is Kubernetes' automated deployment tool; it lowers the difficulty of deployment and improves efficiency.

kubectl is the Kubernetes cluster management CLI.

Finally, enable kubelet:

systemctl enable kubelet --now

V. Deploy the Kubernetes master

Before initializing the cluster, you can run kubeadm config images list to see which images initialization needs:

kubeadm config images list

The following Docker images are required for initialization.
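The original post's image listing did not survive; for kubeadm v1.16.0 the list should look roughly like this (my reconstruction; verify with kubeadm config images list on your machine):

k8s.gcr.io/kube-apiserver:v1.16.0
k8s.gcr.io/kube-controller-manager:v1.16.0
k8s.gcr.io/kube-scheduler:v1.16.0
k8s.gcr.io/kube-proxy:v1.16.0
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.15-0
k8s.gcr.io/coredns:1.6.2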

You can pull the images Kubernetes needs on each node ahead of time with kubeadm config images pull; they are used when the master is initialized and when nodes join the cluster.

If you do not run kubeadm config images pull first, the images are pulled anyway when kubeadm init runs on the master or kubeadm join runs on a node.

I did not pre-pull; the images were pulled automatically during kubeadm init on the master and kubeadm join on the nodes.

kubeadm config images pull

1. Initialize with kubeadm

kubeadm init \
--apiserver-advertise-address=<master-ip> \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.16.0 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16

None of the parameters above can be omitted.

--kubernetes-version: the Kubernetes version to install.
--apiserver-advertise-address: the IP address kube-apiserver listens on, i.e. the master's own IP.
--pod-network-cidr: the Pod network range; 10.244.0.0/16 here.
--service-cidr: the Service (SVC) network range.
--image-repository: the Aliyun image repository address.

This step is critical: by default kubeadm pulls the required images from k8s.gcr.io, which is unreachable from mainland China, so --image-repository is used to point at the Aliyun repository instead.

Note: while initializing, read the output carefully; it tells you promptly why initialization failed. The warnings below, for example, say the hostname cannot be resolved:

[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.2. Latest validated version: 18.09
	[WARNING Hostname]: hostname "bogon" could not be reached
	[WARNING Hostname]: hostname "bogon": lookup bogon on 192.168.2.1:53: no such host
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

Initialization takes a while, about a minute. When output like the following appears, initialization is complete (which does not yet mean the cluster is running correctly):

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.43.88:6443 --token m16ado.6ne248sk47nln0jj \
    --discovery-token-ca-cert-hash sha256:09cda974fb18e716219bf08ef9d7a4eaa76bfe59ec91d0930b4ccfbd111276de

2. Run the commands from the output

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
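At this point kubectl should be able to talk to the API server; a quick check (not in the original post):

kubectl cluster-info   # should print the control plane endpoint
kubectl get nodes      # the master shows up, likely NotReady until a pod network is installed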

3. Deploy the pod network to the cluster

Download the kube-flannel.yml file

curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

The downloaded file:

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "cniVersion": "0.2.0",
      "name": "cbr0",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=eth0
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
# The file continues with four more DaemonSets that are identical to
# kube-flannel-ds-amd64 above except for the beta.kubernetes.io/arch value,
# the matching image tag, and the absence of the extra --iface argument:
#   kube-flannel-ds-arm64    (image quay.io/coreos/flannel:v0.11.0-arm64)
#   kube-flannel-ds-arm      (image quay.io/coreos/flannel:v0.11.0-arm)
#   kube-flannel-ds-ppc64le  (image quay.io/coreos/flannel:v0.11.0-ppc64le)
#   kube-flannel-ds-s390x    (image quay.io/coreos/flannel:v0.11.0-s390x)

Install the flannel network add-on

kubectl apply -f ./kube-flannel.yml

Note that the flannel image referenced in kube-flannel.yml is v0.11.0: quay.io/coreos/flannel:v0.11.0-amd64.

After running it, output like the following means flannel deployed successfully:

podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

If creating the flannel resources fails, the server may have multiple network interfaces. Currently you need to use the --iface parameter in kube-flannel.yml to specify the name of the host's internal-network interface, otherwise DNS resolution may fail. Download kube-flannel.yml locally and add --iface=<iface-name> to the flanneld startup arguments.

Note: kube-flannel.yml defines several containers; set the argument on the container that starts from the quay.io/coreos/flannel:v0.11.0-amd64 image:

containers:
- name: kube-flannel
  image: quay.io/coreos/flannel:v0.11.0-amd64
  command:
  - /opt/bin/flanneld
  args:
  - --ip-masq
  - --kube-subnet-mgr
  - --iface=eth0

The args of that container specify --iface=eth0; first run ifconfig to check which interface the host IP is on (here: eth0).

Edit the previously downloaded kube-flannel.yml, changing the args of that container to name the right interface.

After editing kube-flannel.yml, delete the previously deployed network first:

kubectl delete -f ./kube-flannel.yml

Then create it again:

kubectl apply -f ./kube-flannel.yml

 

Afterwards, check the node and pod status:

kubectl get nodes
NAME                      STATUS   ROLES    AGE   VERSION
izwz9d75c59ll4waz4y8ctz   Ready    master   71m   v1.16.0
kubectl get pod -n kube-system
NAME                                              READY   STATUS    RESTARTS   AGE
coredns-58cc8c89f4-2d6q9                          1/1     Running   0          72m
coredns-58cc8c89f4-rxs6n                          1/1     Running   0          72m
etcd-izwz9d75c59ll4waz4y8ctz                      1/1     Running   0          71m
kube-apiserver-izwz9d75c59ll4waz4y8ctz            1/1     Running   0          71m
kube-controller-manager-izwz9d75c59ll4waz4y8ctz   1/1     Running   0          71m
kube-flannel-ds-amd64-55fzx                       1/1     Running   0          71m
kube-proxy-5x9wv                                  1/1     Running   0          72m
kube-scheduler-izwz9d75c59ll4waz4y8ctz            1/1     Running   0          71m

Usually the node shows NotReady at first. Run kubectl get pod -n kube-system to look at the pod status;

that usually reveals the problem: the flannel image failed to pull (ImagePullBackOff) and coredns stays stuck in Pending.

In that case, check whether docker image ls lists the quay.io/coreos/flannel:v0.11.0-amd64 image. If it does not, try

docker pull quay.io/coreos/flannel:v0.11.0-amd64

The flannel image must be present before kube-flannel-ds-amd64 reaches the normal Running state.

On my VM that image stubbornly refused to pull, so I pulled it elsewhere, exported it with docker save, and imported it into the VM's Docker with docker load (see the sketch just below).

Note: both the master node and the worker nodes must have the quay.io/coreos/flannel:v0.11.0-amd64 image.
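A minimal sketch of that save/load workaround (the tarball name is my choice, not from the original post):

# On a machine that CAN reach quay.io:
docker pull quay.io/coreos/flannel:v0.11.0-amd64
docker save quay.io/coreos/flannel:v0.11.0-amd64 -o flannel-v0.11.0-amd64.tar

# Copy the tarball over, then on each cluster machine (master and nodes):
docker load -i flannel-v0.11.0-amd64.tar
docker image ls | grep flannel   # verify the image is now present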

 

VI. Join worker nodes to the cluster

1. First make sure the node has the flannel Docker image: quay.io/coreos/flannel:v0.11.0-amd64

2. Run the kubeadm join command to join the cluster

kubeadm join 192.168.43.88:6443 --token ep9bne.6at6gds2o05dgutd --discovery-token-ca-cert-hash sha256:b2f75a6e5a49e66e467392d7d237548664ba8a28aafe98bdb18a7dd63ecc4aa8 
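The join token from kubeadm init expires after 24 hours by default; if it has expired, a fresh join command can be generated on the master (standard kubeadm usage, not shown in the original post):

kubeadm token create --print-join-command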

On the master, check the node status; both nodes show Ready:

[root@k8s-master ~]# kubectl get node
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   3h31m   v1.16.0
k8s-node1    Ready    <none>   57m     v1.16.0

VII. Miscellaneous

1. Create a pod to verify that the cluster works

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pod,svc

Check the pod and svc; nginx is Ready as expected:

NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-86c57db685-kgfn2   1/1     Running   0          67s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.1.0.1       <none>        443/TCP        3h49m
service/nginx        NodePort    10.1.143.205   <none>        80:32597/TCP   12s

Visit the master's IP at NodePort 32597 in a browser; the nginx welcome page should load.
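The same check from the command line (assuming the example master IP used earlier):

curl -I http://192.168.43.88:32597   # expect "HTTP/1.1 200 OK" from nginx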

2. Remove a node

On the node, run

kubeadm reset

to undo what kubeadm join set up, then manually rm the configuration directories listed in its output.

On the master, delete the node (use kubectl get node to look up the node name): kubectl delete node <node-name>

kubectl delete node k8s-node1
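In a cluster with running workloads it is gentler to drain the node before deleting it (standard practice, not part of the original post; --delete-local-data is the flag name in this kubectl generation):

kubectl drain k8s-node1 --ignore-daemonsets --delete-local-data   # evict pods first
kubectl delete node k8s-node1                                     # then remove the node object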

3. Set up a private Docker registry

That is covered in a separate article.
