Manual deployment runs the components as processes
Deployment with kubeadm runs the components as containers
Manual deployment can also run the components as containers
CPU: Intel i7
Memory: 16 GB
Operating system: 64-bit Windows 10 Professional
Network: unrestricted access to the domestic (China) Internet
Virtualization software: VirtualBox 6.0.8
Guest OS image: CentOS-7-x86_64-DVD-1810.iso
Network name: nat30
Network CIDR: 30.0.2.0/24
Supports DHCP: checked
Supports IPv6: checked
Name: (your choice)
Type: Linux
Version: Red Hat (64-bit)
Memory size: 1024 MB
Virtual disk: Create a virtual hard disk now (default)
Hard disk file type: VDI (VirtualBox Disk Image)
Storage on physical hard disk: Dynamically allocated
Select the newly created VM >> Machine >> Settings >> System >> Processor
Number of processors: 2
Select the newly created VM >> Machine >> Settings >> Network >> Adapter 1 >>
Enable Network Adapter: checked
Attached to: NAT Network
Name: nat30
Select the VM
Machine >> Settings >> Storage >> Controller: IDE >> Empty
Optical drive: IDE Secondary Master >> disc icon >> Choose a virtual optical disk file >> locate the CentOS-7-x86_64-DVD-1810.iso file
Select the newly created VM >> Start
Language: Chinese / Simplified Chinese (China)
Software selection: Minimal Install
Options not mentioned above are left at their default values
sed -i "s/ONBOOT=no/ONBOOT=yes/g" /etc/sysconfig/network-scripts/ifcfg-enp0s3
service network restart
systemctl stop firewalld
systemctl disable firewalld
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/#UseDNS yes/UseDNS no/g" /etc/ssh/sshd_config
hip=30.0.2.11
sed -i "s/BOOTPROTO=dhcp/BOOTPROTO=static/g" /etc/sysconfig/network-scripts/ifcfg-enp0s3
cat >>/etc/sysconfig/network-scripts/ifcfg-enp0s3<<EOF
IPADDR=${hip}
NETMASK=255.255.255.0
GATEWAY=30.0.2.1
DNS1=30.0.2.1
EOF
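The static address only takes effect after the network service is restarted (the reboot below also applies it). An optional quick check, assuming the interface is enp0s3 as above:
service network restart
ip addr show enp0s3 | grep ${hip}    # the static IP should now be assigned
ping -c 3 30.0.2.1                   # the NAT gateway should answer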
hostnamectl set-hostname master1
reboot
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache fast
yum -y install docker-ce
systemctl start docker && systemctl enable docker
docker --version
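Later the kubelet is configured with --cgroup-driver=cgroupfs, which matches Docker's default cgroup driver on CentOS 7. An optional check to confirm that assumption:
docker info | grep -i 'cgroup driver'    # expected: Cgroup Driver: cgroupfs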
yum install -y net-tools
vip=30.0.2.10
master1=30.0.2.11
master2=30.0.2.12
master3=30.0.2.13
node1=30.0.2.14
netswitch=`ifconfig | grep 'UP,BROADCAST,RUNNING,MULTICAST' | awk -F: '{print $1}'`
cat >>/etc/hosts<<EOF
${master1} master1
${master2} master2
${master3} master3
${node1} node1
EOF
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
yum install -y keepalived haproxy ipvsadm ipset
mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
nodename=`/bin/hostname`
cat >/etc/keepalived/keepalived.conf<<END4
! Configuration File for keepalived
global_defs {
    router_id ${nodename}
}
vrrp_instance VI_1 {
    state MASTER
    interface ${netswitch}
    virtual_router_id 88
    advert_int 1
    priority 100
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        ${vip}/24
    }
}
END4
cat >/etc/haproxy/haproxy.cfg<<END1
global
    chroot /var/lib/haproxy
    daemon
    group haproxy
    user haproxy
    log 127.0.0.1:514 local0 warning
    pidfile /var/lib/haproxy.pid
    maxconn 20000
    spread-checks 3
    nbproc 8
defaults
    log global
    mode tcp
    retries 3
    option redispatch
listen https-apiserver
    bind ${vip}:8443
    mode tcp
    balance roundrobin
    timeout server 15s
    timeout connect 15s
    server apiserver01 ${master1}:6443 check port 6443 inter 5000 fall 5
    server apiserver02 ${master2}:6443 check port 6443 inter 5000 fall 5
    server apiserver03 ${master3}:6443 check port 6443 inter 5000 fall 5
END1
systemctl enable keepalived && systemctl start keepalived
systemctl enable haproxy && systemctl start haproxy
ping 30.0.2.10
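If the ping succeeds, the VIP is active. Two optional checks (not part of the original steps) to confirm which node currently holds the VIP and that HAProxy is listening on port 8443 on that node:
ip addr show ${netswitch} | grep ${vip}    # VIP bound on the keepalived MASTER
ss -tlnp | grep 8443                       # haproxy listening for apiserver traffic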
mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
nodename=`/bin/hostname`
cat >/etc/keepalived/keepalived.conf<<END4
! Configuration File for keepalived
global_defs {
    router_id ${nodename}
}
vrrp_instance VI_1 {
    state MASTER
    interface ${netswitch}
    virtual_router_id 88
    advert_int 1
    priority 90
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        ${vip}/24
    }
}
END4
cat >/etc/haproxy/haproxy.cfg<<END1
global
    chroot /var/lib/haproxy
    daemon
    group haproxy
    user haproxy
    log 127.0.0.1:514 local0 warning
    pidfile /var/lib/haproxy.pid
    maxconn 20000
    spread-checks 3
    nbproc 8
defaults
    log global
    mode tcp
    retries 3
    option redispatch
listen https-apiserver
    bind ${vip}:8443
    mode tcp
    balance roundrobin
    timeout server 15s
    timeout connect 15s
    server apiserver01 ${master1}:6443 check port 6443 inter 5000 fall 5
    server apiserver02 ${master2}:6443 check port 6443 inter 5000 fall 5
    server apiserver03 ${master3}:6443 check port 6443 inter 5000 fall 5
END1
systemctl enable keepalived && systemctl start keepalived
systemctl enable haproxy && systemctl start haproxy
ping 30.0.2.10
MY_REGISTRY=registry.cn-hangzhou.aliyuncs.com/openthings
docker pull ${MY_REGISTRY}/k8s-gcr-io-kube-apiserver:v1.14.2
docker pull ${MY_REGISTRY}/k8s-gcr-io-kube-controller-manager:v1.14.2
docker pull ${MY_REGISTRY}/k8s-gcr-io-kube-scheduler:v1.14.2
docker pull ${MY_REGISTRY}/k8s-gcr-io-kube-proxy:v1.14.2
docker pull ${MY_REGISTRY}/k8s-gcr-io-etcd:3.3.10
docker pull ${MY_REGISTRY}/k8s-gcr-io-pause:3.1
docker pull ${MY_REGISTRY}/k8s-gcr-io-coredns:1.3.1
docker pull jmgao1983/flannel:v0.11.0-amd64
docker tag ${MY_REGISTRY}/k8s-gcr-io-kube-apiserver:v1.14.2 k8s.gcr.io/kube-apiserver:v1.14.2
docker tag ${MY_REGISTRY}/k8s-gcr-io-kube-controller-manager:v1.14.2 k8s.gcr.io/kube-controller-manager:v1.14.2
docker tag ${MY_REGISTRY}/k8s-gcr-io-kube-scheduler:v1.14.2 k8s.gcr.io/kube-scheduler:v1.14.2
docker tag ${MY_REGISTRY}/k8s-gcr-io-kube-proxy:v1.14.2 k8s.gcr.io/kube-proxy:v1.14.2
docker tag ${MY_REGISTRY}/k8s-gcr-io-etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag ${MY_REGISTRY}/k8s-gcr-io-pause:3.1 k8s.gcr.io/pause:3.1
docker tag ${MY_REGISTRY}/k8s-gcr-io-coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker tag jmgao1983/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
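Optionally confirm that every image now also carries the name kubeadm and flannel will look for:
docker images | grep -E 'k8s.gcr.io|quay.io/coreos/flannel'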
echo 'swapoff -a' >> /etc/profile
source /etc/profile
ssh-keygen -t rsa
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF
sysctl --system
cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum makecache fast
yum install -y kubelet kubeadm kubectl
echo 'Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"' >> /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl enable kubelet && systemctl start kubelet && systemctl status kubelet
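Note: `yum install -y kubelet kubeadm kubectl` installs the newest 1.14.x packages, which is why the nodes later report v1.14.3 even though the pre-pulled images are v1.14.2; the cluster still comes up. If you prefer the packages to match the image version exactly, they can be pinned instead, for example:
yum install -y kubelet-1.14.2 kubeadm-1.14.2 kubectl-1.14.2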
cat >kubeadm-init.yaml<<END1
apiVersion: kubeadm.k8s.io/v1beta1
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: ${master1}
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
  timeoutForControlPlane: 4m0s
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "${vip}:6443"
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
kubernetesVersion: v1.14.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.245.0.0/16"
scheduler: {}
controllerManager: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
END1
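Optionally, kubeadm can list the exact images it will use for this configuration, which is a quick way to verify they match the images tagged earlier:
kubeadm config images list --config kubeadm-init.yaml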
kubeadm init --config kubeadm-init.yaml
Output:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities and service account keys on each node and then running the following as root:
  kubeadm join 30.0.2.10:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:5f491180e10907314ab6cac766bf13dae7e851a0a3749b035bd071c67697303c --experimental-control-plane
Then you can join any number of worker nodes by running the following on each as root:
  kubeadm join 30.0.2.10:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:5f491180e10907314ab6cac766bf13dae7e851a0a3749b035bd071c67697303c
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get cs
kubectl get pod --all-namespaces -o wide
Output:
[root@master1 ~]# kubectl get pod --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-fb8b8dccf-jlwpw 0/1 Pending 0 7m22s <none> <none> <none> <none>
kube-system coredns-fb8b8dccf-mvzxv 0/1 Pending 0 7m22s <none> <none> <none> <none>
kube-system etcd-master1 1/1 Running 0 6m43s 30.0.2.11 master1 <none> <none>
kube-system kube-apiserver-master1 1/1 Running 0 6m35s 30.0.2.11 master1 <none> <none>
kube-system kube-controller-manager-master1 1/1 Running 0 6m47s 30.0.2.11 master1 <none> <none>
kube-system kube-proxy-c6tvk 1/1 Running 0 7m22s 30.0.2.11 master1 <none> <none>
kube-system kube-scheduler-master1 1/1 Running 0 6m30s 30.0.2.11 master1 <none> <none>
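The coredns pods stay Pending because no CNI network plugin is installed yet; they start once flannel is deployed in the next step. If needed, the scheduling events can be inspected (pod name taken from the output above):
kubectl -n kube-system describe pod coredns-fb8b8dccf-jlwpw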
yum install -y wget
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
kubectl -n kube-system get pod -o wide
Output:
[root@master1 ~]# kubectl -n kube-system get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-fb8b8dccf-jlwpw 1/1 Running 0 11m 10.244.0.2 master1 <none> <none>
coredns-fb8b8dccf-mvzxv 1/1 Running 0 11m 10.244.0.3 master1 <none> <none>
etcd-master1 1/1 Running 0 11m 30.0.2.11 master1 <none> <none>
kube-apiserver-master1 1/1 Running 0 10m 30.0.2.11 master1 <none> <none>
kube-controller-manager-master1 1/1 Running 0 11m 30.0.2.11 master1 <none> <none>
kube-flannel-ds-amd64-6gx8f 1/1 Running 0 2m22s 30.0.2.11 master1 <none> <none>
kube-proxy-c6tvk 1/1 Running 0 11m 30.0.2.11 master1 <none> <none>
kube-scheduler-master1 1/1 Running 0 10m 30.0.2.11 master1 <none> <none>
MY_REGISTRY=registry.cn-hangzhou.aliyuncs.com/openthings
docker pull ${MY_REGISTRY}/k8s-gcr-io-kube-apiserver:v1.14.2
docker pull ${MY_REGISTRY}/k8s-gcr-io-kube-controller-manager:v1.14.2
docker pull ${MY_REGISTRY}/k8s-gcr-io-kube-scheduler:v1.14.2
docker pull ${MY_REGISTRY}/k8s-gcr-io-kube-proxy:v1.14.2
docker pull ${MY_REGISTRY}/k8s-gcr-io-etcd:3.3.10
docker pull ${MY_REGISTRY}/k8s-gcr-io-pause:3.1
docker pull ${MY_REGISTRY}/k8s-gcr-io-coredns:1.3.1
docker pull jmgao1983/flannel:v0.11.0-amd64
docker tag ${MY_REGISTRY}/k8s-gcr-io-kube-apiserver:v1.14.2 k8s.gcr.io/kube-apiserver:v1.14.2
docker tag ${MY_REGISTRY}/k8s-gcr-io-kube-controller-manager:v1.14.2 k8s.gcr.io/kube-controller-manager:v1.14.2
docker tag ${MY_REGISTRY}/k8s-gcr-io-kube-scheduler:v1.14.2 k8s.gcr.io/kube-scheduler:v1.14.2
docker tag ${MY_REGISTRY}/k8s-gcr-io-kube-proxy:v1.14.2 k8s.gcr.io/kube-proxy:v1.14.2
docker tag ${MY_REGISTRY}/k8s-gcr-io-etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag ${MY_REGISTRY}/k8s-gcr-io-pause:3.1 k8s.gcr.io/pause:3.1
docker tag ${MY_REGISTRY}/k8s-gcr-io-coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker tag jmgao1983/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
echo 'swapoff -a' >> /etc/profile
source /etc/profile
mkdir /root/.ssh
chmod 700 /root/.ssh
scp root@master1:/root/.ssh/id_rsa.pub /root/.ssh/authorized_keys
mkdir -p /etc/kubernetes/pki/etcd
scp root@master1:/etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} /etc/kubernetes/pki/
scp root@master1:/etc/kubernetes/pki/etcd/ca.* /etc/kubernetes/pki/etcd/
scp root@master1:/etc/kubernetes/admin.conf /etc/kubernetes/
mkdir -p $HOME/.kube
scp root@master1:$HOME/.kube/config $HOME/.kube/config
Configure /etc/sysctl.d/k8s.conf
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF
sysctl --system
cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum makecache fast
yum install -y kubelet kubeadm kubectl
echo 'Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"' >> /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
Start kubelet
systemctl enable kubelet && systemctl start kubelet && systemctl status kubelet
kubeadm join 30.0.2.10:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:5f491180e10907314ab6cac766bf13dae7e851a0a3749b035bd071c67697303c --experimental-control-plane
kubectl -n kube-system get pod -o wide
echo 'swapoff -a' >> /etc/profile
source /etc/profile
yum install -y net-tools
vip=30.0.2.10
master1=30.0.2.11
master2=30.0.2.12
master3=30.0.2.13
node1=30.0.2.14
netswitch=`ifconfig | grep 'UP,BROADCAST,RUNNING,MULTICAST' | awk -F: '{print $1}'`
cat >>/etc/hosts<<EOF
${master1} master1
${master2} master2
${master3} master3
${node1} node1
EOF
mkdir /root/.ssh
chmod 700 /root/.ssh
scp root@master1:/root/.ssh/id_rsa.pub /root/.ssh/authorized_keys
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF
sysctl --system
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
MY_REGISTRY=registry.cn-hangzhou.aliyuncs.com/openthings
docker pull ${MY_REGISTRY}/k8s-gcr-io-kube-proxy:v1.14.2
docker pull ${MY_REGISTRY}/k8s-gcr-io-pause:3.1
docker pull jmgao1983/flannel:v0.11.0-amd64
docker tag ${MY_REGISTRY}/k8s-gcr-io-kube-proxy:v1.14.2 k8s.gcr.io/kube-proxy:v1.14.2
docker tag ${MY_REGISTRY}/k8s-gcr-io-pause:3.1 k8s.gcr.io/pause:3.1
docker tag jmgao1983/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum makecache fast
yum install -y kubelet kubeadm kubectl
echo 'Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"' >> /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl enable kubelet && systemctl start kubelet && systemctl status kubelet
mkdir -p $HOME/.kube
scp root@master1:$HOME/.kube/config $HOME/.kube/config
kubeadm join 30.0.2.10:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:5f491180e10907314ab6cac766bf13dae7e851a0a3749b035bd071c67697303c
kubectl get nodes
Output:
[root@node1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master1 Ready master 95m v1.14.3
master2 Ready master 30m v1.14.3
master3 Ready master 28m v1.14.3
node1 Ready <none> 90s v1.14.3
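Note: the bootstrap token was created with ttl: 24h0m0s, so it expires after 24 hours. If another node needs to join later, a fresh join command can be generated on master1:
kubeadm token create --print-join-command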
kubectl -n kube-system get pod -o wide
Output:
[root@node1 ~]# kubectl -n kube-system get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-fb8b8dccf-jlwpw 1/1 Running 0 96m 10.244.0.2 master1 <none> <none>
coredns-fb8b8dccf-mvzxv 1/1 Running 0 96m 10.244.0.3 master1 <none> <none>
etcd-master1 1/1 Running 0 96m 30.0.2.11 master1 <none> <none>
etcd-master2 1/1 Running 0 31m 30.0.2.12 master2 <none> <none>
etcd-master3 1/1 Running 0 30m 30.0.2.13 master3 <none> <none>
kube-apiserver-master1 1/1 Running 0 96m 30.0.2.11 master1 <none> <none>
kube-apiserver-master2 1/1 Running 1 31m 30.0.2.12 master2 <none> <none>
kube-apiserver-master3 1/1 Running 0 29m 30.0.2.13 master3 <none> <none>
kube-controller-manager-master1 1/1 Running 1 96m 30.0.2.11 master1 <none> <none>
kube-controller-manager-master2 1/1 Running 1 30m 30.0.2.12 master2 <none> <none>
kube-controller-manager-master3 1/1 Running 0 29m 30.0.2.13 master3 <none> <none>
kube-flannel-ds-amd64-4rkmg 1/1 Running 0 3m20s 30.0.2.14 node1 <none> <none>
kube-flannel-ds-amd64-6gx8f 1/1 Running 0 87m 30.0.2.11 master1 <none> <none>
kube-flannel-ds-amd64-9dpjf 1/1 Running 0 30m 30.0.2.13 master3 <none> <none>
kube-flannel-ds-amd64-r8qkx 1/1 Running 0 31m 30.0.2.12 master2 <none> <none>
kube-proxy-8gb5d 1/1 Running 0 30m 30.0.2.13 master3 <none> <none>
kube-proxy-c6tvk 1/1 Running 0 96m 30.0.2.11 master1 <none> <none>
kube-proxy-dzp2x 1/1 Running 0 3m20s 30.0.2.14 node1 <none> <none>
kube-proxy-q46t6 1/1 Running 0 31m 30.0.2.12 master2 <none> <none>
kube-scheduler-master1 1/1 Running 1 95m 30.0.2.11 master1 <none> <none>
kube-scheduler-master2 1/1 Running 0 30m 30.0.2.12 master2 <none> <none>
kube-scheduler-master3 1/1 Running 0 29m 30.0.2.13 master3 <none> <none>
cat << EOF > nginxdeployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2  # tells deployment to run 2 pods matching the template
  template:    # create pods using pod definition in this template
    metadata:
      # unlike pod-nginx.yaml, the name is not included in the metadata as a unique name is
      # generated from the deployment name
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
EOF
kubectl create -f nginxdeployment.yaml
kubectl get pods
Output:
[root@master1 ~]# kubectl create -f nginxdeployment.yaml
deployment.apps/nginx-deployment created
[root@master1 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-deployment-6dd86d77d-9tvh9 1/1 Running 0 5m57s
nginx-deployment-6dd86d77d-fvzql 1/1 Running 0 5m57s
cat << EOF > nginxsvc.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    run: nginx
spec:
  ports:
  - port: 80
    name: http
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
EOF
kubectl create -f nginxsvc.yaml
kubectl get svc
Output:
[root@master1 ~]# kubectl create -f nginxsvc.yaml
service/nginx created
[root@master1 ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.245.0.1 <none> 443/TCP 108m
nginx ClusterIP 10.245.122.19 <none> 80/TCP 11s
curl 10.245.122.19
Output:
[root@master1 ~]# curl 10.245.122.19
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style> body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="/">nginx</a>.<br/>
Commercial support is available at
<a href="/">nginx</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
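Optionally, remove the test resources once the cluster has been verified:
kubectl delete -f nginxsvc.yaml -f nginxdeployment.yaml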