KubeEdge SIG AI is dedicated to addressing the challenges of deploying AI at the edge described above and to improving the performance and efficiency of edge AI. Building on earlier explorations of applying edge-cloud synergy to AI scenarios, the AI SIG members jointly launched the Sedna subproject to consolidate those best practices.
Sedna builds on the edge-cloud synergy capabilities provided by KubeEdge to deliver collaborative training and collaborative inference across edge and cloud. It supports mainstream AI frameworks, including TensorFlow, PyTorch, PaddlePaddle, and MindSpore, and lets existing AI applications sink to the edge without modification. With it you can quickly set up cross-edge-cloud incremental learning, federated learning, and joint inference, ultimately reducing cost, improving model performance, and protecting data privacy.
kubectl get node -o wide
Set SEDNA_GM_NODE to the name of the master node:
SEDNA_GM_NODE=master
curl .sh | SEDNA_GM_NODE=$SEDNA_GM_NODE SEDNA_ACTION=create bash -
If the network is unreliable, download the script locally first and then run:
export SEDNA_ROOT=/opt/sedna
SEDNA_GM_NODE=master
curl .sh | SEDNA_GM_NODE=$SEDNA_GM_NODE SEDNA_ACTION=create bash -
# Check the GM status:
kubectl get deploy -n sedna gm
# Check the LC status:
kubectl get ds lc -n sedna
# Check the pod status:
kubectl get pod -n sedna
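The three checks above can be wrapped in a small polling helper so a setup script waits until Sedna's components are actually ready before continuing. This is only a sketch: the `retry_until` function, the attempt limit, and the commented jsonpath query are illustrative assumptions, not part of Sedna's installer.

```shell
#!/bin/sh
# retry_until: re-run a command until it succeeds, giving up after
# a maximum number of attempts.
# Usage: retry_until <max_attempts> <command...>
retry_until() {
  max=$1; shift
  i=0
  while ! "$@"; do
    i=$((i + 1))
    [ "$i" -ge "$max" ] && return 1   # give up
    sleep 2
  done
  return 0
}

# Illustrative use: wait until the GM deployment reports a ready replica.
# retry_until 30 sh -c \
#   '[ "$(kubectl get deploy gm -n sedna -o jsonpath={.status.readyReplicas})" = "1" ]'
```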
1. Download kubectl
curl -LO .17.0/bin/linux/amd64/kubectl
2. Make the kubectl binary executable
chmod +x ./kubectl
3. Move the binary into your PATH
sudo mv ./kubectl /usr/local/bin/kubectl
4. Check the installed client version
kubectl version --client
If you see "The connection to the server localhost:8080 was refused - did you specify the right host or port?", copy /etc/f from the master node to the edge node, then configure the environment variable:
vim /etc/profile
export KUBECONFIG=/etc/f   # change this to the actual path of the copied file
source /etc/profile
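Before retrying kubectl on the edge node, it helps to confirm that the variable is set and points at a readable file. The `check_kubeconfig` helper below is an illustrative sketch, not part of Kubernetes or Sedna:

```shell
#!/bin/sh
# check_kubeconfig: verify that KUBECONFIG is set and readable
# before running kubectl on the edge node.
check_kubeconfig() {
  if [ -z "${KUBECONFIG:-}" ]; then
    echo "KUBECONFIG is not set; run 'source /etc/profile' first" >&2
    return 1
  fi
  if [ ! -r "$KUBECONFIG" ]; then
    echo "cannot read kubeconfig file: $KUBECONFIG" >&2
    return 1
  fi
  return 0
}

# Illustrative use:
# check_kubeconfig && kubectl version --client
```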
Comment out the Dockerfiles of the other examples; for this case, keep the big-model image on master and the little-model image on edge.
/home/edge/sedna/examples/build_image.sh
Create Big Model Resource Object for Cloud
kubectl create -f - <<EOF
apiVersion: sedna.io/v1alpha1
kind: Model
metadata:
  name: helmet-detection-inference-big-model
  namespace: default
spec:
  url: "/data/big-model/yolov3_darknet.pb"
  format: "pb"
EOF
Create Little Model Resource Object for Edge
kubectl create -f - <<EOF
apiVersion: sedna.io/v1alpha1
kind: Model
metadata:
  name: helmet-detection-inference-little-model
  namespace: default
spec:
  url: "/data/little-model/yolov3_resnet18.pb"
  format: "pb"
EOF
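Instead of piping the manifest straight into kubectl, you can write it to a file first, which makes it easier to inspect and re-apply later. A minimal sketch; the file path /tmp/little-model.yaml is an arbitrary choice, and the manifest body is the same as the one above:

```shell
#!/bin/sh
# Write the little-model manifest to a file for inspection; apply it later
# with kubectl (commented out below).
cat > /tmp/little-model.yaml <<EOF
apiVersion: sedna.io/v1alpha1
kind: Model
metadata:
  name: helmet-detection-inference-little-model
  namespace: default
spec:
  url: "/data/little-model/yolov3_resnet18.pb"
  format: "pb"
EOF

# Quick sanity checks before applying:
grep -q 'kind: Model' /tmp/little-model.yaml
grep -q 'yolov3_resnet18.pb' /tmp/little-model.yaml
# kubectl apply -f /tmp/little-model.yaml
```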
On the edge side:
mkdir -p /joint_inference/output
Create JointInferenceService
[Note] Replace the node names and the image versions as appropriate.
CLOUD_NODE="cloud-node-name"
EDGE_NODE="edge-node-name"
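These variables are substituted into the manifest because the heredoc delimiter below is unquoted (`EOF`, not `'EOF'`), so the shell expands `$CLOUD_NODE` and `$EDGE_NODE` before the text reaches kubectl. A small self-contained sketch of that expansion, using placeholder node names:

```shell
#!/bin/sh
# With an unquoted heredoc delimiter, $EDGE_NODE / $CLOUD_NODE are
# expanded by the shell before the manifest is consumed.
CLOUD_NODE="cloud-node-name"
EDGE_NODE="edge-node-name"

manifest=$(cat <<EOF
edgeNode: $EDGE_NODE
cloudNode: $CLOUD_NODE
EOF
)
echo "$manifest"
```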
kubectl create -f - <<EOF
apiVersion: sedna.io/v1alpha1
kind: JointInferenceService
metadata:
  name: helmet-detection-inference-example
  namespace: default
spec:
  edgeWorker:
    model:
      name: "helmet-detection-inference-little-model"
    hardExampleMining:
      name: "IBT"
      parameters:
        - key: "threshold_img"
          value: "0.9"
        - key: "threshold_box"
          value: "0.9"
    template:
      spec:
        nodeName: $EDGE_NODE
        containers:
        - image: kubeedge/sedna-example-joint-inference-helmet-detection-little:v0.4.0
          imagePullPolicy: IfNotPresent
          name: little-model
          env:  # user defined environments
          - name: input_shape
            value: "416,736"
          - name: "video_url"
            value: "rtsp://localhost/video"
          - name: "all_examples_inference_output"
            value: "/data/output"
          - name: "hard_example_cloud_inference_output"
            value: "/data/hard_example_cloud_inference_output"
          - name: "hard_example_edge_inference_output"
            value: "/data/hard_example_edge_inference_output"
          resources:  # user defined resources
            requests:
              memory: 64M
              cpu: 100m
            limits:
              memory: 2Gi
          volumeMounts:
          - name: outputdir
            mountPath: /data/
        volumes:  # user defined volumes
        - name: outputdir
          hostPath:
            # user must create the directory in host
            path: /joint_inference/output
            type: Directory
  cloudWorker:
    model:
      name: "helmet-detection-inference-big-model"
    template:
      spec:
        nodeName: $CLOUD_NODE
        containers:
        - image: kubeedge/sedna-example-joint-inference-helmet-detection-big:v0.4.0
          name: big-model
          imagePullPolicy: IfNotPresent
          env:  # user defined environments
          - name: "input_shape"
            value: "544,544"
          resources:  # user defined resources
            requests:
              memory: 2Gi
EOF
Check Joint Inference Status
kubectl get jointinferenceservices.sedna.io
Mock Video Stream for Inference in Edge Side
wget .1.0/EasyDarwin-linux-8.1.
tar -zxvf EasyDarwin-linux-8.1.
cd EasyDarwin-linux-8.1.0-1901141151
./start.sh
mkdir -p /data/video
cd /data/video
wget .
tar -zxvf
ffmpeg -re -i /data/video/video.mp4 -vcodec libx264 -f rtsp rtsp://localhost/video
The inference results are written to the /joint_inference/output directory.
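While the video stream is being inferred, you can watch that directory fill up. The `count_outputs` helper below is a hypothetical convenience function, not part of Sedna; the default path matches the hostPath volume in the service manifest above:

```shell
#!/bin/sh
# count_outputs: report how many result files the edge worker has written.
count_outputs() {
  dir=${1:-/joint_inference/output}
  [ -d "$dir" ] || { echo 0; return; }   # treat a missing directory as empty
  find "$dir" -type f | wc -l
}

# Illustrative use: poll the output directory every 5 seconds.
# watch -n 5 "find /joint_inference/output -type f | wc -l"
```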
This article was published on 2024-01-28 05:34:06.
Article link: https://www.4u4v.net/it/17063912505159.html