[RPI.CM4] Coral TPU Accelerator


Official website: /

1. Edge TPU runtime

A. Add Debian package repository to system:

echo "deb  coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.listcurl .gpg | sudo apt-key add -sudo apt update

B. Install the Edge TPU runtime:

sudo apt install libedgetpu1-std

 

If apt later reports problems with python3-apt (a common issue on Raspberry Pi OS), reinstalling it may help:

sudo apt remove python3-apt
sudo apt install python3-apt

2. PyCoral Library

PyCoral is a Python library built on top of the TensorFlow Lite library that speeds up development and provides extra functionality for the Edge TPU.

A. Install PyCoral on Linux

sudo apt-get install python3-pycoral
# or with pip
python3 -m pip install --extra-index-url https://google-coral.github.io/py-repo/ pycoral~=2.0
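
With PyCoral installed, a classification run looks roughly like the sketch below. It is a condensed version of the official classify_image.py example; the model and image paths are the test_data files downloaded in section 3 below.

from PIL import Image
from pycoral.utils.edgetpu import make_interpreter
from pycoral.adapters import common, classify

# Create an interpreter bound to the Edge TPU and allocate tensors.
interpreter = make_interpreter('test_data/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite')
interpreter.allocate_tensors()

# Resize the image to the model's input size and copy it into the input tensor.
image = Image.open('test_data/parrot.jpg').convert('RGB').resize(
    common.input_size(interpreter), Image.LANCZOS)
common.set_input(interpreter, image)

# Run inference and print the top prediction (class id and score).
interpreter.invoke()
for c in classify.get_classes(interpreter, top_k=1):
    print(c.id, c.score)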

3. Testing

A. Download the example code

mkdir coral && cd coral
git clone https://github.com/google-coral/pycoral.git
cd pycoral

B. Download the model, labels, and a test image

bash examples/install_requirements.sh classify_image.py

C. Run the image classifier

python3 examples/classify_image.py \
  --model test_data/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite \
  --labels test_data/inat_bird_labels.txt \
  --input test_data/parrot.jpg

 

4. Accelerating your own model

A. Convert a TensorFlow SavedModel to a .tflite model (integer quantization)

import pathlib
import tensorflow as tf

# Read one batch of training data to build a representative dataset.
# train_reader_tfrecord and train_path are the author's own TFRecord reader and data path.
train_dataset = train_reader_tfrecord(data_path=train_path, num_epochs=1, batch_size=128)
for batch_idx, data_batch in enumerate(train_dataset):
    data = data_batch['data'].numpy().reshape(-1, 128, 54, 1)
    break

def representative_data_gen():
    for input_value in tf.data.Dataset.from_tensor_slices(data).batch(1).take(100):
        yield [input_value]

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model/")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
# Ensure that if any ops can't be quantized, the converter throws an error
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# Set the input and output tensors to uint8 (APIs added in r2.3)
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model_quant = converter.convert()

# Save the quantized model.
tflite_models_dir = pathlib.Path("tflite_models/")
tflite_models_dir.mkdir(exist_ok=True, parents=True)
tflite_model_quant_file = tflite_models_dir / "model_quant.tflite"
tflite_model_quant_file.write_bytes(tflite_model_quant)
print("OK")

B. Convert the quantized .tflite model into an edgetpu.tflite model for the TPU

Use the edgetpu_compiler tool. However, it no longer supports ARM, so the web-based compiler (a Colab notebook) can be used instead:

.ipynb#scrollTo=x47uW_lI1DoV
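
For reference, on an x86-64 Linux host where the compiler is still supported, the offline compilation is a single command; the input below is the quantized model saved in step A, and the output is model_quant_edgetpu.tflite:

edgetpu_compiler model_quant.tflite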

 

 

C. Load the model and run inference

Sometimes the TPU cannot be used, possibly because the installed Edge TPU runtime version does not match (it is too new). Check which runtime package is installed:

apt list | grep libedgetpu1-std

If necessary, download a different version of the Edge TPU runtime from the official site and install it: Software | Coral
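
Before running the full script, a quick sanity check can confirm that the accelerator is detected and the delegate loads. This is a small sketch assuming the runtime and PyCoral packages installed in sections 1 and 2:

import tflite_runtime.interpreter as tflite
from pycoral.utils.edgetpu import list_edge_tpus

# Should print at least one Edge TPU device (USB or PCIe).
print(list_edge_tpus())

# Raises ValueError if the Edge TPU delegate cannot be loaded (e.g. runtime mismatch).
tflite.load_delegate('libedgetpu.so.1')
print('Edge TPU delegate loaded OK')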

import numpy as np
import tflite_runtime.interpreter as tflite

# Load the TFLite model with the Edge TPU delegate and allocate tensors.
interpreter = tflite.Interpreter(
    model_path="../tflite_model/model_quant_edgetpu.tflite",
    experimental_delegates=[tflite.load_delegate('libedgetpu.so.1.0')])
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Build the input feature from the sensor data (the author's own feature-extraction step, ~260 ms).
merge_feature = tfLite.feature(audio_data, dis_data, mpu_x, mpu_y, mpu_z)

# Measured inference times:
# RPI4, USB 3.0:  uint8   45 ms without TPU, 20 ms with TPU
#                 float32 55 ms without TPU
# CM4,  USB 2.0:  uint8   44 ms without TPU, 77 ms with TPU
#                 float32 48 ms without TPU
interpreter.set_tensor(input_details[0]['index'],
                       merge_feature.astype(dtype=np.uint8).reshape(1, 128, 54, 1))
interpreter.invoke()

# `get_tensor()` returns a copy of the tensor data; use `tensor()` to get a pointer to the tensor.
output_data = interpreter.get_tensor(output_details[0]['index'])

Run the code above; if the LED on the USB TPU blinks, the accelerator is working correctly.
