Environment configuration

  • Install protobuf
sudo pip3 install -i https://pypi.tuna.tsinghua.edu.cn/simple protobuf==3.8.0
  • Install onnx
sudo apt-get install protobuf-compiler libprotoc-dev 
  • Install Pillow
sudo pip3 install Pillow
  • Install PyCUDA: if the following script fails to install it, see [[PyCUDA installed on Jetson Nano]]
export PATH=/usr/local/cuda/bin:${PATH}
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:${LD_LIBRARY_PATH}
sudo pip3 install pycuda
  • Install numpy
sudo pip3 install numpy
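After installing the packages above, a quick import check can confirm the environment is ready. The helper below is a hypothetical sketch, not part of TRT-yolov3; note that Pillow imports as PIL and protobuf as google.protobuf.

```python
import importlib

def check_imports(names):
    """Return {module_name: True/False} for whether each module imports."""
    status = {}
    for name in names:
        try:
            importlib.import_module(name)
            status[name] = True
        except ImportError:
            status[name] = False
    return status

# Packages installed in the steps above
print(check_imports(["google.protobuf", "onnx", "PIL", "pycuda", "numpy"]))
```

Any False entry means the corresponding install step needs to be rerun.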

Run TRT-yolov3

📦 GitHub: TRT-yolov3

  1. Download the configuration file and weight file to TRT-yolov3/yolov3_onnx/
    • objectstorage.ca-toronto-1.oraclecloud.com/n/yzpqsgba6…
    • hidden-boat-623a.keviny-cloud.workers.dev/DeepLearnin…

You can also use TRT-yolov3/yolov3_onnx/download.sh to download the files or to see which files are needed. Be warned: it is extremely slow, so find a way to speed it up; I found the two mirrors above for you.

  2. Modify the download.sh file, keep only the following parts, and execute the script with sudo ./download.sh
#!/bin/bash
set -e

echo
echo "Creating YOLOv3-Tiny-288 and YOLOv3-Tiny-416 configs..."
cat yolov3-tiny.cfg | sed -e '8s/width=416/width=288/' | sed -e '9s/height=416/height=288/' > yolov3-tiny-288.cfg
echo >> yolov3-tiny-288.cfg
ln -sf yolov3-tiny.weights yolov3-tiny-288.weights
cp yolov3-tiny.cfg yolov3-tiny-416.cfg
echo >> yolov3-tiny-416.cfg
ln -sf yolov3-tiny.weights yolov3-tiny-416.weights
echo
echo "Done."
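The sed commands in the script above just rewrite the width and height lines of yolov3-tiny.cfg to produce the 288 variant. The same transformation can be sketched in Python (function name and the 1-indexed line positions mirror the sed commands; this is illustrative only, not part of the project):

```python
def resize_cfg(cfg_text, size=288):
    """Replicate the sed edits above: rewrite width on line 8 and height on line 9."""
    lines = cfg_text.split("\n")
    lines[7] = lines[7].replace("width=416", f"width={size}")    # line 8, 1-indexed
    lines[8] = lines[8].replace("height=416", f"height={size}")  # line 9, 1-indexed
    return "\n".join(lines)
```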

The following uses yolov3-tiny-416 as an example. Both of the next two steps are slow; please wait patiently.

  3. Convert .cfg to .onnx
python3 yolov3_to_onnx.py --model yolov3-tiny-416
  4. Convert .onnx to .trt
python3 onnx_to_tensorrt.py --model yolov3-tiny-416

Test (recognition)

python3 detector.py --file --filename data/test.mp4 --model yolov3-tiny-416 --runtime

[Camera] Put the trt-yolov3-detector-camera.py script in the TRT-yolov3/ directory, change the absolute path on line 10, and execute the script directly to call the camera for recognition.
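On the Jetson Nano, camera scripts typically open the CSI camera through a GStreamer pipeline. Below is a minimal sketch of building such a pipeline string; the exact pipeline used by trt-yolov3-detector-camera.py may differ, so treat the element parameters as assumptions.

```python
def nano_csi_pipeline(width=416, height=416, fps=30):
    """Build a GStreamer pipeline string for the Jetson Nano CSI camera."""
    return (
        "nvarguscamerasrc ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, framerate={fps}/1 ! "
        "nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! appsink"
    )

# Usage with OpenCV (requires cv2 built with GStreamer support):
# cap = cv2.VideoCapture(nano_csi_pipeline(), cv2.CAP_GSTREAMER)
```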


Model Replacement (in detail)

If TRT-yolov3 already runs successfully, you can in theory skip this part. It is a step-by-step breakdown of the model-conversion method I worked out before I discovered the TRT-yolov3 project. If you want to learn in detail how to go from .cfg to .trt, read on.

The idea is to convert YOLO's original .cfg configuration file into the .trt file that TensorRT uses.

yolov3-tiny -> onnx

  1. Create the yolov3_tiny_to_onnx.py file
  2. Prepare the model configuration (.cfg) file and the weights (.weights) file in the same directory as the .py script

I renamed both files to the same base name, e.g. yolov3-tiny-416.cfg and yolov3-tiny-416.weights.

  3. Run the following script to generate the .onnx model file
python3 yolov3_tiny_to_onnx.py --model yolov3-tiny-416

[Notes on the .cfg file]

I modified the original .py file slightly. The original could only read configuration files with input sizes 288, 416, and 608. That restriction has been removed here, but it is unclear what problems this may cause.

However, the .cfg file still has certain formatting constraints:

  1. There must be exactly one blank line between layers
[convolutional]
batch_normalize=1

[maxpool]
size=2
  2. The last two items in the first section, [net], are changed to:
steps=400000
scales=.1
  3. The .cfg file must end with exactly two blank lines
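The constraints above can be checked mechanically before running the conversion. The validator below is a rough sketch of my own (not part of the project); it covers the blank-line rules only.

```python
def check_cfg_format(text):
    """Return a list of formatting problems per the .cfg constraints above."""
    problems = []
    # Rule 1: no run of two or more blank lines inside the file body
    if "\n\n\n" in text.rstrip("\n"):
        problems.append("more than one blank line between layers")
    # Rule 3: the file must end with exactly two blank lines
    if not text.endswith("\n\n\n") or text.endswith("\n\n\n\n"):
        problems.append("file does not end with exactly two blank lines")
    return problems
```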

onnx -> trt

  1. Create the onnx_to_tensorrt.py file
  2. Run the following script to generate the .trt model file
python3 onnx_to_tensorrt.py --model yolov3-tiny-416

Resources

  • Jetson Nano uses Yolov3-Tiny and TensorRT acceleration to achieve near real time target detection and recognition
  • TRT-yolov3: yolov3-tiny recognition on Jetson Nano (complete)
  • Using TensorRT to accelerate Tiny at 3ms/ frame