Software and Hardware Environment

  • YOLOv5
  • ncnn
  • Android Studio 4.1.2
  • OnePlus 8
  • PyTorch 1.6
  • ONNX
  • Netron

Preface

In previous articles, we covered YOLOv5 detection, training, visualization, and more in detail. This article continues the YOLOv5 topic; this time we look at how to run YOLOv5 object detection on Android.

What is ncnn

Here’s the official definition

ncnn is an open source, high-performance neural network forward (inference) computing framework from Tencent, optimized specifically for mobile platforms. ncnn was designed with mobile deployment in mind from the start: it has no third-party dependencies, it is cross-platform, and its mobile CPU speed is faster than that of all known open source frameworks. With ncnn, developers can easily bring deep learning algorithms to mobile devices for efficient execution, creating AI apps that put AI at your fingertips.

ncnn already supports most common CNN networks, including the YOLOv5 network used in this article:

  • Classical CNN: VGG AlexNet GoogleNet Inception …
  • Practical CNN: ResNet DenseNet SENet FPN …
  • Light-weight CNN: SqueezeNet MobileNetV1/V2/V3 ShuffleNetV1/V2 MNasNet …
  • Face Detection: MTCNN RetinaFace …
  • Detection: VGG-SSD MobileNet-SSD SqueezeNet-SSD MobileNetV2-SSDLite MobileNetV3-SSDLite …
  • Detection: Faster-RCNN R-FCN …
  • Detection: YOLOV2 YOLOV3 MobileNet-YOLOV3 YOLOV4 YOLOV5 …
  • Segmentation: FCN PSPNet UNet YOLACT …
  • Pose Estimation: SimplePose …

Project walkthrough

As for the basic environment, you will need an Android development setup: Android Studio, the SDK, the NDK, and so on. This article will not cover installing them; if you have questions, you can leave a message in the comments.

First, clone the source code of the YOLOv5 Android demo:

git clone https://github.com/nihui/ncnn-android-yolov5

Then go to the ncnn releases page, github.com/Tencent/ncn…, and download a prebuilt package. If you are interested, you can also compile it yourself with the NDK.

Download it, decompress it, and copy the contents into the app/src/main/jni directory of the ncnn-android-yolov5 project.
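The download-and-copy step can also be scripted. Here is a minimal sketch; the `app/src/main/jni` path comes from the description above, while the function name is my own:

```python
import pathlib
import zipfile

def install_ncnn(zip_path, project_root):
    """Unpack a prebuilt ncnn Android package into app/src/main/jni.

    zip_path: the release archive downloaded from the ncnn releases page.
    project_root: the ncnn-android-yolov5 checkout.
    """
    jni = pathlib.Path(project_root) / "app" / "src" / "main" / "jni"
    jni.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(jni)  # the archive ships one folder per ABI
    return jni
```

After this, the jni directory should contain the per-ABI folders with the ncnn headers and libraries.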

In CMakeLists.txt, change the value of ncnn_DIR to:

set(ncnn_DIR ${CMAKE_SOURCE_DIR}/${ANDROID_ABI}/lib/cmake/ncnn)

After saving, you can compile the project.

Here we test on a physical phone: enable developer mode, allow USB debugging, then install and open the app.

The interface layout is very simple, with three buttons in total: one to select an image, one for CPU detection, and one for GPU detection. In my tests the CPU was about twice as slow as the GPU, and even the GPU on my OnePlus 8 only reached about 5 fps.

How to use your own model

Once we have trained our own detection model, we need an intermediary that lets us move between frameworks. ONNX, the Open Neural Network Exchange format, is that intermediary.

For how to train the YOLOv5 model, refer to the article at xugaoxiang.com/2020/07/02/… As a test, we reuse the mask detection model trained there.

Install dependency libraries

pip install onnx coremltools onnx-simplifier

Execute the command

python models/export.py --weights runs/exp2/weights/best.pt

In the same directory as best.pt, this also generates best.onnx, best.mlmodel, and best.torchscript.pt.

It is important to note that the export above fails with PyTorch 1.7 and YOLOv5 v4.0; my environment here is PyTorch 1.6 and YOLOv5 v3.0. With the newer versions, the following error is displayed:

Converting op 143 : listconstruct
Adding op '143' of type const
Converting op 144 : listconstruct
Adding op '144' of type const
Converting op 145 : listconstruct
Adding op '145' of type const
Converting op x.2 : _convolution
Converting Frontend ==> MIL Ops:   3%|█         | 21/620 [00:00<00:00, 1350.49 ops/s]
CoreML export failure: unexpected number of inputs for node x.2 (_convolution): 13
Export complete (12.83s). Visualize with https://github.com/lutzroeder/netron.

This is a bug in coremltools. For more information, see github.com/ultralytics…

Next, use the onnx-simplifier tool to simplify the ONNX model:

python -m onnxsim runs/exp2/weights/best.onnx runs/exp2/weights/best-sim.onnx

Now let's compile ncnn, first preparing the base environment:

sudo apt install build-essential libopencv-dev cmake

Then build and install the protobuf dependency:

git clone https://github.com/protocolbuffers/protobuf.git
cd protobuf
git submodule update --init --recursive
./autogen.sh
./configure
make
sudo make install
sudo ldconfig

Once protobuf is compiled and installed, check the version number with protoc --version.

Next, compile ncnn to generate the command-line tool that converts ONNX models to ncnn:

git clone https://github.com/Tencent/ncnn.git
cd ncnn
git submodule update --init
mkdir build
cd build
cmake ..
make -j8
make install

Once the build and installation are complete, you can convert the model with the onnx2ncnn tool:

cd tools/onnx
./onnx2ncnn ~/Works/weights/best-sim.onnx ~/Works/weights/model.param ~/Works/weights/model.bin

Unfortunately, the conversion reports an error.

This is because the slice operation is not supported by ncnn. To solve this problem, we need to edit the generated param file in a text editor.

The revised param file needs two changes. First, the first number on the second line is the layer count: we delete the Split, the 8 Crop, and the Concat nodes (10 layers) and add a single YoloV5Focus node in their place, so its value becomes 201 - 10 + 1 = 192.

In addition, the Reshape layers that produce the Grid outputs need their fixed shape changed to -1, so that the number of detection boxes is not hard-coded.
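Recomputing the header by hand is error-prone. Below is a minimal sketch, assuming the standard ncnn param layout (line 1 is the magic number, line 2 is `layer_count blob_count`, and every blob is produced by exactly one layer output), that recounts both values after you edit the file:

```python
def fix_param_header(param_text):
    """Recompute the layer/blob counts on line 2 of an ncnn .param file.

    Assumes the standard layout: line 1 is the magic number, line 2 is
    "layer_count blob_count", and every following non-empty line reads
    "Type Name input_count output_count ...". Each blob is produced by
    exactly one layer output, so blob_count is the sum of output counts.
    """
    lines = [l for l in param_text.splitlines() if l.strip()]
    magic, _old_header, layer_lines = lines[0], lines[1], lines[2:]
    layer_count = len(layer_lines)
    blob_count = sum(int(l.split()[3]) for l in layer_lines)
    return "\n".join([magic, f"{layer_count} {blob_count}"] + layer_lines) + "\n"
```

Running the edited file through a helper like this replaces the 201 - 9 = 192 bookkeeping by hand.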

You can use the Netron tool (available for Windows, Linux, and macOS) to view the network structure: github.com/lutzroeder/…

Concretely, we delete the Split, Concat, and 8 Crop nodes and add a new YoloV5Focus node, whose type matches the custom layer name in the Android source file yolov5ncnn_jni.cpp. It helps to use a text editor alongside Netron, reloading the param file in Netron after each change to check it.
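If you would rather script this surgery than hand-edit, here is a rough sketch. The blob names, the layer name `focus`, and the assumption that the Focus block is exactly 1 Split + 8 Crop + 1 Concat at the top of the file all come from the description above; verify them against your own param file in Netron:

```python
def replace_focus(param_lines, in_blob, out_blob):
    """Replace the Focus slice subgraph (Split + 8 Crop + Concat) with a
    single custom YoloV5Focus layer.

    param_lines: the layer lines of the .param file (header excluded).
    in_blob/out_blob: the blob entering the Split and the blob produced by
    the Concat, read off the original file.
    """
    drop = {"Split", "Crop", "Concat"}
    kept, removed = [], 0
    for line in param_lines:
        # only the first 10 matching layers belong to the Focus block
        if removed < 10 and line.split()[0] in drop:
            removed += 1
            continue
        kept.append(line)
    # custom layer with 1 input and 1 output; the name "focus" is arbitrary
    focus = f"YoloV5Focus focus 1 1 {in_blob} {out_blob}"
    return [kept[0]] + [focus] + kept[1:]  # keep the Input layer first
```

Remember to fix the layer/blob counts on line 2 of the file afterwards.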

Next, replace yolov5s.param and yolov5s.bin in the assets folder of the original Android project with the converted files.

Then modify the source file yolov5ncnn_jni.cpp to update the output blob names of the two Permute nodes.
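To know which blob names to put into the two extract calls, you can read them straight out of the converted param file instead of hunting in Netron. A small helper, assuming the standard layer-line layout (`Type Name input_count output_count inputs... outputs... k=v...`):

```python
def permute_outputs(param_text):
    """Return the output blob name of every Permute layer in an ncnn .param.

    Each layer line reads "Type Name input_count output_count inputs...
    outputs... k=v..."; the outputs start right after the inputs.
    """
    names = []
    for line in param_text.splitlines():
        fields = line.split()
        if fields and fields[0] == "Permute":
            n_in, n_out = int(fields[2]), int(fields[3])
            names.extend(fields[4 + n_in: 4 + n_in + n_out])
    return names
```

Run it on model.param and copy the reported names into yolov5ncnn_jni.cpp.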

Finally, modify class_names to match your model's labels.

Recompile the project, connect the phone, install the APK, and run it.

The final detection results are as follows

FAQ

Here are a few frequently asked questions for your reference.

Could not install Gradle distribution from 'https://services.gradle.org/distributions/gradle-5.4.1-all.zip'.

Close Android Studio, manually download the package from services.gradle.org/distributio…, then go to the folder C:\Users\Administrator\.gradle\wrapper\dists\gradle-5.4.1-all\3221gyojl5jsh0helicew7rwx, delete everything in it, and copy in the downloaded package. Open Android Studio again and click Sync Project with Gradle Files in the upper right corner.

Cause: jcenter.bintray.com:443 failed to respond

This problem may be related to the proxy. Go to File -> Settings -> HTTP Proxy and disable the proxy.

Alternatively, edit the file ~/.gradle/gradle.properties and comment out the proxy-related lines.
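Commenting those lines out can be scripted too. A sketch that targets the standard `systemProp.http.proxy*` / `systemProp.https.proxy*` keys of gradle.properties (the helper name is my own):

```python
def comment_out_proxy(text):
    """Comment out Gradle proxy settings in gradle.properties content.

    Lines starting with systemProp.http(.s) and containing ".proxy" are
    prefixed with '# '; every other line is left untouched.
    """
    out = []
    for line in text.splitlines():
        s = line.strip()
        if s.startswith("systemProp.http") and ".proxy" in s:
            out.append("# " + line)
        else:
            out.append(line)
    return "\n".join(out)
```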

Another error is a TIFF-related message while compiling ncnn.

This is mainly caused by the Anaconda environment; my fix is to step out of the Anaconda environment entirely:

conda deactivate
unset LD_LIBRARY_PATH

The last problem is a common error encountered during model conversion.

This error is caused by the output names in yolov5ncnn_jni.cpp not matching the actual model.

Download the source code

Baidu netdisk link: pan.baidu.com/s/1U4XfNSeM… Extraction code: X8OI

References

  • Official GitHub
  • What’s new in YOLOv5 4.0?
  • How to train the model
  • ncnn
  • Visualization of model training
  • Android Studio Gradle build failure solution
  • github.com/daquexian/o…
  • github.com/protocolbuf…
  • github.com/Tencent/ncn…