The environment

  • Ubuntu 18.04 64-bit
  • GTX 1070Ti
  • Anaconda with Python 3.7
  • PyTorch 1.6.0 + CUDA 10.1
  • FastReID 1.3.0

Preface

FastReID is a state-of-the-art ReID research platform open-sourced by JD AI Research. It is currently the most complete and best-performing ReID toolbox, covering model training, model evaluation, model fine-tuning and model deployment. The training part supports not only single-GPU and multi-GPU training, but also multi-machine training.

FastReID can be used not only for pedestrians but also for other objects such as vehicles. The vehicle datasets supported out of the box include VeRi, VehicleID and VERIWild, so it covers a wide range of application scenarios.

Architecture

This is the system architecture diagram taken from the FastReID paper.

The figure above contains two parts: training and inference.

The training component includes

  1. Pre-processing includes various data augmentation methods, such as Resize, Flipping, Random erasing, Auto-augment, Random patch and Cutout

  2. Backbone refers to the backbone network, such as ResNet, ResNeSt and ResNeXt, optionally enhanced with Instance-Batch Normalization (IBN) and Non-local blocks

  3. Aggregation combines the features produced by the backbone network into a single global feature, using e.g. Attention, GeM pooling, Avg pooling or Max pooling

  4. The Head module post-processes the global feature, e.g. normalization and dimensionality reduction

  5. Loss includes cross-entropy Loss, Triplet Loss, Arcface Loss, and Circle Loss

  6. In addition, the training strategy includes learning-rate warm-up, backbone freezing and cosine decay
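To make the aggregation step above concrete, here is a minimal NumPy sketch of GeM (generalized-mean) pooling. The function name and shapes are illustrative only, not FastReID's actual API; the real implementation is a learnable PyTorch module.

```python
import numpy as np

def gem_pool(feature_map, p=3.0, eps=1e-6):
    """Generalized-mean (GeM) pooling over the spatial dimensions.

    feature_map: array of shape (C, H, W) produced by the backbone.
    p = 1 reduces to average pooling; large p approaches max pooling,
    so a learnable p interpolates between the two.
    """
    x = np.clip(feature_map, eps, None)                 # keep values positive for the p-th root
    return (x ** p).mean(axis=(1, 2)) ** (1.0 / p)      # -> global feature of shape (C,)

# p interpolates between average pooling (p=1) and max pooling (p -> inf)
fmap = np.random.rand(4, 8, 8) + 0.1
print(gem_pool(fmap, p=1.0))   # identical to the per-channel mean
```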

The inference part includes

  1. Distance metrics: Euclidean distance, cosine distance and Deep Spatial Reconstruction (DSR)

  2. Post-processing: K-reciprocal re-ranking and Query Expansion (QE)
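As a sketch of the distance-metric step, the cosine distance between query and gallery features can be computed as below. This is a plain NumPy illustration, not FastReID's implementation:

```python
import numpy as np

def cosine_distance_matrix(query_feats, gallery_feats):
    """Pairwise cosine distance: 1 - cosine similarity.

    query_feats: (num_query, dim), gallery_feats: (num_gallery, dim).
    Returns a (num_query, num_gallery) matrix; 0 means identical direction.
    """
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    return 1.0 - q @ g.T

query = np.array([[1.0, 0.0], [0.0, 2.0]])
gallery = np.array([[2.0, 0.0]])
print(cosine_distance_matrix(query, gallery))  # [[0.], [1.]]
```

Ranking the gallery by ascending distance per query row gives the retrieval result; re-ranking and QE then refine this matrix.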

Setup

Create a new virtual environment

conda create -n pytorch1.6 python=3.7
conda activate pytorch1.6

Then pull the source code; the current stable version is v1.3.0

wget https://github.com/JDAI-CV/fast-reid/archive/refs/tags/v1.3.0.zip
unzip v1.3.0.zip
cd fast-reid-1.3.0

Finally, install the other dependencies

# PyTorch 1.6.0 built for CUDA 10.1
pip install torch==1.6.0+cu101 torchvision==0.7.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html
pip install -r docs/requirements.txt
# use cython to accelerate evaluation
pip install cython
cd fastreid/evaluation/rank_cylib
make all

Training and Verification

FastReID already has native support for a variety of datasets such as Market-1501, DukeMTMC, MSMT17, VehicleID, VeRi and VERIWild. Datasets are stored in the datasets folder by default, but you can also specify another directory with the FASTREID_DATASETS environment variable, or symlink the folder to the real data directory.
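For example, to keep the data elsewhere (the paths below are made up for illustration), either export the environment variable or symlink the default directory:

```shell
# point FastReID at a custom dataset root (example path)
mkdir -p /tmp/reid_datasets
export FASTREID_DATASETS=/tmp/reid_datasets

# or keep the default "datasets" folder as a symlink to the real data:
# ln -s /data/reid/datasets datasets
echo "$FASTREID_DATASETS"
```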

Let’s use the Market-1501 dataset as an example for training. First, download the dataset

Link: pan.baidu.com/s/1i9aiZx-E… Extraction code: UP8X

After downloading, unzip it into the datasets directory; the directory structure looks like this

(pytorch1.6) xugaoxiang@1070Ti:~/workshop/fast-reid-1.3.0/datasets$ tree -L 2
.
└── Market-1501-v15.09.15
    ├── bounding_box_test
    ├── bounding_box_train
    ├── gt_bbox
    ├── gt_query
    └── query

6 directories

Then you can start training with the command

python tools/train_net.py --config-file ./configs/Market1501/bagtricks_R50.yml

Evaluate the trained model with

python tools/train_net.py --config-file ./configs/Market1501/bagtricks_R50.yml --eval-only MODEL.WEIGHTS "logs/market1501/bagtricks_R50/model_final.pth"

Visualization of training results

python ./demo/visualize_result.py --config-file "configs/Market1501/AGW_R50.yml" --vis-label --dataset-name 'Market1501' --output 'logs/market1501/agw_R50/agw_market1501_vis' --opts MODEL.WEIGHTS "logs/market1501/agw_R50/model_final.pth"

I ran into an error here

Traceback (most recent call last):
  File "demo/visualize_result.py", line 127, in <module>
    distmat = distmat.numpy()
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

We change line 127 of demo/visualize_result.py from

distmat = distmat.numpy()

to

distmat = distmat.cpu().numpy()

I have submitted a PR to the official repository and hope it gets merged. Then we execute the above script again

The results are saved under logs/market1501/agw_R50/agw_market1501_vis

Converting to other formats

FastReID provides a series of Python scripts to help you quickly convert FastReID models into other formats such as Caffe, ONNX and TensorRT (TRT)

Let’s take ONNX conversion as an example and look at the specific steps

python tools/deploy/onnx_export.py --config-file configs/Market1501/bagtricks_R50.yml --name baseline_R50 --output onnx_model --opts MODEL.WEIGHTS logs/market1501/bagtricks_R50/model_final.pth

Once converted, we prepare several images of the same person and place them in the input folder

Then execute

python tools/deploy/onnx_inference.py --model-path onnx_model/baseline_R50.onnx --input input/*.jpg --output onnx_output
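If you want to run the exported model with your own code instead of the bundled script, the image must first be shaped into a 1xCxHxW batch for the network. The sketch below is only an illustration, using a dependency-free nearest-neighbour resize; for real use, follow the exact resizing and normalization done in tools/deploy/onnx_inference.py.

```python
import numpy as np

def to_blob(image, height=256, width=128):
    """Turn an HWC uint8 image into a 1xCxHxW float32 blob.

    Uses nearest-neighbour index sampling so this sketch needs no extra
    dependencies; the official script uses proper image resizing.
    """
    ys = np.linspace(0, image.shape[0] - 1, height).astype(int)
    xs = np.linspace(0, image.shape[1] - 1, width).astype(int)
    resized = image[ys][:, xs].astype(np.float32)      # (H, W, C)
    return resized.transpose(2, 0, 1)[None]            # (1, C, H, W)

img = (np.random.rand(480, 240, 3) * 255).astype(np.uint8)
print(to_blob(img).shape)  # (1, 3, 256, 128)
```

The resulting blob is what an ONNX Runtime session would take as input for the exported baseline_R50 model.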

Resources

  • github.com/JDAI-CV/fas…
  • arxiv.org/pdf/2006.02…
  • xugaoxiang.com/2021/03/10/…