The environment

  • Ubuntu 18.04 64-bit
  • YOLOv5
  • DeepSort
  • FastReID

Preface

Pedestrian detection and tracking based on YOLOv5 and DeepSort were introduced in a previous article. This article introduces another project, which combines FastReID to detect, track, and re-identify pedestrians. Project address: github.com/zengwb-lx/Y… The two main examples given by the author are also very practical: pedestrian flow statistics, and searching for and tracking specific targets in a crowd.

Reproducing the project

First, create a new virtual environment

conda create -n pytorch1.6 python=3.7
conda activate pytorch1.6

Then pull the source code

git clone https://github.com/zengwb-lx/Yolov5-Deepsort-Fastreid.git
cd Yolov5-Deepsort-Fastreid

Then install the other dependencies

# if you have a GPU (these are the CUDA 10.1 wheels)
pip install torch==1.6.0+cu101 torchvision==0.7.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html

# edit requirements.txt and comment out torch and torchvision first
pip install -r requirements.txt

cd fast_reid/fastreid/evaluation/rank_cylib
make all
cd ../../../../
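To confirm the installation worked, a quick sanity check (not part of the original steps):

import torch
import torchvision

print(torch.__version__)          # expect 1.6.0+cu101
print(torchvision.__version__)    # expect 0.7.0+cu101
print(torch.cuda.is_available())  # True if the GPU is usable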

Let’s start with a pedestrian counting demo

python person_count.py

The author of YOLOv5 hosts the model file on both googleapis and GitHub, but the two yolov5s.pt files are different; you can check with md5sum. The GitHub model file is the correct one.
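If you want to compare the two downloads yourself, here is a small Python equivalent of md5sum (the file path is illustrative):

import hashlib

def md5(path, chunk_size=8192):
    # stream the file so large weight files don't need to fit in memory
    h = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            h.update(chunk)
    return h.hexdigest()

print(md5('weights/yolov5s.pt'))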

If you run it and hit the following error:

2021-07-13 14:22:20 [INFO]: Loading weights from ./deep_sort/deep/checkpoint/ckpt.t7... Done!
Traceback (most recent call last):
  File "person_count.py", line 244, in <module>
    yolo_reid.deep_sort()
  File "person_count.py", line 121, in deep_sort
    bbox_xywh, cls_conf, cls_ids, xy = self.person_detect.detect(video_path, img, ori_img, vid_cap)
  File "/home/xugaoxiang/Works/Yolov5-Deepsort-Fastreid/person_detect_yolov5.py", line 95, in detect
    pred = self.model(img, augment=self.augment)[0]
  File "/home/xugaoxiang/anaconda3/envs/pytorch1.6/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/xugaoxiang/Works/Yolov5-Deepsort-Fastreid/models/yolo.py", line 111, in forward
    return self.forward_once(x, profile)  # single-scale inference, train
  File "/home/xugaoxiang/Works/Yolov5-Deepsort-Fastreid/models/yolo.py", line 131, in forward_once
    x = m(x)  # run
  File "/home/xugaoxiang/anaconda3/envs/pytorch1.6/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/xugaoxiang/Works/Yolov5-Deepsort-Fastreid/models/yolo.py", line 36, in forward
    self.training |= self.export
  File "/home/xugaoxiang/anaconda3/envs/pytorch1.6/lib/python3.7/site-packages/torch/nn/modules/module.py", line 772, in __getattr__
    type(self).__name__, name))
torch.nn.modules.module.ModuleAttributeError: 'Detect' object has no attribute 'export'

This indicates a problem with the model file; it is recommended to download it with the shell script that comes with the source code:

sh weights/download_weights.sh

Let's take a look at the basic principle behind the pedestrian counting:

First, the author wraps YOLOv5's object detection in a class called Person_detect, through which every pedestrian in the video can be detected.
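For illustration, here is a minimal stand-in for such a wrapper, built on the off-the-shelf YOLOv5 hub model rather than the project's own code (the class and method names are assumptions, not the repo's):

import torch

class PersonDetector:
    def __init__(self, conf_thres=0.5):
        # pretrained yolov5s from the ultralytics hub, not the repo's local weights
        self.model = torch.hub.load('ultralytics/yolov5', 'yolov5s')
        self.model.conf = conf_thres

    def detect(self, img):
        results = self.model(img)
        det = results.xyxy[0]          # (N, 6): x1, y1, x2, y2, conf, cls
        persons = det[det[:, 5] == 0]  # COCO class 0 is 'person'
        return persons.cpu().numpy()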

Then a reference line is set in the frame; all you need to provide are the coordinates of its two endpoints:

line = [(0, int(0.48 * ori_img.shape[0])), (int(ori_img.shape[1]), int(0.48 * ori_img.shape[0]))]
cv2.line(ori_img, line[0], line[1], (0, 255, 255), 4)

Next, a tracker is created to track each object detected by YOLOv5. The center point of the target's predicted bounding box is used as the reference point, as sketched below.
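A minimal sketch of that center-point calculation, assuming boxes in (top-left x, top-left y, bottom-right x, bottom-right y) form:

def tlbr_midpoint(box):
    # center of a (min_x, min_y, max_x, max_y) bounding box
    min_x, min_y, max_x, max_y = box
    return (int((min_x + max_x) / 2), int((min_y + max_y) / 2))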

If the segment connecting a target's center points in the previous and current frames intersects the reference line, the target is considered to have crossed the line (a common way to test this is sketched below). That still leaves the question of direction: did it go up or down?
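The crossing test itself can be done with the classic counter-clockwise orientation check used by many counting demos; a sketch (function names are illustrative):

def ccw(A, B, C):
    # True if points A, B, C are ordered counter-clockwise
    return (C[1] - A[1]) * (B[0] - A[0]) > (B[1] - A[1]) * (C[0] - A[0])

def intersect(A, B, C, D):
    # True if segment AB crosses segment CD
    return ccw(A, C, D) != ccw(B, C, D) and ccw(A, B, C) != ccw(A, B, D)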

To decide, the author takes the arctangent of the displacement vector between the two center points (math.atan2, converted to degrees with math.degrees): if the angle is > 0 the target is going up, otherwise it is going down.

import math

def vector_angle(midpoint, previous_midpoint):
    # displacement of the box center between two frames
    x = midpoint[0] - previous_midpoint[0]
    y = midpoint[1] - previous_midpoint[1]
    # angle of the displacement vector, in degrees
    return math.degrees(math.atan2(y, x))
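Putting it together, the direction check might look like this (the counter names and example coordinates are illustrative, not the repo's exact variables):

up_count, down_count = 0, 0
previous_midpoint, midpoint = (318, 230), (320, 200)  # example center points

angle = vector_angle(midpoint, previous_midpoint)
if angle > 0:
    up_count += 1    # per the author's convention, a positive angle means going up
else:
    down_count += 1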

Having seen the pedestrian counting example, let's look at the person re-identification example

python person_search_reid.py

Running it produces an error:

Fusing layers...
Traceback (most recent call last):
  File "person_search_reid.py", line 120, in <module>
    yolo_reid = yolo_reid(cfg, args, path=args.video_path)
  File "person_search_reid.py", line 35, in __init__
    self.deepsort = build_tracker(cfg, args.sort, use_cuda=use_cuda)
  File "/home/xugaoxiang/Works/Yolov5-Deepsort-Fastreid/deep_sort/__init__.py", line 18, in build_tracker
    max_age=cfg.DEEPSORT.MAX_AGE, n_init=cfg.DEEPSORT.N_INIT, nn_budget=cfg.DEEPSORT.NN_BUDGET, use_cuda=use_cuda)
  File "/home/xugaoxiang/Works/Yolov5-Deepsort-Fastreid/deep_sort/deep_reid.py", line 29, in __init__
    self.extractor = Reid_feature()
  File "./fast_reid/demo/demo.py", line 84, in __init__
    cfg = setup_cfg(args)
  File "./fast_reid/demo/demo.py", line 35, in setup_cfg
    cfg.merge_from_file(args.config_file)
  File "./fast_reid/fastreid/config/config.py", line 107, in merge_from_file
    cfg_filename, allow_unsafe=allow_unsafe
  File "./fast_reid/fastreid/config/config.py", line 50, in load_yaml_with_base
    with PathManager.open(filename, "r") as f:
  File "./fast_reid/fastreid/utils/file_io.py", line 357, in open
    path, mode, buffering=buffering, **kwargs
  File "./fast_reid/fastreid/utils/file_io.py", line 251, in _open
    opener=opener,
FileNotFoundError: [Errno 2] No such file or directory: '../../kd-r34-r101_ibn/config-test.yaml'

This means the configuration file is missing; download it from the link below.

Link: pan.baidu.com/s/1bMG3qy7n… Extract code: HY1m

Save both files in a directory named kd-r34-r101_ibn, then edit the source file fast_reid/demo/demo.py, changing

default='../../kd-r34-r101_ibn/config-test.yaml',

to

default='kd-r34-r101_ibn/config-test.yaml',

Then change line 68 from

default=['MODEL.WEIGHTS', '../../kd-r34-r101_ibn/model_final.pth'],

to

default=['MODEL.WEIGHTS', 'kd-r34-r101_ibn/model_final.pth'],

Then run the script person_search_reid.py again and it works.

As can be seen, because the features of two pedestrians (A1111111111 and B2222222222) were extracted in advance, these two people can be identified and tracked in the frame. By default, the feature files are saved under fast_reid/query.

Feature extraction

If you want to create your own feature files, follow these steps.

First, crop out images of the target person and store them in a folder named after that target, such as xugaoxiang.com here; that name will then be displayed during recognition. Copy this folder into the fast_reid/query directory. The directory structure looks like this:

(pytorch1.6) xugaoxiang@1070Ti:~/Works/Yolov5-Deepsort-Fastreid/fast_reid/query$ tree .
├── names.npy
├── query_features.npy
└── xugaoxiang.com
    ├── 10.png
    ├── 11.png
    ├── 12.png
    ├── 13.png
    ├── 14.png
    ├── 15.png
    ├── 1.png
    ├── 2.png
    ├── 3.png
    ├── 4.png
    ├── 5.png
    ├── 6.png
    ├── 7.png
    ├── 8.png
    └── 9.png

Next, execute

cd fast_reid/demo
python person_bank.py

After it finishes, query_features.npy and names.npy in the query directory are updated.
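For a sense of how these two files can then be used at run time, here is a minimal matching sketch: normalize the stored query features and compare a track's feature against them by cosine similarity. This is an illustration of the idea, not the project's exact code, and the threshold is an assumed value.

import numpy as np

query_features = np.load('fast_reid/query/query_features.npy')  # (N, D), one row per query image
names = np.load('fast_reid/query/names.npy')                    # (N,), matching person names

def identify(track_feature, threshold=0.5):
    # cosine similarity = dot product of L2-normalized vectors
    q = query_features / np.linalg.norm(query_features, axis=1, keepdims=True)
    t = track_feature / np.linalg.norm(track_feature)
    sims = q @ t
    best = int(np.argmax(sims))
    return names[best] if sims[best] > threshold else 'unknown'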

Finally, find a video that contains the target and test the effect

Project download

Finally, all the required files have been packaged; download them if you need them.

Link: pan.baidu.com/s/1JiFzo5_H… Extract code: NAUU

References

  • Github.com/zengwb-lx/Y…
  • Blog.csdn.net/zengwubbb/a…
  • Fastreid profile