
Preface

In this article, we'll implement target tracking with OpenCV in Python. Without further ado, let's have a good time.

Development tools

Python version: 3.6.4

Related modules:

cv2 module (opencv-python);

And some modules that come with Python.

Environment setup

Install Python, add it to the environment variables, and install the required modules with pip.
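For example, assuming the cv2 module comes from the opencv-python package on PyPI, a quick sanity check that the installation worked might look like this:

import cv2

# Installed via: pip install opencv-python
# Print the OpenCV version to confirm the module imports correctly
print(cv2.__version__)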

Target tracking refers to the process of locating a moving target in a video.

It has many application scenarios in today's AI industry, such as surveillance and assisted driving.

Inter-frame difference

Target tracking can be achieved by calculating the difference between video frames, that is, the difference between a background frame and each subsequent frame.
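The core of the idea fits in a few lines. A minimal sketch, where the two grayscale images and their filenames are hypothetical stand-ins for a background frame and a current frame:

import cv2

# Hypothetical background and current frames, loaded as grayscale
background = cv2.imread('background.png', cv2.IMREAD_GRAYSCALE)
current = cv2.imread('current.png', cv2.IMREAD_GRAYSCALE)

# Per-pixel absolute difference; moving regions show up as bright pixels
diff = cv2.absdiff(background, current)
# Keep only pixels that changed by more than 25
mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1]

The full implementation below applies the same steps frame by frame.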

Code implementation

import cv2

# Get video
video = cv2.VideoCapture('007.mp4')

# Generate an elliptical structuring element
es = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 4))
# Set the background frame
background = None

while True:
    # Read each frame of the video
    ret, frame = video.read()
    # Stop when the video ends
    if not ret:
        break

    # Use the first frame as the background frame
    if background is None:
        # Convert the first frame of the video to a grayscale image
        background = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Apply a Gaussian blur to smooth the grayscale image
        background = cv2.GaussianBlur(background, (21, 21), 0)
        continue

    # Convert the current frame to a grayscale image
    gray_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Apply a Gaussian blur to smooth the grayscale image
    gray_frame = cv2.GaussianBlur(gray_frame, (21, 21), 0)

    # Take the absolute difference between the current frame and the background frame
    diff = cv2.absdiff(background, gray_frame)

    # Threshold the difference image to get a black-and-white mask
    diff = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1]

    # Dilate the image to reduce errors
    diff = cv2.dilate(diff, es, iterations=2)

    # Find the target contours in the image
    # (OpenCV 4.x returns two values; OpenCV 3.x also returns the image first)
    cnts, hierarchy = cv2.findContours(diff.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    for c in cnts:
        # Skip small contours (noise)
        if cv2.contourArea(c) < 1500:
            continue
        # Draw a rectangle around the target
        (x, y, w, h) = cv2.boundingRect(c)
        cv2.rectangle(frame, (x + 2, y + 2), (x + w, y + h), (0, 255, 0), 2)

    # Display the detection video
    cv2.namedWindow('contours', 0)
    cv2.resizeWindow('contours', 600, 400)
    cv2.imshow('contours', frame)

    # Show the difference video
    cv2.namedWindow('diff', 0)
    cv2.resizeWindow('diff', 600, 400)
    cv2.imshow('diff', diff)
    if cv2.waitKey(1) & 0xff == ord('q'):
        break

# Clean up
cv2.destroyAllWindows()
video.release()
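To try the same script on a live camera instead of a file, pass a device index to VideoCapture (a common variant, not part of the original script):

import cv2

# 0 is the default camera; use this in place of the video filename
video = cv2.VideoCapture(0)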

Background subtractor

OpenCV provides a BackgroundSubtractor class that can be used to separate a video's foreground from its background.

Background detection can also be improved through machine learning.

OpenCV offers three background subtractors, namely KNN, MOG2 and GMG, each of which computes the background segmentation with its corresponding algorithm.

The BackgroundSubtractor class compares frames against one another and stores previous frames, which improves the results of motion analysis over time.

It can also detect shadows, making it possible to exclude shadowed areas from the detected foreground.
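For reference, the other two subtractors are created much like the KNN one used below; GMG lives in the opencv-contrib bgsegm module, and the variable names here are illustrative:

import cv2

# MOG2 subtractor, which also supports shadow detection
mog2 = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

# GMG requires the contrib package (pip install opencv-contrib-python)
gmg = cv2.bgsegm.createBackgroundSubtractorGMG()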

Code implementation

import cv2

# Get video
video = cv2.VideoCapture('traffic.flv')
# KNN background subtractor with shadow detection enabled
bs = cv2.createBackgroundSubtractorKNN(detectShadows=True)

while True:
    # Read each frame of the video
    ret, frame = video.read()
    # Stop when the video ends
    if not ret:
        break
    # Compute the foreground mask for this frame
    fgmask = bs.apply(frame)
    # Threshold out the gray shadow pixels (marked as 127 when detectShadows=True)
    th = cv2.threshold(fgmask.copy(), 244, 255, cv2.THRESH_BINARY)[1]
    # Dilate the image to reduce errors
    dilated = cv2.dilate(th, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)), iterations=2)

    # Find the target contours in the image (OpenCV 4.x returns two values)
    contours, hier = cv2.findContours(dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    for c in contours:
        if cv2.contourArea(c) > 1600:
            # Draw a rectangle around the target
            (x, y, w, h) = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 255, 0), 2)

    # Show the foreground mask
    cv2.imshow('mog', fgmask)
    # cv2.imshow('thresh', th)
    # Display the detection video
    cv2.imshow('detection', frame)
    if cv2.waitKey(30) & 0xff == ord('q'):
        break

video.release()
cv2.destroyAllWindows()


The results are as follows