1. Problem Overview

With artificial intelligence booming, demand for computer vision talent is growing rapidly, and OpenCV, as a computer vision application development framework, is increasingly popular and in demand. It is therefore important and worthwhile to learn OpenCV on top of a foundation in Python: the technology has a mature learning ecosystem and a wide range of applications, and a working knowledge of OpenCV is of real value to college students.

2. Basic requirements of course design

The basic content of the design report includes at least three parts: cover, text and references.

1. The cover

The cover shall be filled in according to the template and shall not be changed at will.

2. The body

The main body of the design report is composed of the following parts:

(1) Background introduction and problem description

Describe the problem that requires programming to solve.

(2) Basic requirements

Give the specific requirements the program must meet.

(3) Demand analysis

State the program’s task in unambiguous terms, emphasizing what the program is supposed to do, and clearly specify:

The form of input and the range of input values;

The form of output;

The functions that the program can achieve;

Test data: both correct input with its expected output, and erroneous input with the resulting output.

(4) Outline design

Describe the flow of the main program and the hierarchical (call) relationships between the program modules.

(5) Detailed design

Implement all the data types defined in the outline design and give a listing of the key parts of the source program. The program must have sufficient comments: at minimum, the meaning of each function parameter and of each function's return value.

(6) Debugging analysis

The content includes how problems encountered during debugging were solved, together with a review and analysis of the design and implementation.

(7) User instructions

Explain how to use the program you have written, detailing each step.

(8) Test results

Design or specify test data. The test data should be complete and rigorous, so that the designed program's functionality can be tested comprehensively.

(9)

(10) References

List references to relevant materials and books.

3. OpenCV recognition design requirement analysis

Object recognition has been a hot topic in computer vision for more than ten years, and every year a large amount of work in this direction appears in the top conferences and journals. In some specific application scenarios, such as face recognition and fingerprint recognition, it has reached relatively mature deployment. This paper studies the relevant theory of object recognition and, based on OpenCV, designs and implements a "colorful space painting" application that attempts to solve colored-path recognition. The main work is an object recognition scheme based on color features: the first research content of this paper is the color recognition algorithm, which consists of two parts, feature extraction and feature matching. A deeper understanding of feature detection algorithms helps in selecting and optimizing suitable algorithms for specific scenarios in practical applications.

4. Outline design of colorful space drawing

The main purpose of this project is to read visual information shown to the camera, analyze it for the color to be identified, mark that color with a dot, and draw dots continuously, so that a path is painted. Techniques that may be used include: window creation and teardown; erosion, to eliminate noise specks; dilation, to reconnect regions split apart by noise and shadow; opening (erosion followed by dilation); closing (dilation followed by erosion); the morphological gradient; the top-hat and black-hat transforms; resizing; window stacking; trackbars for parameter adjustment; binarization; and so on. These pieces are learned as separate modules and then integrated into one overall project, mainly following the OpenCV material covered in different chapters of the official documentation on GitHub.

4.1 Colorful space drawing development environment

Computer model: Mechanical Revolution X6Ti
OS: Windows 10 Professional Edition (Windows Feature Experience Pack 120.2212.31.0), 2020/9/10
Tools: PyCharm 2020.1 x64, VSCode, Python 3.6

4.2 OpenCV technology

OpenCV is a cross-platform computer vision and machine learning software library distributed under the BSD license (open source) that runs on Linux, Windows, Android, and Mac OS. [1] It is lightweight and efficient: it consists of a series of C functions and a small number of C++ classes, provides interfaces to Python, Ruby, MATLAB, and other languages, and implements many common algorithms in image processing and computer vision. OpenCV is written in C++ and has C++, Python, Java, and MATLAB interfaces; support for C#, Ch, Ruby, and Go is also available. It mainly targets real-time vision applications and utilizes MMX and SSE instructions when available. OpenCV runs on Windows, Android, Maemo, FreeBSD, OpenBSD, iOS, Linux, and Mac OS. Users can get the official releases from SourceForge or the development version from SVN, and OpenCV builds with CMake. Some of the base classes in the DirectShow SDK are required to compile the camera-input parts of OpenCV on Windows; this SDK is available in the Samples\Multimedia\DirectShow\BaseClasses subdirectory of the precompiled Microsoft Platform SDK (or DirectX SDK 8.0 to 9.0C / DirectX Media SDK prior to 6.0).

Python is a general-purpose programming language started by Guido van Rossum that quickly became very popular, mainly because of its simplicity and code readability. It allows programmers to express ideas in fewer lines of code without compromising readability. Python is slow compared to languages like C/C++. That said, Python can easily be extended with C/C++, which lets us write computationally intensive code in C/C++ and create Python wrappers usable as Python modules. This gives us two advantages: first, the code is as fast as the original C/C++ code (because it is actual C++ code running in the background), and second, it is easier to write code in Python than in C/C++.
opencv-python is the Python wrapper for the original OpenCV C++ implementation. It makes use of NumPy, a highly optimized library for numerical computation with a MATLAB-style syntax. All OpenCV array structures are converted to and from NumPy arrays, which also makes it easier to integrate with other libraries that use NumPy, such as SciPy and Matplotlib.

4.3 The overall structure design of colorful space drawing

Figure 1 shows the overall structure of colorful space drawing, which is divided into four modules: reading image frames from the camera, obtaining the color, the basic drawing unit (the dot), and path drawing. The camera module creates a window on screen in which the drawn path is displayed. The color acquisition module mainly uses cv2 erosion (to eliminate noise specks), dilation, and related functions to obtain and identify the colors to be recognized. The basic unit module draws a colored circle; this circle is the basic unit of the drawn path, and repeating it continuously draws the path. The path drawing module repeatedly draws the basic unit dot through a function to realize path drawing.

5. Detailed design of colorful space drawing

Among the four modules, the first to be designed is reading image frames from the camera. Then simple recognition is carried out, and adjustment trackbars are added so the thresholds can be tuned for the best recognition effect, yielding the HSV range of the color to be recognized. With those values, OpenCV functions can be called to draw the basic unit circle, and connecting the circles realizes path drawing. The process requires many rounds of experimental debugging, and each module may go through many revisions, from preliminary design to code and algorithm writing, and then to optimization of the algorithm and code.

5.1 Design of image frame for camera reading

We need to find our colors using a webcam, and then create paintings by placing dots wherever the colors are found. So the first thing we need is the webcam; following chapter one of the official documentation, we invoke it.

import cv2

frameWidth = 640   # chapter1
frameHeight = 480  # chapter1
cap = cv2.VideoCapture(1)  # chapter1
cap.set(3, frameWidth)     # chapter1: property 3 = frame width
cap.set(4, frameHeight)    # chapter1: property 4 = frame height
cap.set(10, 150)           # chapter1: property 10 = brightness

while True:
    success, img = cap.read()
    cv2.imshow("Result", img)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

First we import the OpenCV library with import cv2. We set two variables, the window width 640 and the window height 480, via frameWidth = 640 and frameHeight = 480, and apply them through property IDs 3 and 4 respectively. We also set the window brightness; after testing, 150 was chosen. The VideoCapture function opens the camera; its parameter is the camera index, where 0 selects the default camera and 1 a second camera. On this machine index 1 has no camera attached, so it was left at 1 while writing the code and changed to 0 once the camera was actually needed.

We then loop, reading each frame and displaying it with the imshow function in a window named Result. To provide an exit, pressing 'q' on the keyboard quits the loop.

At this point we test it; the expected result is a live camera view.

Running the program to test the camera, the camera is read successfully: four fingers are held up and four fingers appear on screen. The test passes.

Then the exit function is tested: pressing 'q' on the keyboard exits, and the test succeeds. Note that when the input method is in Chinese mode, you need to switch to English mode first before pressing 'q'.

5.2 Obtaining the Color Design

After the window is created successfully, the next step is obtaining the color. To find our colors we need to introduce color-detection code; this part is covered in chapter 7 of the official documentation, which we can read to learn the functions and write the code. We need to tune upper and lower thresholds, and we also need to convert the frame into HSV space. In the code, we define a function to find our color, as follows.

def findColor(img, myColors, myColorValues):  # Find the color function
    imgHSV = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)  # chapter2
    count = 0  # counter for the current color's index
    newPoints = []
    for color in myColors:
        lower = np.array(color[0:3])  # chapter2: take the first three values
        upper = np.array(color[3:6])  # chapter2: take values 3-6
        mask = cv2.inRange(imgHSV, lower, upper)  # chapter2
        x, y = getContours(mask)  # call the function that gets the contour
        # Draw a filled circle centered at (x, y) with radius 8 in the recognized color
        cv2.circle(imgResult, (x, y), 8, myColorValues[count], cv2.FILLED)
        # If x and y are both nonzero, append the newly recorded point
        if x != 0 and y != 0:
            newPoints.append([x, y, count])
        count += 1
        # cv2.imshow(str(color[0]), mask)
    return newPoints

In the first step, we test the function by passing it an image and converting it to HSV space. To see whether it works, we add a cv2.imshow() call for testing.

def findColor(img):  # Find the color function
    imgHSV = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    lower = np.array([h_min, s_min, v_min])
    upper = np.array([h_max, s_max, v_max])
    mask = cv2.inRange(imgHSV, lower, upper)
    cv2.imshow("img", mask)

The cv2.imshow("img", mask) here is just for testing and will be removed later.

We don't want to find just one color; we want to find different colors, so when calling findColor() we want it to detect any of several color types and return them as output. Above the function, we first define a list holding the minimum and maximum color values. We call this list myColors; it is simply the list of colors we want to detect, so each entry holds the minimum and maximum hue, saturation, and value. Here we choose some specific colors, such as red, blue, and green, and record threshold values for each; the webcam helps us choose the right HSV values.

Below is the code for tuning the detection thresholds.

import cv2
import numpy as np

frameWidth = 640
frameHeight = 480
cap = cv2.VideoCapture(1)
cap.set(3, frameWidth)
cap.set(4, frameHeight)
cap.set(10, 150)

def empty(a):
    pass

cv2.namedWindow("HSV")
cv2.resizeWindow("HSV", 640, 240)
cv2.createTrackbar("HUE Min", "HSV", 0, 179, empty)
cv2.createTrackbar("SAT Min", "HSV", 0, 255, empty)
cv2.createTrackbar("VALUE Min", "HSV", 0, 255, empty)
cv2.createTrackbar("HUE Max", "HSV", 179, 179, empty)
cv2.createTrackbar("SAT Max", "HSV", 255, 255, empty)
cv2.createTrackbar("VALUE Max", "HSV", 255, 255, empty)

while True:
    _, img = cap.read()
    imgHsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

    h_min = cv2.getTrackbarPos("HUE Min", "HSV")
    h_max = cv2.getTrackbarPos("HUE Max", "HSV")
    s_min = cv2.getTrackbarPos("SAT Min", "HSV")
    s_max = cv2.getTrackbarPos("SAT Max", "HSV")
    v_min = cv2.getTrackbarPos("VALUE Min", "HSV")
    v_max = cv2.getTrackbarPos("VALUE Max", "HSV")
    print(h_min)

    lower = np.array([h_min, s_min, v_min])
    upper = np.array([h_max, s_max, v_max])
    mask = cv2.inRange(imgHsv, lower, upper)
    result = cv2.bitwise_and(img, img, mask=mask)

    mask = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR)
    hStack = np.hstack([img, mask, result])
    # cv2.imshow('Original', img)
    # cv2.imshow('HSV Color Space', imgHsv)
    # cv2.imshow('Mask', mask)
    # cv2.imshow('Result', result)
    cv2.imshow('Horizontal Stacking', hStack)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

The code here mainly draws on chapter 7 of the official documentation. It integrates the previously manual threshold adjustment into trackbars: as the trackbars are moved, the frame is converted into HSV space and color feature samples can be obtained more conveniently. Based on the code in Figure 3, trackbars are added with upper and lower limits, so the tuning stays within a definite range.

We change cap = cv2.VideoCapture(1) to cap = cv2.VideoCapture(0) and run the code to test it.

Above is the detection window; below it, the left side shows the threshold value output and the right side the trackbar adjustments.

First we detect red, then blue and green. For red and blue we mainly use a red pen and a blue pen, and for green the green parts of a correction tape.

For the red detection result, red needs to stay white in the mask while other parts are kept black as far as possible.

We can see on screen that, after adjusting the parameters to keep red white, the values are: HUE Min 0, SAT Min 91, VALUE Min 0, HUE Max 12, and SAT Max and VALUE Max both 255.

But skin color on the hand was also detected, so we adjust the HUE Max value down to 9, which effectively reduces the hand's contribution.

So we write down the values we need, 0, 91, 0, 9, 255, 255, and add them to the myColors list.

For the blue detection result, blue needs to stay white in the mask while other parts are kept black as far as possible.

After adjusting the parameters to keep blue white, the values are: HUE Min 97, SAT Min 107, VALUE Min 36, HUE Max 179, SAT Max 255, and VALUE Max 193.

However, colors from the surrounding environment are also detected, and we need to adjust further to optimize as much as possible.

After adjustment, the final values are 104, 100, 33, 174, 255, 226. Add them to the myColors list.

For the green detection result, green needs to stay white in the mask while other parts are kept black as far as possible.

The green parameters are easy to adjust and need no second round of optimization; the values obtained are 77, 150, 0, 91, 255, 255. Add them to the myColors list.

We then have the minimum and maximum HSV values for all three colors in our myColors list.

myColors = [[0, 91, 0, 9, 255, 255],
            [104, 100, 33, 174, 255, 226],
            [77, 150, 0, 91, 255, 255]]

Now we can create the mask. First, we import the NumPy library and alias it as np via import numpy as np.

import cv2
import numpy as np

frameWidth = 640   # chapter1
frameHeight = 480  # chapter1
cap = cv2.VideoCapture(0)  # chapter1
cap.set(3, frameWidth)     # chapter1
cap.set(4, frameHeight)    # chapter1
cap.set(10, 150)           # chapter1

myColors = [[0, 91, 0, 9, 255, 255],        # minimum and maximum HSV values per color
            [104, 100, 33, 174, 255, 226],
            [77, 150, 0, 91, 255, 255]]


def findColor(img):  # Find the color function
    imgHSV = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    lower = np.array([h_min, s_min, v_min])
    upper = np.array([h_max, s_max, v_max])
    mask = cv2.inRange(imgHSV, lower, upper)
    cv2.imshow("img", mask)


while True:
    success, img = cap.read()
    cv2.imshow("Result", img)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

Then we need to fill in the lower, upper, and mask values. lower represents the minimum HSV bound and is the first three values of each myColors element, so lower = np.array(myColors[0][0:3]); upper represents the maximum HSV bound and is the last three values, so upper = np.array(myColors[0][3:6]). We also pass myColors into findColor as a parameter, changing the signature to def findColor(img, myColors). Finally, findColor is called inside the while loop.

while True:
    success, img = cap.read()
    findColor(img, myColors)
    cv2.imshow("Result", img)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

Then run the program and test it. We can see that red detection works, but this is only red; we also need to detect all the colors in the list. To do this, we add a for loop that iterates the variable color over myColors and replaces the hard-coded bounds with color[0:3] and color[3:6]. Finally, because there are now multiple windows, each needs a distinct name, so we use str() to name each window after the first value of its myColors element: 0, 104, and 77 respectively.

def findColor(img, myColors):  # Find the color function
    imgHSV = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    for color in myColors:
        lower = np.array(color[0:3])
        upper = np.array(color[3:6])
        mask = cv2.inRange(imgHSV, lower, upper)
        cv2.imshow(str(color[0]), mask)

Run the code and test it. As you can see, three masks are shown at the same time: red, blue, and green from left to right.

If we need to add more colors, we can add their HSV values directly to the myColors list and they will be recognized.

cv2.imshow(str(color[0]), mask) can now be commented out.

5.3 Basic unit dot design

We now need to find where the object we want to detect is in the image. To do this we get the contour and then approximate a bounding box around it, so we can find the object's position. Refer to chapter 8 of the official documentation for the contour and bounding box retrieval functions.

def getContours(img):  # chapter8
    contours, hierarchy = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)  # chapter8
    x, y, w, h = 0, 0, 0, 0
    for cnt in contours:
        area = cv2.contourArea(cnt)
        if area > 500:
            # cv2.drawContours(imgResult, cnt, -1, (255, 0, 0), 3)
            peri = cv2.arcLength(cnt, True)
            approx = cv2.approxPolyDP(cnt, 0.02 * peri, True)  # chapter8
            x, y, w, h = cv2.boundingRect(approx)  # chapter8
    return x + w // 2, y  # top-center of the tip; zeros are returned if no area > 500 is found

We create a copy of each frame, imgResult, in the while loop; it is the image the getContours() function draws onto.

while True:
    success, img = cap.read()
    imgResult = img.copy()
    findColor(img, myColors)
    cv2.imshow("Result", img)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

Now we can test whether detection works. In the findColor function we need to find the contour, so we call the getContours function there with the mask as its argument.

def findColor(img, myColors):  # Find the color function
    imgHSV = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    for color in myColors:
        lower = np.array(color[0:3])
        upper = np.array(color[3:6])
        mask = cv2.inRange(imgHSV, lower, upper)
        getContours(mask)
        # cv2.imshow(str(color[0]), mask)

Run the program to test. The test does not work, so we need to find the error.

The error is that at the end of the while loop we did not display the new image imgResult but displayed img, and nothing is visible because we removed cv2.imshow(str(color[0]), mask). We therefore change img to imgResult and run the code test again.

Figures: red, blue, and green contour detection.

We now have the bounding box around the color and need to pass a point onward. We could send the center of the box, but we want to draw from the pen tip rather than from the center of the detected object, so we use the top-center point: we return x + w // 2 and y, which gives the highest point of the tip at its horizontal center. In case no area greater than 500 is detected, some value must still be returned, so x, y, w, and h are initialized to 0.

def getContours(img):  # chapter8
    contours, hierarchy = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)  # chapter8
    x, y, w, h = 0, 0, 0, 0
    for cnt in contours:
        area = cv2.contourArea(cnt)
        if area > 500:
            cv2.drawContours(imgResult, cnt, -1, (255, 0, 0), 3)
            peri = cv2.arcLength(cnt, True)
            approx = cv2.approxPolyDP(cnt, 0.02 * peri, True)  # chapter8
            x, y, w, h = cv2.boundingRect(approx)  # chapter8
    return x + w // 2, y

We receive x and y in the findColor function by assigning the point returned by getContours(mask) to them. These values become the center of a circle drawn on the imgResult copy, with the radius set to 8 so the circle is not too large, just right as the basic unit of the drawn path. The dot is filled via cv2.FILLED. For example, we first set the color to (255, 0, 0); the color mode here is BGR, so a solid blue dot should appear in the result.

def findColor(img, myColors):  # Find the color function
    imgHSV = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    for color in myColors:
        lower = np.array(color[0:3])
        upper = np.array(color[3:6])
        mask = cv2.inRange(imgHSV, lower, upper)
        x, y = getContours(mask)
        cv2.circle(imgResult, (x, y), 8, (255, 0, 0), cv2.FILLED)
        # cv2.imshow(str(color[0]), mask)

Figures: red, blue, and green dot tests.

The test successfully draws the dots in all three colors.

One distinction to note: because the point we use is the top-center of the detected object, the dot lands on the object when the pen is held upright, but falls outside the object when the pen is tilted (figures: without tilt; with tilt). This is tricky to solve, so we will not go into detail and simply keep the pen upright for now. The next change is that the dot color displayed should not be our custom color but the color of the object itself. Now that the contour is detected correctly, we no longer need to draw it, so the drawContours call can be removed.

def getContours(img):  # chapter8
    contours, hierarchy = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)  # chapter8
    x, y, w, h = 0, 0, 0, 0
    for cnt in contours:
        area = cv2.contourArea(cnt)
        if area > 500:
            # cv2.drawContours(imgResult, cnt, -1, (255, 0, 0), 3)
            peri = cv2.arcLength(cnt, True)
            approx = cv2.approxPolyDP(cnt, 0.02 * peri, True)  # chapter8
            x, y, w, h = cv2.boundingRect(approx)  # chapter8
    return x + w // 2, y

We need to define the drawing colors: when red is detected, for example, what color should appear on the drawing. We name this list myColorValues and define in it all the colors we want; for now that is three, matching the three detected colors. We can convert from the corresponding HSV values using the conversion site www.rapidtables.com/web/color/R… and then write the colors into the myColorValues list in BGR order.

myColorValues = [[0, 0, 255],   # red
                 [255, 0, 0],   # blue
                 [0, 255, 0]]   # green

These are our colors; what we do now is draw solid circles with them. We need myColorValues as input, so it becomes a parameter of findColor, and the call in the while loop passes it in.

In the findColor function, we need a counter to track which color we are on, so we define a count variable initialized to 0. In findColor's for loop, count is incremented by one after each circle is drawn. When drawing the circle, we replace the preset color (255, 0, 0) with myColorValues[count], so each detected color is drawn with its own BGR value.
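Since count indexes myColors and myColorValues in parallel, any color added later must extend both lists in step. A hypothetical example of adding a fourth color (the yellow HSV bounds are placeholders to be tuned with the trackbars, not measured values):

```python
# Hypothetical fourth color; yellow bounds are illustrative placeholders
myColors = [[0, 91, 0, 9, 255, 255],        # red
            [104, 100, 33, 174, 255, 226],  # blue
            [77, 150, 0, 91, 255, 255],     # green
            [20, 100, 100, 35, 255, 255]]   # yellow (placeholder bounds)

myColorValues = [[0, 0, 255],    # red   (BGR)
                 [255, 0, 0],    # blue
                 [0, 255, 0],    # green
                 [0, 255, 255]]  # yellow (BGR)

# count indexes both lists in parallel, so they must stay the same length
assert len(myColors) == len(myColorValues)
```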

Run the program and test it (figures: red, blue, green).

5.4 Realizing the design of drawing paths

We now have the right colors and the right values; to plot the motion path we need to plot these points. We will keep a list of points and redraw it every frame, so at the top we first create a list called myPoints, where each element's first value is x, the second y, and the third the colorId.

myPoints = []  # [x, y, colorId]

We need a function that loops over the myPoints list, reads the x and y values, and draws. We create a new function called drawOnCanvas that takes myPoints and myColorValues as arguments. In its for loop, the variable point iterates over myPoints, and the circle-drawing statement from findColor is moved inside it, replacing x with point[0], y with point[1], and count with point[2]. In the while loop we assign the return value of findColor to newPoints.

while True:
    success, img = cap.read()
    imgResult = img.copy()
    newPoints = findColor(img, myColors, myColorValues)
    cv2.imshow("Result", imgResult)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

When a color is detected, findColor should return the point's values; when nothing is detected (x and y both 0), no point should be recorded for that color. So if x and y are both nonzero, we append [x, y, count] to a new list variable newPoints via newPoints.append([x, y, count]). The points join up to form a line, but the line is not continuous; there are gaps, which will be optimized below. We also need to initialize newPoints: alongside the count variable, newPoints is initialized as an empty list, since every call handles a fresh frame and yields new points. Finally, we return the newPoints list.

def findColor(img, myColors, myColorValues):  # Find the color function
    imgHSV = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    count = 0
    newPoints = []
    for color in myColors:
        lower = np.array(color[0:3])
        upper = np.array(color[3:6])
        mask = cv2.inRange(imgHSV, lower, upper)
        x, y = getContours(mask)
        cv2.circle(imgResult, (x, y), 8, myColorValues[count], cv2.FILLED)
        if x != 0 and y != 0:
            newPoints.append([x, y, count])
        count += 1
        # cv2.imshow(str(color[0]), mask)
    return newPoints

Every frame, the points found are returned into the newPoints variable in the while loop. We check whether newPoints actually contains anything by testing its length; if it is not zero, we loop over it and append each point to myPoints.

while True:
    success, img = cap.read()
    imgResult = img.copy()
    newPoints = findColor(img, myColors, myColorValues)

    if len(newPoints) != 0:
        for newP in newPoints:
            myPoints.append(newP)

    cv2.imshow("Result", imgResult)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

The reason for the for loop is that findColor returns a list of points, and we do not want to nest that list inside myPoints; we need individual points, not a list of lists, so the returned list is decomposed and its points appended one by one. If myPoints is not empty, we draw it on the canvas by calling the drawOnCanvas function with myPoints and myColorValues as its two arguments.

while True:
    success, img = cap.read()
    imgResult = img.copy()
    newPoints = findColor(img, myColors, myColorValues)

    if len(newPoints) != 0:
        for newP in newPoints:
            myPoints.append(newP)

    if len(myPoints) != 0:
        drawOnCanvas(myPoints, myColorValues)

    cv2.imshow("Result", imgResult)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

6. System debugging and analysis

In this report, the camera frame-reading, color-acquisition, and basic-unit-dot modules were coded, tested, and analyzed module by module in the detailed design above. With the path-drawing module complete, the whole project can now be debugged end to end. Run the code: when the three colors appear in front of the camera, dots of the corresponding colors should appear, and as the object moves the dots are connected into a line, realizing space drawing. Red, blue, and green are tested individually and then together (figures).

6.1 Running Process

We coded and debugged in PyCharm: click the Run button, then draw the image.

6.2 Problems Encountered during System Debugging

(1) During contour drawing, we output img rather than the copied image imgResult, so nothing was displayed once we had deleted cv2.imshow(str(color[0]), mask). Changing img to imgResult in the while loop and re-running the test gave the expected results.

(2) When drawing dots, some care is needed: the point we detect is the centre of the top of the object, so as long as the object is held upright the dot sits on the object, but once it tilts the dot lands outside it. This is tricky to solve for now, so we will not go into detail and simply keep the object upright. A further improvement would be to display not our predefined colour but the actual colour of the object itself.

(3) In the tests above and in the final output, the image is un-mirrored, i.e. opposite to the direction of the user's movements, which is not conducive to drawing in space. We therefore mirror the whole output window of the final product. The specific code is as follows.

if success:  # flip the output horizontally (mirror image)
    new_img = cv2.flip(imgResult, 180)  # any positive flipCode mirrors horizontally; 1 is the idiomatic value

After confirming that the frame was read successfully, imgResult is mirrored horizontally (in cv2.flip, any positive flipCode flips around the vertical axis) and the result is assigned to new_img; the final imshow then displays new_img. Solving this also fixed another problem: the drawn line had looked like a collection of discrete dots rather than a continuous stroke, which had to do with the image not being mirrored, and the lines become continuous once the new image is mirrored. (Test images: red, blue, green.)

7. User instructions

First, find the Run button in PyCharm, the green triangle in the top right.

Click the green triangle to run the program. After a short wait, the camera window appears below; click into it to start drawing.

Because the view is mirrored, you can paint directly, using different colours to draw the letters R, G and B. Press Q to exit; if your input method is in Chinese mode, switch to English mode to exit.

8. Test results

In testing we found that, because the colour of the hand is close to orange, the hand may be recognised as red, producing red dots in the wrong positions. This needs to be optimised in the colour-recognition module, or new functions could be added to improve recognition accuracy. In addition, when a coloured object first enters the window it produces scattered dots that are not immediately recognised and connected into a line; we believe this also needs further optimisation. Overall, this report achieves a simple form of drawing in space.

9. To summarize

Python is an object-oriented, interpreted programming language. What we think stands out in Python is its extraordinary flexibility with strings, the simplicity of its indentation-based blocks, and its clean syntax. Python resembles C in that execution is sequential, unlike Visual C++, where events trigger different modules; its workflow is similar to MATLAB, with an edit window and a run window (the interactive interpreter), so code can be run after it is written or entered line by line in command-line mode. Through this experiment I learned the basics of Python, gained a preliminary understanding of its syntax and key concepts, and became able to write working modules. The Python experiment gave me confidence for future work and improved our ability to handle practical tasks. As for learning OpenCV, we believe the best way is always to read the official documentation and then run plenty of open-source experiments, learning from them how a given function or algorithm is used and why it works, which is very important for beginners. By reading the official documents repeatedly, we understood which functions we needed and how they should be called, and then wrote the code with the help of examples. Although we ran into problems we could not solve right away, we kept searching for solutions. In general, this course has been of great significance.

10. References

[1] python.jobbole.com/ Python basics tutorial site.

[2] www.w3cschool.cn/opencv/ OpenCV official documentation.

[3] github.com/yuanxiaosc/… Image processing instance algorithm open source project.

[4] github.com/Asabeneh/30… Python open source learning project.

[5] Liu Ruizhen, Yu Shiqi. OpenCV Tutorial: Basics [M]. Beijing: Beihang University Press, 2007.

[6] G. Bradski. The OpenCV Library [M]. 2000.

[7] Zhang Jiwen. C++ Programming Fundamentals [M]. Beijing: Higher Education Press, 2003.

[8] Hou Junjie. Simple MFC [M]. Wuhan: Huazhong University of Science and Technology Press, 2001.

[9] Gonzalez. Digital Image Processing (Second Edition) [M]. Beijing: Publishing House of Electronics Industry, 2005.

[10] Yu Shiqi (trans.). Learning OpenCV (Chinese Version) [M]. Beijing: Tsinghua University Press, 2009.