“Home Country Dream” is a recently popular free-to-play mobile game. Encouraged by classmates and friends around me, I set off down this “road of no return”. The gameplay is simple: collect coins, carry goods, and spend the saved-up coins to upgrade buildings. Along the way, the game also introduces current national policies.

The simplicity of the gameplay gave me the idea of automated testing.

Project address: github.com/Jiahonzheng… . Demo video: www.bilibili.com/video/av692… .

MuMu Simulator

We use the MuMu simulator from NetEase Games to run the automated tests. The installation process is straightforward and won't be covered here. The key steps are to enable the USB debugging option and note the ADB debugging address (127.0.0.1:7555).

By the way, since my phone is a Sony Xperia Z5 Premium, I set the simulator resolution to 1920*1080, and all the image assets used in the project were made for this resolution.
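Since every coordinate in this project assumes that 1920*1080 portrait screen, readers on other resolutions may want to scale them first. A minimal sketch, assuming simple linear scaling (the helper name is mine, not from the project):

```python
REF_W, REF_H = 1080, 1920  # reference portrait resolution the assets were made for

def scale_position(x, y, screen_w, screen_h):
    """Map a coordinate recorded at the reference resolution onto another screen."""
    return (round(x * screen_w / REF_W), round(y * screen_h / REF_H))
```

Note that this linear mapping only holds if the target screen keeps the same 9:16 aspect ratio; otherwise the game itself may letterbox and the positions will drift.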

UIAutomator2

We use UIAutomator2 as an automated test tool, and its workflow is roughly as follows:

  • Install the ATX daemon on the mobile device; it starts the UIAutomator2 service (default port: 7912) and listens for requests.
  • Write the test script on the PC and send it to the service on the device.
  • The device receives the script from the PC over Wi-Fi or USB and performs the specified operations.

We complete the installation and initialization of UIAutomator2 by executing the following commands (make sure the ADB connection works first).

# Install dependencies
python -m pip install uiautomator2

# Install the ATX application
python -m uiautomator2 init

After installing the ATX app, tap the “Start UIAutomator2” button inside it to make sure the service is running. Then write and execute the following code to take a screenshot.

import uiautomator2 as u2

d = u2.connect("127.0.0.1:7555")
d.screenshot("Game.jpg")

Swiping to collect coins

In the game, each building periodically produces gold coins, which we collect by swiping across the buildings. To automate the swipe, we call the device.swipe method, a touch-swipe function provided by UIAutomator2 that takes the starting and ending screen coordinates.

For convenience during development, we assigned a number to each plot of land, as shown in the figure below.

The mapping between the numbers and screen positions is shown below. Note that these positions are for a 1920*1080 screen.

@staticmethod
def _get_position(key):
    """Get the screen position of the specified building."""
    positions = {
        1: (294, 1184),
        2: (551, 1061),
        3: (807, 961),
        4: (275, 935),
        5: (535, 810),
        6: (799, 687),
        7: (304, 681),
        8: (541, 568),
        9: (787, 447),
    }
    return positions.get(key)

Our strategy for collecting coins is simple: swipe across the screen three times, first across buildings 1-3, then buildings 4-6, then buildings 7-9.

def _swipe(self):
    """Swipe to harvest coins."""
    # Swipe horizontally 3 times.
    for i in range(3):
        sx, sy = self._get_position(i * 3 + 1)
        ex, ey = self._get_position(i * 3 + 3)
        self.d.swipe(sx, sy, ex, ey)

OpenCV

We can’t get enough hierarchy information in weditor. Therefore, to implement the goods-moving feature, we have to fall back on image recognition: take a screenshot of the game and check whether any goods appear in it; if they do, move them to the target building. We can do this with OpenCV’s template matching feature.

First, we need to install the OpenCV dependency.

python -m pip install opencv-python

Let’s start with a simple test of cv2.matchTemplate to see how it works.

import cv2

# Read the screen snapshot
screen = cv2.imread('Game.jpg')
# Read the picture of the goods
template = cv2.imread('targets/Sofa.jpg')
# Get the height and width of the goods picture
th, tw = template.shape[:2]
# Call OpenCV's template matching method
res = cv2.matchTemplate(screen, template, cv2.TM_SQDIFF_NORMED)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)

# min_val can be used to decide whether the goods were detected

# Coordinates of the upper-left corner of the matched rectangle
tl = min_loc
# Coordinates of the lower-right corner of the matched rectangle
br = (tl[0] + tw, tl[1] + th)

cv2.rectangle(screen, tl, br, (0, 0, 255), 2)
cv2.imwrite('Result.jpg', screen)

By executing the code above, we can mark the position of Sofa (the object circled in red) in the snapshot, indicating that this method works.

Handling the goods

We encapsulated a UIMatcher class to detect the presence of goods and act accordingly. Based on the value of min_val, we decide whether goods have been detected.

# Threshold judgment: for TM_SQDIFF_NORMED, smaller is better.
if min_val > 0.15:
    return None

We implement the moving logic in _match_target. Since OpenCV template matching is not always accurate, we detect and carry each kind of goods several times for redundancy.

def _match_target(self, target: TargetType):
    """Detect and move the goods."""
    # Template matching is sometimes inaccurate, so we retry the
    # detect-and-move cycle a few times.
    counter = 6
    while counter != 0:
        counter = counter - 1

        # Get the current screen snapshot.
        screen = self.d.screenshot(format="opencv")

        # Detect the goods using OpenCV.
        result = UIMatcher.match(screen, target)

        # If nothing is detected, stop looking for this kind of goods.
        # Reason for the redundancy: the returned screen position can
        # deviate from the actual position, causing the move to fail.
        if result is None:
            break

        sx, sy = result
        # Get the screen position of the goods' destination.
        ex, ey = self._get_target_position(target)

        # Carry the goods.
        self.d.swipe(sx, sy, ex, ey)

Putting it together

So far, we have implemented the two core functions: picking up coins and moving goods. Now we need to put them together. Our approach is simple and crude: in the start method of the Automator class, we move goods and pick up coins in an endless loop.

def start(self):
    """Launch the script. Make sure the game page is open first."""
    while True:
        # Check whether any goods are present.
        for target in TargetType:
            self._match_target(target)

        # Simple and crude way to dismiss the "Light of XX" honor pop-up.
        # Image detection could be used here as well.
        self.d.click(550, 1650)

        # Swipe to harvest coins.
        self._swipe()

Conclusion

In this blog post, we used the MuMu simulator, UIAutomator2, and OpenCV to automate two “core” mechanics of Home Country Dream: picking up coins and carrying goods. Of course, there is still plenty of room for improvement, such as the goods-detection algorithm; perhaps machine learning could be applied to the object-detection problem.