1. Automation

One day you join a top-tier tech company. You happily do software testing work, a little at a time, leave on time every day, play Honor of Kings with your girlfriend in the evening, and life is very comfortable.

But good times rarely last, and this comfortable life is soon disrupted by management: to improve testing efficiency, the company decides to introduce automation. You search for Python + Selenium online and quickly put together a set of automated test scripts.

from selenium import webdriver

def test_selenium():
    driver = webdriver.Firefox()
    driver.get("http://www.baidu.com")
    ...
    driver.quit()

Writing scripts is tiring, and you work overtime every day without overtime pay. Even so, you don't complain too much, because you can feel yourself getting a grip on the automated testing process and stepping into a new phase of your career. The scripts are soon used for the company's main workflow tests as well as for regression testing.

Because a lot of repetitive work can now be handled by the automated test scripts, you have time to spend with your girlfriend, slack off a little at work, and check on your funds every now and then.

Of course, the good times don't last. In a major front-end overhaul the pages change a lot, and because your automated test code was never abstracted, it becomes basically unusable.

Time for overtime again, and life is "fulfilling" again. You redesign the code around the Page Object pattern and keyword-driven testing to make the test logic as configurable as possible. Once the redesign is done, a front-end page change only requires maintaining the keyword table (see the sketch below for the Page Object idea).
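The Page Object idea boils down to wrapping each page's locators and actions in one class, so a UI change only touches that class. The snippet below is a minimal illustrative sketch; the page, URL, and locators are made up, not the project's actual code.

from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginPage:
    """Page Object for a hypothetical login page: locators and actions in one place."""
    URL = "http://example.com/login"              # placeholder URL
    USERNAME = (By.ID, "username")                # placeholder locators
    PASSWORD = (By.ID, "password")
    SUBMIT = (By.CSS_SELECTOR, "button[type=submit]")

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get(self.URL)
        return self

    def login(self, user, password):
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()

def test_login():
    driver = webdriver.Firefox()
    try:
        LoginPage(driver).open().login("user", "secret")
    finally:
        driver.quit()

When a locator changes on the page, only LoginPage needs updating; the test itself stays the same.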

You've done some good work for the company. You are now fully competent at automated testing, and you can even mentor a junior or two. They come to you with questions from time to time, but maintaining the automation still falls on you alone, and it grinds to a halt whenever you take time off. The company wants you to make improvements so that the functional testers can run these automated tests themselves.

2. Starting on the test platform

You see a lot of people online talking about test platforms and figure you could build a visual platform, so that functional testers can use the underlying automation code simply by configuring things in a web interface. Flask soon appears in your sights, and the first feature you build is something similar to Jenkins' build step.

First, you set up a Flask service; once it's started, it can be reached on port 5000.

from flask import Flask
app = Flask(__name__)
app.run(port=5000)

Then you configure a URL; when that URL is accessed, the service calls a function. This binding between a URL and a function is a route. The function's return value can be a plain string, JSON data, or an HTML page.

import pathlib
from flask import render_template

@app.route('/')
def index():
    """Show all projects in the workspace directory."""
    workspace = pathlib.Path(app.root_path) / 'workspace'
    projects = [project.name for project in workspace.iterdir()]
    return render_template('index.html', projects=projects)

The code above mimics Jenkins by placing the automated test scripts under the project's workspace directory. When the root path / is accessed, the index function is called: it lists all the project names in the workspace directory and returns them to the front-end page. The front-end template looks like this:

<h2>Show all the items</h2>
{% for p in projects %}
  <div>
    {{ p }}
    <a href="/build?project={{ p }}">build</a>
  </div>
{% endfor %}

Clicking build on the page takes you to the /build URL configured in Flask, the route that runs the automated tests. The project parameter passed by the user corresponds to a project under the workspace directory, and the automated test command is then executed (pytest is used uniformly here).

@app.route("/build", methods=['get'.'post'])
def build() :
    project_name = request.args.get('project')
    pytest.main([f'workspace/{project_name}'])
    return "build success"

The complete process so far: the platform home page lists all the projects that can be built, which are simply the subdirectories under workspace. Clicking the build button next to a project jumps to /build, which runs the automation command for that project name, waits for the task to finish, and returns build success.
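For reference, here is how the pieces so far might fit together in one runnable file. This is only a sketch under the same assumptions as above: a workspace directory next to the script, one subdirectory per project, and a templates/index.html like the template shown earlier.

# app.py - minimal sketch combining the snippets above
import pathlib

import pytest
from flask import Flask, render_template, request

app = Flask(__name__)

@app.route('/')
def index():
    """List every project (i.e. every subdirectory) under workspace."""
    workspace = pathlib.Path(app.root_path) / 'workspace'
    projects = [p.name for p in workspace.iterdir() if p.is_dir()]
    return render_template('index.html', projects=projects)

@app.route('/build', methods=['GET', 'POST'])
def build():
    """Run pytest against the selected project (blocking version)."""
    project_name = request.args.get('project')
    pytest.main([f'workspace/{project_name}'])
    return "build success"

if __name__ == '__main__':
    app.run(port=5000)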

3. Optimization

You've basically got the functionality working, and the functional testers can now run automated commands from your simple platform. But the platform still has problems. First, no build information is collected, so you can't see the results after a test run. Second, the user has to wait for the test script to finish before any result comes back to the front end; if the automated tests take a long time, the user is stuck on the page and can't do anything else.

You think of concurrent programming: create a child process to run the automated test script independently. Because the child process runs independently of the main process, the main process can return a response to the front end immediately instead of waiting for the child to finish. So you rewrite the build function:

@app.route("/build")
def build() :
    id = uuid.uuid4().hex
    project_name = request.args.get('project')
    with open(id, mode='w', encoding='utf-8') as f:
        subprocess.Popen(
            ['pytest'.f'workspace/{project_name}'],
            stdout=f
        )
    return redirect(f'/build-history/{id}')

1. First, generate an ID with uuid to represent the build task; this ID is later used to look up the build record.

2. Create a child process to run the automation task and save its output to a file named after the generated ID; to view the build result, you only need to read that file's contents.

3. As soon as the child process is created, redirect to the URL that shows the build result, without waiting for the child process to finish.

To view the build result, simply read the file's contents back by ID:

@app.route("/build-history/<id>")
def build_history(id) :
    with open(id, encoding='utf-8') as f:
        data = f.read()
    return data

4. Generators

There is a problem with the file-reading code above. When /build redirects to /build-history, the automated test script has only just started, so the file content read back is empty. Only as the test script runs and produces more and more output does content appear in the file, and you have to refresh the page by hand to get it. When the automation run takes a long time, you end up refreshing /build-history constantly to get the latest build information; new content keeps being written to the file until the child process terminates.

To fetch the file data dynamically, you use a generator for lazy reading: while the page is being served, as long as the child process running the automation task is still alive, the generator keeps reading the file contents and streaming them back to the front-end page.
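If generators are unfamiliar, the core idea in isolation looks like the toy sketch below (the file name is hypothetical): the function body only runs when the caller pulls the next chunk, so the file is read lazily rather than all at once.

def read_in_chunks(path, size=100):
    """Lazily yield the file in fixed-size chunks instead of reading it whole."""
    with open(path, encoding='utf-8') as f:
        while True:
            data = f.read(size)
            if not data:
                break
            yield data

# Nothing is read until the loop asks for the next chunk.
for chunk in read_in_chunks('some-build-log.txt'):
    print(chunk, end='')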

To let /build-history determine the child process's status, /build passes the child's pid along in the redirect.

@app.route("/build", methods=['get'.'post'])
def build() :
    id = uuid.uuid4().hex
    project_name = request.args.get('project')
    with open(id, mode='w', encoding='utf-8') as f:
        proc = subprocess.Popen(
            ['pytest'.f'workspace/{project_name}'],
            stdout=f
        )
    return redirect(f'/build-history/{id}? pid={proc.pid}')

When viewing the result, you write a generator, stream, that reads 100 characters at a time from the file until the process has finished. Besides being redirected after a build, you can also enter an ID by hand to view a historical build; in that case only the ID is passed, no pid is needed, and the file is simply read directly. Even if the file is very large, it is loaded in batches, so the server isn't strained by reading too much data at once.

import psutil
from flask import Response

@app.route("/build-history/<id>")
def build_history(id):
    pid = request.args.get('pid', None)

    def stream():
        with open(id, encoding='utf-8') as f:
            if not pid:
                # Historical build: the file is complete, read it in chunks.
                while True:
                    data = f.read(100)
                    if not data:
                        break
                    yield data
            else:
                try:
                    proc = psutil.Process(pid=int(pid))
                except (psutil.NoSuchProcess, ValueError):
                    yield 'no such pid'
                    return
                # Live build: keep reading while the child process is running.
                while proc.is_running():
                    data = f.read(100)
                    yield data

    return Response(stream())


5. Summary

Generally speaking, automated testing only needs that first step: once the scripts can be executed, they can replace the repetitive work. Building a test platform just makes the scripts easier to use.

But many test platforms actually make automation more complicated, requiring a great many parameters just to run a complete test case. That trade hardly seems worth it, yet it's something a lot of people do.

This article uses a Flask program to implement a simple toy Jenkins, to help understand how a tool like Jenkins executes tasks. Even a build task as simple as this raises plenty of problems that you only think of once you actually get down to work.

I hope the tools we build are practical and easy to use.