(Article from Hogwarts Testing Institute)

This post was written by Flystar, a community leader at Testerhome.

Recently I have been planning to prioritize interface test coverage, which means I need a testing framework. After some thought, I decided to do something different this time.

  • Interface tests are efficient, and testers want fast feedback. But the number of interfaces is usually large and keeps growing, so execution efficiency matters
  • Interface test cases can double as simple stress tests, which requires concurrency
  • There is a lot of duplication across interface test cases; testers should focus only on interface test design, so everything else is best automated
  • Pytest and Allure are so useful that the new framework should integrate them
  • Interface test cases should be as simple as possible, ideally written in YAML, so the data maps directly onto request data. Writing a case then feels like filling in the blanks, which makes the framework easy to roll out to team members without automation experience

Besides, I have been very interested in Python coroutines for some time and always hoped to apply what I learned, so I decided to use aiohttp for the HTTP requests. However, pytest does not support event loops, so combining the two takes some work. After mulling it over, it turned out I could split the whole thing into two parts. The first part reads the YAML test cases, requests the interfaces under test over HTTP, and collects the test data. The second part dynamically generates pytest-recognized test cases from that data, executes them, and produces the test report. The two parts fit together perfectly, and perfectly match my vision. With the idea settled, it was time to build it; a minimal sketch of the two-phase flow follows.
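Here is that two-phase split in miniature (all names are hypothetical; this only shows the shape of the design, not the framework's actual code):

import asyncio

import pytest


async def collect(cases):
    # Phase 1: fire all interface requests concurrently inside the event loop.
    # run_case is a hypothetical coroutine that reads one YAML case and
    # requests the interface with aiohttp.
    return await asyncio.gather(*(run_case(case) for case in cases))


def run_suite(cases):
    results = asyncio.get_event_loop().run_until_complete(collect(cases))
    # Phase 2: turn the collected results into pytest-recognized test files
    # (generate_test_files is hypothetical), then let pytest discover,
    # execute and report them.
    generate_test_files(results)
    pytest.main(['-v', '--alluredir', 'report'])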

Part 1 (the whole flow must be asynchronous and non-blocking)

Read the YAML test case

A simple use-case template is designed like this. The advantage is that the parameter names correspond one-to-one with aiohttp.ClientSession().request(method, url, **kwargs), so the request data can be passed straight through with no conversion at all: concise, elegant, and expressive.

args:
  - post
  - /xxx/add
kwargs:
  -
    caseName: add XXX
    data:
      name: ${gen_uid(10)}
validator:
  -
    json:
      successed: True
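Because the keys mirror aiohttp's request signature, dispatching a loaded case is essentially a single call. A minimal sketch (inside a coroutine), assuming the YAML above has been parsed into a dict named case and that session and domain already exist:

method, api = case['args']            # 'post', '/xxx/add'
for step in case['kwargs']:           # one entry per use-case step
    step.pop('caseName')              # meta field, not a request kwarg
    resp = await session.request(method, domain + api, **step)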

Aiofiles is a third-party library for reading files asynchronously. yaml_load is a coroutine, which guarantees that reading the YAML test cases never blocks the main event loop; the test case data is obtained with await yaml_load().

import os
import re

import aiofiles
import yaml

# bxmat (framework config) and BXMDict are the framework's own helpers


async def yaml_load(dir='', file=''):
    """Read a YAML file asynchronously and resolve special template values.
    :param file:
    :return:
    """
    if dir:
        file = os.path.join(dir, file)
    async with aiofiles.open(file, 'r', encoding='utf-8', errors='ignore') as f:
        data = await f.read()

    # yaml.load needs an explicit Loader on newer PyYAML versions
    data = yaml.load(data, Loader=yaml.FullLoader)

    # match the function-call syntax, e.g. ${gen_uid(10)}
    pattern_function = re.compile(r'^\${([A-Za-z_]+\w*\(.*\))}$')
    pattern_function2 = re.compile(r'^\${(.*)}$')
    # match the default-value syntax, e.g. $(a:b)
    pattern_function3 = re.compile(r'^\$\((.*)\)$')

    def my_iter(data):
        """Recursive test case that converts template syntax to normal values according to different data types :param data: :return:"""
        if isinstance(data, (list, tuple)):
            for index, _data in enumerate(data):
                data[index] = my_iter(_data) or _data
        elif isinstance(data, dict):
            for k, v in data.items():
                data[k] = my_iter(v) or v
        elif isinstance(data, (str, bytes)):
            m = pattern_function.match(data)
            if not m:
                m = pattern_function2.match(data)
            if m:
                return eval(m.group(1))
            if not m:
                m = pattern_function3.match(data)
            if m:
                K, k = m.group(1).split(':')
                return bxmat.default_values.get(K).get(k)

            return data

    my_iter(data)

    return BXMDict(data)

As you can see, test cases support a certain amount of template syntax, such as ${function} and $(a:b), which greatly expands what testers can express in a use case.
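For example, after my_iter has walked the data (the gen_uid helper and the 'user' section of bxmat.default_values are hypothetical here):

data = {'name': '${gen_uid(10)}', 'phone': '$(user:phone)'}
my_iter(data)                   # converts values in place
# data['name']  -> result of eval('gen_uid(10)'), e.g. a fresh 10-char id
# data['phone'] -> bxmat.default_values['user']['phone']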

Request the test interface over HTTP

http wraps session.request(method, url, **kwargs) and is also a coroutine, which guarantees that network requests never block; the interface test data is obtained with await http().

async def http(session, domain, *args, **kwargs):
    """HTTP request handler.
    :param session: the shared ClientSession created in the execution entry
    :param domain: service address
    :param args:
    :param kwargs:
    :return:
    """
    method, api = args
    arguments = kwargs.get('data') or kwargs.get('params') or kwargs.get('json') or {}

    # add the token to the request headers
    kwargs.setdefault('headers', {}).update({'token': bxmat.token})
    # concatenate service address and API path
    url = ''.join([domain, api])

    async with session.request(method, url, **kwargs) as response:
        res = await response_handler(response)
        return {
            'response': res,
            'url': url,
            'arguments': arguments
        }
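response_handler is not shown in the article; a minimal sketch, assuming the services mostly answer with JSON and falling back to the raw text otherwise, might look like this:

from aiohttp import ContentTypeError


async def response_handler(response):
    # prefer a parsed JSON body; fall back to plain text
    try:
        return await response.json()
    except ContentTypeError:
        return await response.text()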

Collect test data

Coroutines run concurrently and really fast, so we introduce asyncio.Semaphore(num) to control the degree of concurrency

from aiohttp import ClientSession, CookieJar


async def entrace(test_cases, loop, semaphore=None):
    """HTTP execution entry.
    :param test_cases:
    :param loop:
    :param semaphore:
    :return:
    """
    res = BXMDict()
    # In CookieJar.update_cookies, if unsafe=False and the host is accessed
    # by IP address, the client will not update the cookie information,
    # which breaks the session's handling of the login state.
    # So cookie_jar is a manually created CookieJar with unsafe set to True.
    async with ClientSession(loop=loop, cookie_jar=CookieJar(unsafe=True), headers={'token': bxmat.token}) as session:
        await advertise_cms_login(session)
        if semaphore:
            async with semaphore:
                for test_case in test_cases:
                    data = await one(session, case_name=test_case)
                    res.setdefault(data.pop('case_dir'), BXMList()).append(data)
        else:
            for test_case in test_cases:
                data = await one(session, case_name=test_case)
                res.setdefault(data.pop('case_dir'), BXMList()).append(data)

        return res


async def one(session, case_dir='', case_name=''):
    """Full execution of a single test case: read the .yml test case, issue
    the HTTP request and return the result -- all asynchronous and non-blocking.
    :param session: session
    :param case_dir: use case directory
    :param case_name: use case name
    :return:
    """
    project_name = case_name.split(os.sep)[1]
    domain = bxmat.url.get(project_name)
    test_data = await yaml_load(dir=case_dir, file=case_name)
    result = BXMDict({
        'case_dir': os.path.dirname(case_name),
        'api': test_data.args[1].replace('/', '_')
    })
    if isinstance(test_data.kwargs, list):
        for index, each_data in enumerate(test_data.kwargs):
            step_name = each_data.pop('caseName')
            r = await http(session, domain, *test_data.args, **each_data)
            r.update({'case_name': step_name})
            result.setdefault('responses', BXMList()).append({
                'response': r,
                'validator': test_data.validator[index]
            })
    else:
        step_name = test_data.kwargs.pop('caseName')
        r = await http(session, domain, *test_data.args, **test_data.kwargs)
        r.update({'case_name': step_name})
        result.setdefault('responses', BXMList()).append({
            'response': r,
            'validator': test_data.validator
        })

    return result

The event loop executes the coroutine and returns the results. When collecting the final results, I group them by test case directory, which lays a good foundation for automatically generating pytest-recognized test cases.

def main(test_cases):
    ""Param test_cases: :return: param test_cases: :return:""
    loop = asyncio.get_event_loop()
    semaphore = asyncio.Semaphore(bxmat.semaphore)
    # Tasks that need to be handled
    # tasks = [asyncio.ensure_future(one(case_name=test_case, semaphore=semaphore)) for test_case in test_cases]
    task = loop.create_task(entrace(test_cases, loop, semaphore))
    # register the coroutine with the event loop and start the loop
    try:
        # loop.run_until_complete(asyncio.gather(*tasks))
        loop.run_until_complete(task)
    finally:
        loop.close()

    return task.result()
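Called with a list of .yml paths (the paths below are hypothetical examples), main returns the collected results grouped by case directory:

results = main([
    os.path.join('testcases', 'projectA', 'user', 'add_user.yml'),
    os.path.join('testcases', 'projectA', 'user', 'del_user.yml'),
])
# results: {case_dir: BXMList of {'response': ..., 'validator': ...}, ...}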

Part 2

Dynamically generating pytest-recognized test cases

Pytest first looks for a conftest.py in the current directory and runs it if found. Then, according to the command-line arguments, it looks in the given directory for .py files whose names start or end with test. When it finds them, it analyses the fixtures defined inside: those with autouse=True, or those applied via pytest.mark.usefixtures(...), run first. Then it goes looking for classes, methods and so on under similar rules. That is roughly how it works. The key point is that for pytest to run at all, there must be at least one test_xx.py file that its discovery mechanism recognizes, containing a TestXx class with at least one def test_xx(self) method inside. Since no pytest-recognized test files exist yet, my idea is to create a bootstrap test file that gets pytest going; its test method can be skipped with pytest.skip(). The goal is then: once pytest is alive, dynamically generate the use cases, discover them, execute them, and produce the test report, all in one go.

# test_bootstrap.py
import pytest

class TestStarter(object):

    def test_start(self):
        pytest.skip('This is a test startup method, not executed')

I thought of fixtures, because fixtures provide setup capability: by defining a fixture with session scope and marking TestStarter to use it, I can do the preprocessing before TestStarter runs. Putting the use-case generation inside that fixture should get the job done.

# test_bootstrap.py
import pytest

@pytest.mark.usefixtures('te', 'test_cases')
class TestStarter(object):

    def test_start(self):
        pytest.skip('This is a test startup method, not executed')

Pytest has a --rootdir parameter. The fixture's core job is to use --rootdir to reach the target directory, find the .yml test files in it, run them to obtain the test data, and then create one test_xx.py file per directory. Each file's body is the content template below, and the resulting paths are passed to pytest.main() to actually execute the tests: a pytest inside pytest! Finally, the generated test files are deleted. Note that this fixture must be defined in conftest.py, because pytest auto-discovers whatever is defined there, with no extra imports needed.
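--tf (single test file) and --te (test environment) are custom options, so conftest.py presumably registers them with pytest_addoption; a sketch of what that registration might look like (--rootdir itself is built into pytest):

# conftest.py -- sketch; the article does not show this part
def pytest_addoption(parser):
    parser.addoption('--tf', action='store', default='', help='run a single .yml test file')
    parser.addoption('--te', action='store', default='', help='target test environment')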

# conftest.py
@pytest.fixture(scope='session')
def test_cases(request):
    """Test case generation processing :param Request: :return:"""
    var = request.config.getoption("--rootdir")
    test_file = request.config.getoption("--tf")
    env = request.config.getoption("--te")
    cases = []
    if test_file:
        cases = [test_file]
    else:
        if os.path.isdir(var):
            for root, dirs, files in os.walk(var):
                if re.match(r'\w+', root):
                    if files:
                        cases.extend([os.path.join(root, file) for file in files if file.endswith('yml')])

    data = main(cases)

    content = """Import allure from conftest import CaseMetaClass @allure. Feature ('{} interface Test ({} project)') class Test{}API(object, metaclass=CaseMetaClass): test_cases_data = {} """
    test_cases_files = []
    if os.path.isdir(var):
        for root, dirs, files in os.walk(var):
            if not ('.' in root or '__' in root):
                if files:
                    case_name = os.path.basename(root)
                    project_name = os.path.basename(os.path.dirname(root))
                    test_case_file = os.path.join(root, 'test_{}.py'.format(case_name))
                    with open(test_case_file, 'w', encoding='utf-8') as fw:
                        fw.write(content.format(case_name, project_name, case_name.title(), data.get(root)))
                    test_cases_files.append(test_case_file)

    if test_file:
        temp = os.path.dirname(test_file)
        py_file = os.path.join(temp, 'test_{}.py'.format(os.path.basename(temp)))
    else:
        py_file = var

    pytest.main([
        '-v', py_file,
        '--alluredir', 'report',
        '--te', env,
        '--capture', 'no',
        '--disable-warnings',
    ])

    for file in test_cases_files:
        os.remove(file)

    return test_cases_files
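For a hypothetical directory user under a project projectA, the generated test_user.py would look roughly like this, with the phase-one data baked into the class body:

# test_user.py -- generated from the content template above
import allure

from conftest import CaseMetaClass


@allure.feature('user interface test (projectA project)')
class TestUserAPI(object, metaclass=CaseMetaClass):

    test_cases_data = [{'api': '_user_add', 'responses': [...]}]  # collected data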

The TestXxAPI classes have only a test_cases_data attribute and no test_xx methods, so by themselves they are not pytest-recognized test cases and would not run at all. So how is that solved? The answer is CaseMetaClass.

function_express = """ def {}(self, response, validata): with allure.step(response.pop('case_name')): validator(response,validata)"""


class CaseMetaClass(type):
    """Automatically generate test cases from the results of the interface calls."""

    def __new__(cls, name, bases, attrs):
        test_cases_data = attrs.pop('test_cases_data')
        for each in test_cases_data:
            api = each.pop('api')
            function_name = 'test' + api
            test_data = [tuple(x.values()) for x in each.get('responses')]
            function = gen_function(function_express.format(function_name),
                                    namespace={'validator': validator, 'allure': allure})
            # integration allure
            story_function = allure.story('{}'.format(api.replace('_', '/')))(function)
            attrs[function_name] = pytest.mark.parametrize('response,validata', test_data)(story_function)

        return super().__new__(cls, name, bases, attrs)

CaseMetaClass is a metaclass that reads the test_cases_data attribute and dynamically generates method objects, one per interface, each decorated in turn with allure's fine-grained reporting and pytest's parametrization. Each method object is assigned to a 'test' + api class attribute, which means that by the time a TestXxAPI class has been created it already owns several test_xx methods; when pytest is then run on the inside, it can discover and execute these cases.
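In effect, for an entry whose api is '_xxx_add', the metaclass produces something equivalent to writing this by hand (a sketch; test_data is the parameter list built from the collected responses):

@pytest.mark.parametrize('response,validata', test_data)
@allure.story('/xxx/add')
def test_xxx_add(self, response, validata):
    with allure.step(response.pop('case_name')):
        validator(response, validata)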

import builtins
import types


def gen_function(function_express, namespace={}):
    """Dynamically generate a function object.
    The function's globals default to builtins.__dict__, merged with the
    variables passed in through namespace.
    :param function_express: function expression, e.g. 'def foobar(): return "foobar"'
    :return:
    """
    builtins.__dict__.update(namespace)
    module_code = compile(function_express, '<string>', 'exec')
    function_code = [c for c in module_code.co_consts if isinstance(c, types.CodeType)][0]
    return types.FunctionType(function_code, builtins.__dict__)

Be aware of namespace issues when generating method objects: it is best to base the function's globals on builtins.__dict__ and merge any custom names in through the namespace argument.
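A quick usage example, reusing the expression from the docstring:

f = gen_function('def foobar(): return "foobar"')
f()   # -> 'foobar'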

Follow-up: automatically generating the .yml test files

At this point the core functions of the framework were complete, and after practice on several projects the effect exceeded all expectations: writing use cases is a breeze, execution is fast, and the test reports are clean and tidy. But I still felt a little tired. Why? My current interface-testing workflow is: if a project integrates Swagger, I get the interface information from Swagger and create the use cases by hand from it. That process is repetitive and tedious, because the use-case template is largely fixed; only a few parameters such as directory, use-case name and method differ between cases. So I figured the whole process could be automated: since Swagger has a web page, I could extract the key information and auto-create the .yml test files, like erecting scaffolding. Once the project skeleton is generated, I only need to design the cases and fill in the parameters. I first tried parsing the HTML returned by Swagger's home page, but was disappointed to find no actual data in it. I guessed Ajax was involved, and indeed, opening the browser console revealed the api-docs request returning JSON data, so the problem became simple and no page parsing was needed.

import re
import os
import sys

from requests import Session

template =""" args: - {method} - {api} kwargs: - caseName: {caseName} {data_or_params}: {data} validator: - json: successed: True """


def auto_gen_cases(swagger_url, project_name):
    """Automatically generate yML test case template based on json data returned by Swagger: Param Swagger_URL: :param project_name: :return:"""
    res = Session().request('get', swagger_url).json()
    data = res.get('paths')

    workspace = os.getcwd()

    project_ = os.path.join(workspace, project_name)

    if not os.path.exists(project_):
        os.mkdir(project_)

    for k, v in data.items():
        pa_res = re.split(r'[/]+', k)
        dir, *file = pa_res[1:]

        if file:
            file = ''.join([x.title() for x in file])
        else:
            file = dir

        file += '.yml'

        dirs = os.path.join(project_, dir)

        if not os.path.exists(dirs):
            os.mkdir(dirs)

        os.chdir(dirs)

        if len(v) > 1:
            v = {'post': v.get('post')}
        for _k, _v in v.items():
            method = _k
            api = k
            caseName = _v.get('description')
            data_or_params = 'params' if method == 'get' else 'data'
            parameters = _v.get('parameters')

            data_s = ''
            try:
                for each in parameters:
                    data_s += each.get('name')
                    data_s += ': \n'
                    data_s += ' ' * 8
            except TypeError:
                data_s += '{}'

        file_ = os.path.join(dirs, file)

        with open(file_, 'w', encoding='utf-8') as fw:
            fw.write(template.format(
                method=method,
                api=api,
                caseName=caseName,
                data_or_params=data_or_params,
                data=data_s
            ))

        os.chdir(project_)
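Pointed at a project's api-docs endpoint (the URL and project name below are hypothetical), one call lays out the whole skeleton:

auto_gen_cases('http://your-host/v2/api-docs', 'projectA')
# -> ./projectA/<resource>/<Resource>.yml, one filled-in template per interface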

Now, when it is time to start interface test coverage for a project, as long as the project integrates Swagger the project skeleton can be generated in seconds, and testers only need to concentrate on designing the interface test cases.

