1. Automated testing framework

In the eyes of most testers, the very word “framework” feels mysterious and out of reach. It seems complicated mainly because it is complicated to apply: every company, business line, and product line has different business processes, so an “automated testing framework” behaves very differently from one project to the next and is hard to pin down as a fixed structure. In fact, a real automated testing framework is not a fixed pattern but a collection of ideas and methods; colloquially speaking, that collection is the framework.

2. Ideas behind automated testing frameworks

To better understand automated testing frameworks, let’s start from how automated testing has evolved. Testers with roughly three or more years of experience who have been exposed to automated testing should already have some understanding of the following concepts behind automated testing frameworks:

  • Modular thinking

  • Library thinking

  • Data-driven thinking

  • Keyword-driven thinking

Each of the above is only an idea for automated testing, not a framework by itself. As mentioned above, framework = ideas + methods, and from these ideas the following five frameworks have evolved:

1. Modular test script framework

This requires creating small, independent scripts that correspond to modules, fragments, and parts of the application under test. These small scripts are then combined in a hierarchical (tree-like) way to build the script for a particular test case.
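As a minimal illustration of the modular idea (the URLs and function names below are placeholders, not part of any real project), two small interface modules can be combined into one test case:

# Modular idea (illustrative sketch): each small, independent function covers one
# module of the application under test; test cases are assembled from these modules.
import requests

def login(session, name, password):
    # Login module; placeholder URL.
    return session.post("https://www.xxx.com/api/users/login",
                        json={"name": name, "password": password})

def create_directory(session, dir_name):
    # "Create directory" module; placeholder URL.
    return session.post("https://www.xxx.com/api/directories",
                        json={"name": dir_name})

def test_create_directory_after_login():
    # A test case assembled from the two modules above.
    session = requests.Session()
    assert login(session, "user1", "123456").status_code == 200
    assert create_directory(session, "demo").status_code == 200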

2. Test library framework

It is similar to the modular test scripting framework and has the same advantages. The difference is that the test library framework breaks down the application under test into procedures and functions rather than scripts. The framework requires the creation of library files that describe modules, fragments, and the application under test.

3. Keyword-driven or table-driven test framework

This framework requires the development of data tables and keywords. These tables and keywords are independent of the test automation tool that executes them and are used to “drive” the application under test and its data through the test script code; keyword-driven tests look very much like manual test cases. In a keyword-driven test, the functionality of the application under test is written into a table, together with the execution steps of each test.

This testing framework can produce a large number of test cases with very little code. The same code is reused as data tables are used to generate individual test cases.
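A minimal sketch of the keyword-driven idea follows; the keywords, steps, and login scenario are hypothetical and only illustrate how a table of keywords can drive the test code:

# Keyword-driven idea (illustrative sketch): each table row is a keyword plus its
# arguments; the driver maps the keyword to a function and executes it.

def open_page(url):
    print(f"open {url}")

def input_text(field, value):
    print(f"type '{value}' into {field}")

def click(element):
    print(f"click {element}")

KEYWORDS = {"open_page": open_page, "input_text": input_text, "click": click}

# The "table": application functionality written as execution steps.
login_case = [
    ("open_page", ["https://www.xxx.com/login"]),
    ("input_text", ["username", "user1"]),
    ("input_text", ["password", "123456"]),
    ("click", ["login_button"]),
]

for keyword, args in login_case:
    KEYWORDS[keyword](*args)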

4. Data-driven testing framework

Here, input and expected output data are read from data files (data pools, ODBC sources, CSV files, Excel files, JSON files, YAML files, ADO objects, etc.) and loaded into variables in scripts that are either captured by a tool or written by hand. In this framework, variables store not only input values but also the output values used for verification. Throughout the run, the test scripts read the data files and record the test status and information. This is similar to table-driven testing in that the test cases are contained in data files rather than in the scripts, which act only as a “driver” or transport mechanism for the data. Data-driven testing differs from table-driven testing, however, in that navigation data is not contained in the table structure: in data-driven testing, only test data is contained in the data files.
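A minimal data-driven sketch, assuming the login cases live in a hypothetical CSV file (the URL and column names are placeholders):

# Data-driven idea (illustrative sketch): inputs and expected outputs live in a CSV
# data file; the script is only a "driver" that loops over the rows.
import csv
import requests

def run_login_cases(csv_path):
    # Hypothetical columns: name, password, expected_status, expected_success
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            resp = requests.post(
                "https://www.xxx.com/api/users/login",   # placeholder URL
                json={"name": row["name"], "password": row["password"]},
            )
            assert resp.status_code == int(row["expected_status"])
            assert resp.json()["success"] == (row["expected_success"] == "true")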

5. Hybrid test automation framework

The most common implementation is a combination of all of the techniques described above, taking their strengths and compensating for their weaknesses. Such a hybrid testing framework evolves from the frameworks above over time and across many projects.

3. Framework strategy for interface automation testing

  1. The framework is designed for testers; other testers simply add test cases to it. Therefore, the framework design must be kept simple in three respects: simple to operate, simple to maintain, and simple to extend.

  2. At the same time, the framework design must be combined with the business processes rather than relying on technology alone; in practice the technology is not the hard part, understanding and mastering the business processes is.

  3. The framework should encapsulate basic operations into common components, for example wrapping GET requests, POST requests, and assertions into common classes (see the sketch after this list).

  4. Test cases are separated from the code to make case management easier, which is why we chose the data-driven idea described above.
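To make point 3 concrete, here is a minimal sketch of what such common classes might look like; the class names HttpClient and Assert are hypothetical and not part of any library:

# Common encapsulation (illustrative sketch): GET/POST requests and assertions are
# wrapped once and reused by every test case.
import requests

class HttpClient:
    # Common request wrapper shared by all interface test cases.
    def __init__(self, base_url):
        self.base_url = base_url
        self.session = requests.Session()

    def get(self, path, **kwargs):
        return self.session.get(self.base_url + path, **kwargs)

    def post(self, path, **kwargs):
        return self.session.post(self.base_url + path, **kwargs)

class Assert:
    # Common assertion helpers shared by all interface test cases.
    @staticmethod
    def status_code(resp, expected):
        assert resp.status_code == expected, f"expected {expected}, got {resp.status_code}"

    @staticmethod
    def json_value(resp, key, expected):
        assert resp.json().get(key) == expected, f"unexpected value for '{key}'"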

4. Design of the interface automation test framework

1. Before designing the interface framework, let’s take a look at some current mainstream interface automation tools and frameworks

2. Features of the above tools

| Tool | Learning cost | Recording | Continuous integration | Test report | Case management | Performance testing | Extension difficulty | Minimum requirement |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Java+TestNG+Maven | High | No | Yes | Yes | Difficult | Yes | Medium | Java |
| Requests+Python | Low | No | Yes | Yes | Difficult | Yes | Medium | Python |
| Robot Framework | Low | No | Yes | Yes | Easy | No | High | Tool components |
| HttpRunner | Low | Yes | Yes | Yes | Easy | Yes | Low | Python |

Python+Requests and HttpRunner are the preferred options. Let’s analyze the use case execution process in each of the two frameworks.

3. Use case parsing

Python’s Requests library has a unified interface for all HTTP request methods

requests.request(method, url, **kwargs)

kwargs covers all the other possible HTTP request information, such as headers, cookies, params, data, auth, and so on. So as long as interface test cases follow the Requests parameter specification, the concept of Requests parameters can be reused directly in the cases; HttpRunner, for its part, simply reads the parameters from the test case and passes them to Requests, as the sketch below illustrates.
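For illustration, a login request can be expressed as a plain dictionary of Requests keyword arguments and handed to requests.request; this is the pattern HttpRunner relies on (the URL is a placeholder):

import requests

# Everything except url and method maps directly onto requests.request() keyword arguments.
req_kwargs = {
    "headers": {"content-type": "application/json"},
    "json": {"name": "user1", "password": "123456"},
}
resp = requests.request(
    method="POST",
    url="https://www.xxx.com/api/users/login",   # placeholder URL
    **req_kwargs,
)
print(resp.status_code)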

1) Requests

import unittest
import requests

class LoginTest(unittest.TestCase):
    def test_login(self):
        url = "https://www.xxx.com/api/users/login"
        data = {
            "name": "user1",
            "password": "123456"
        }
        resp = requests.post(url, json=data)
        self.assertEqual(200, resp.status_code)
        self.assertEqual(True, resp.json()["success"])

This use case sends an HTTP POST request and then checks the response to verify that the status code and other fields meet expectations.

There are two problems with such a use case in a real project:

  • The use case pattern is basically fixed, so there will be a large number of similar or repeated use cases, and maintaining them is a big problem

  • The use cases are not separated from the execution code, nor is the parameter data, which also makes maintenance difficult

2) HttpRunner processes test cases in JSON/YAML format. A separated test case is described as follows:

{
    "name": "test login",
    "request": {
        "url": "www.xxx.com/api/users/login",
        "method": "POST",
        "headers": {
            "content-type": "application/json"
        },
        "json": {
            "name": "user1",
            "password": "123456"
        }
    },
    "response": {
        "status_code": 200,
        "headers": {
            "Content-Type": "application/json"
        },
        "body": {
            "success": true,
            "msg": "user login successfully."
        }
    }
}

3) HttpRunner use case execution engine

 
def run_testcase(testcase):
    req_kwargs = testcase['request']
    try:
        url = req_kwargs.pop('url')
        method = req_kwargs.pop('method')
    except KeyError:
        raise exception.ParamsError("Params Error")

    resp_obj = requests.request(url=url, method=method, **req_kwargs)
    diff_content = utils.diff_response(resp_obj, testcase['response'])
    success = False if diff_content else True
    return success, diff_content
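Before breaking the engine down step by step, here is a hedged sketch of how it might be driven: load a test case from a JSON file (a hypothetical login.json containing the structure shown in 2) above) and pass it to run_testcase:

import json

# Illustrative driver (not HttpRunner source code).
with open("login.json") as f:       # hypothetical file holding the JSON test case
    testcase = json.load(f)

success, diff_content = run_testcase(testcase)
print("PASSED" if success else f"FAILED: {diff_content}")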

4) Get the HTTP interface request parameters from the test case: testcase['request']

{
    "url": "www.xxx.com/api/users/login",
    "method": "POST",
    "headers": {
        "content-type": "application/json"
    },
    "json": {
        "name": "user1",
        "password": "123456"
    }
}

5) Initiate an HTTP request

requests.request(url=url, method=method, **req_kwargs)

6) Check the test results, that is, the assertions

utils.diff_response(resp_obj, testcase['response'])
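utils.diff_response compares the actual response with the expected block of the test case. A rough sketch of what such a comparison could look like (an illustration under assumed behavior, not HttpRunner's actual implementation):

def diff_response(resp_obj, expected):
    # Illustrative sketch only. Compares a requests.Response object against the
    # expected block of a test case; an empty dict means all checks passed.
    diff = {}

    exp_status = expected.get("status_code")
    if exp_status is not None and resp_obj.status_code != exp_status:
        diff["status_code"] = {"expected": exp_status, "actual": resp_obj.status_code}

    for key, exp_value in expected.get("headers", {}).items():
        if resp_obj.headers.get(key) != exp_value:
            diff.setdefault("headers", {})[key] = {
                "expected": exp_value, "actual": resp_obj.headers.get(key)}

    expected_body = expected.get("body", {})
    actual_body = resp_obj.json() if expected_body else {}
    for key, exp_value in expected_body.items():
        if actual_body.get(key) != exp_value:
            diff.setdefault("body", {})[key] = {
                "expected": exp_value, "actual": actual_body.get(key)}

    return diff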

5. Implementing the interface automation testing framework

We use the HttpRunner tool and design the framework according to the principles of ease of use and ease of maintenance.

1. Introduction to HttpRunner

Main features:

  • It integrates all features of Requests to meet various testing requirements for HTTP and HTTPS

  • Test cases are separated from the code, and test scenarios are described in the form of YAML/JSON to ensure the maintainability of test cases

  • Test cases support parameterization and data-driven mechanisms

  • Supports interface recording and use case generation based on HAR files

  • Combined with the Locust framework, distributed performance testing can be implemented without additional effort

  • Execution is invoked from the CLI, which combines well with Jenkins and other continuous integration tools

  • Statistical reports of test results are concise and clear, with detailed statistics and logging

  • It is extensible, making it easy to build a Web platform on top of it

2. Environment preparation

Install Homebrew (a macOS package manager, similar to apt-get or yum)

  • Execute in the terminal:
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
  • Install pyenv and configure environment variables. pyenv is a Python version manager that lets you manage multiple Python versions at the same time (HttpRunner is developed in Python and requires Python 3.6.0 or above):
brew install pyenv
echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.bash_profile
echo 'export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.bash_profile
echo 'eval "$(pyenv init -)"' >> ~/.bash_profile
exec $SHELL -l
  • Install Python 3.6:
pyenv install --list   # List the available Python versions
pyenv install 3.6.0    # Install Python 3.6.0
pyenv rehash           # Refresh the pyenv shims
pyenv versions         # List installed Python versions; the one marked with an asterisk (*) is the current version
  • Choose the Python version:
pyenv global 3.6.0     # Set the global version; the system default Python becomes 3.6.0
  • Install and verify HttpRunner:
pip install httprunner   # Install HttpRunner
hrun -v                  # If a version number such as 0.9.8 is displayed, the installation succeeded

At this point, the HttpRunner environment is ready.

3. Use case management

In HttpRunner, the test case engine supports case descriptions in YAML/JSON format.

The advantages of writing and maintaining test cases in YAML/JSON format are obvious:

  • Compared with the table form, it is more flexible and can carry richer information;

  • Compared with the code form, it reduces unnecessary repetition of programming-language syntax, unifies the way use cases are described as much as possible, and improves the maintainability of use cases.

YAML format

JSON format

The following is an example from the R&D platform of Sulan-Digital Platform 2.X (in JSON format).

Scenario: after a project space is created, a Demo example needs to be created quickly, that is, various directories and tasks are created automatically.

1) Determine the interfaces used in the business process, debug them with Postman or JMeter, and classify them

  • Query (GET request) interfaces: query task directories, resource groups, workflows, and so on

  • Creation (POST request) interfaces: create a directory, create a task, and so on

2) Determine the interface sequence according to the business process

  • To create a task under a directory, call the create-directory interface first and then the create-task interface

3) Fill in the interface information in the JSON file according to the rules

  • Interface Base_Url

  • Interface path

  • Interface request method

  • Interface request parameters

  • Interface assertions

  • Interface return parameters (a parameter returned by the previous interface is used when chaining interfaces)

Here are some examples of use cases

4) After filling in the use cases, execute the use case file; for example, if the JSON file is task.json:

hrun task.json

5) View the running result

  • A reports folder is automatically generated in the current directory. Inside it you can see HTML report files named with a timestamp (each execution generates one HTML file)


  • Open the HTML file to view the results

All cases passed

Some cases passed

  • Click Log to view the detailed request and response information

  • Click Traceback to view and locate error messages

[About the author: Hyun Kong, 6 years of testing-related work experience, formerly the search test leader at Wedoctor Group, responsible for server-side testing, interface automation, continuous integration testing, performance testing, and test development. Participated in China Mobile's government-enterprise capability improvement project.]