According to the HttpRunner documentation, the HttpRunnerManager project is no longer maintained, so what can be done? First copy the project from someone else's GitHub for secondary development (GitHub address attached).

1. To run the HttpRunnerManager project locally, RabbitMQ cannot be left out:

A. Install MySQL 5.7 and create a database named HttpRunner;

B. Install the RabbitMQ service (it depends on the Erlang environment).

2. Modify the configuration in HttpRunnerManager/settings.py: the database name can be customized, and the RabbitMQ listening address (BROKER_URL) can also be modified to match your own RabbitMQ service.
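For reference, a minimal sketch of the relevant settings.py section (the database account and broker host are assumptions; adjust them to your own environment):

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'HttpRunner',    # the database created in step 1.A
        'USER': 'root',          # assumed MySQL account
        'PASSWORD': 'root',      # assumed password
        'HOST': '127.0.0.1',
        'PORT': '3306',
    }
}

# Celery broker pointing at the local RabbitMQ service
BROKER_URL = 'amqp://guest:guest@localhost:5672//'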

3. The dependencies need to be installed manually with pip install -r requirements.txt. Some problems will come up, such as the pinned Django version being quite old.

  • You can add the -i parameter followed by the URL of a domestic mirror source to speed up installation (see the example below).
  • pip freeze > requirements.txt facilitates environment migration; run it in the project root directory and it writes the installed dependencies to the file.
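For example (the Tsinghua mirror below is just one common choice):

pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple   # install via a domestic mirror
pip freeze > requirements.txt   # export the installed dependencies of the current environment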

The above command has a flaw: it exports every library installed in the Python environment. So the question is, what if I only want the libraries actually used in the current project?

Here comes the pipreqs tool: it scans the project directory to discover which libraries are used, automatically generates a dependency list, and writes only the project-specific dependencies to requirements.txt.

pip3 install pipreqs failed at first; after checking with pip3 search pipreqs, installing a pinned version worked: pip3 install pipreqs==0.4.9

pipreqs your_projectName (add --force to overwrite an existing requirements.txt)

If it then fails with an encoding error related to open(..., encoding='utf-8'), specify the encoding explicitly and force regeneration:

pipreqs ./ --encoding=utf8 --force

  • pip install -r requirements.txt

Download the .whl file from www.lfd.uci.edu/~gohlke/pyt… into the Python37/Scripts directory, then install it with pip install <filename>.whl

Django==2.2.5    # installed by downloading the .whl file
PyYAML==5.1.2
requests==2.22.0
eventlet==0.25.1
mysqlclient==1.4.4
django-celery==3.3.1
flower==0.9.3
dwebsocket==0.5.11
paramiko
HttpRunner==2.2.6

4. After the settings.py configuration and dependency installation are complete, generate the database migration scripts and create the table structure; run the following in the HttpRunnerManager directory:

# generate the data migration scripts

python manage.py makemigrations ApiManager 

# apply the migrations to the database and generate the tables

python manage.py migrate 

5. Create the super administrator for the management back end:

python manage.py createsuperuser. Then open http://127.0.0.1:8000/admin/ and log in with the username and password set in this step to operate the back-end management system and manage data there.

6. Start the service:

python manage.py runserver 0.0.0.0:8000 (if the address is omitted, the default 127.0.0.1:8000 is used).

Register a user at http://127.0.0.1:8000/api/register/ and enjoy the platform!

The register interface call reports an error: tasks.py and views.py in the ApiManager directory both contain from httprunner import HttpRunner, which does not exist. The platform could not be used right after starting, so first read the error messages and fix whichever interface is reporting them.

Fix: change from httprunner import HttpRunner to from httprunner.api import HttpRunner.

There is also a bug in the source: main_hrun.delay(...) is used to call the task method. Note that main_hrun takes (testcase_dir_path, report_name) as arguments and is a plain function, so it has no delay method.
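A minimal sketch of the fix in ApiManager/views.py and tasks.py (the surrounding view code is omitted; only the import and the call change):

# old, fails under HttpRunner 2.x:  from httprunner import HttpRunner
from httprunner.api import HttpRunner   # corrected import

# old:  main_hrun.delay(testcase_dir_path, report_name)
# main_hrun is a plain function here, not a Celery task, so it has no .delay();
# call it directly for synchronous execution:
main_hrun(testcase_dir_path, report_name)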

views.py (the view functions) in the ApiManager directory backs all the interface requests from the pages. For some reason the web pages also have problems: the page styles are not displayed nicely and no JS events are triggered.

After downloading the source code and deploying the environment successfully, I ran the web app and found some style differences from the author's screenshots.

The HTML source needs to be modified: in HttpRunnerManager/templates/base.html, around line 23, replace

cdn.amazeui.org/amazeui/2.7…

with

cdn.bootcss.com/amazeui/2.7… or cdn.clouddeep.cn/amazeui/1.0…

7. Start the worker. If you choose synchronous execution and are sure you will not use periodic tasks, this step can be skipped. Visit http://localhost:5555/dashboard to view the task list and task status.

celery -A HttpRunnerManager worker --loglevel=info   # start the worker
celery -A HttpRunnerManager beat --loglevel=info     # start the scheduler for periodic tasks
celery flower                                        # start the task monitoring back end

Tips: the web platform based on the HttpRunner framework is now up and running!

8. Install Locust, the Python-based performance testing platform that HttpRunner supports: pip install locustio==0.11.0. Prepare the interface data file in JSON format, launch the locustfile with locusts -f <JSON file path>, and open http://127.0.0.1:8089/.

Tips: to enable multiple processes at the same time: locusts -f testsuites/derate.yaml --processes 4

9. To generate JSON-format use cases, capture the interface data with a packet-capture tool, export a .har file, and then run har2case <har file path>.

har2case generates test cases in JSON format by default; with -2y it generates YAML test cases instead.

You can then run hrun on the JSON data file to execute the interface tests and generate an HTML report under the current directory.
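Assuming the capture tool exported a file named demo.har (a hypothetical name), the workflow looks roughly like this:

har2case demo.har        # generates demo.json, a JSON test case (the default)
har2case demo.har -2y    # generates demo.yml, a YAML test case, instead
hrun demo.json           # run the test and generate an HTML report under the current directory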

Tips: to learn how to use the HttpRunner testing framework, scaffold a demo project with httprunner --startproject your_projectName

10. hrun interface testing and parameter correlation (parameters can only be passed forward, from earlier steps to later ones). HttpRunner supports parameter extraction (extract) and parameter referencing ($var). Locally, export a .har file containing several interface requests and generate the JSON file in the way described above.

"extract": [{"token": "content.content.token"}],

If you need to extract more than one value, append additional key-value pairs to the extract list.

If a [] array contains multiple {} objects, use a numeric index in the extraction path, e.g. content.items.0.name, to pick the element at that position.
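A minimal sketch of a two-step JSON test case combining extract and the $var reference (the URLs, field names and account data are made up for illustration; the layout follows the HttpRunner 2.x config/teststeps format):

{
    "config": {"name": "login then query", "base_url": "http://127.0.0.1:8000"},
    "teststeps": [
        {
            "name": "login",
            "request": {"url": "/api/login/", "method": "POST",
                        "json": {"user": "test", "pwd": "123456"}},
            "extract": [{"token": "content.content.token"}],
            "validate": [{"eq": ["status_code", 200]}]
        },
        {
            "name": "query with the extracted token",
            "request": {"url": "/api/query/", "method": "GET",
                        "headers": {"Authorization": "$token"}},
            "validate": [{"eq": ["status_code", 200]}]
        }
    ]
}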

If you want to extract from the login interface and reuse the value, each use case can also be split into two JSON/YAML files that are executed together.

Click log in the report to view the request parameters and the resulting response assertions.

11. hrun --help shows the basic usage of HttpRunner:

--log-level LOG_LEVEL        specify the logging level during execution; the default is INFO

--log-file LOG_FILE          write logs to the specified file path

--dot-env-path DOT_ENV_PATH  specify the .env file path, which is useful for keeping sensitive data out of test cases

This is handy when running the same scripts against multiple environments, e.g. hrun testsuites/test.json --dot-env-path=prod.env (a sample .env is sketched after this option list).

--report-template REPORT_TEMPLATE  specify the report template path

--report-dir REPORT_DIR      specify the report save directory

--report-file REPORT_FILE    specify the report file name (the directory must exist, and remember the .html suffix)

--failfast                   stop the test run on the first error or failure; by default all use cases are executed

--save-tests                 save loaded tests and parsed tests to JSON files

--startproject STARTPROJECT  specify a new project name to scaffold

--validate [VALIDATE ...]    validate the JSON test case format; multiple test cases can be checked at once

--prettify [PRETTIFY ...]    prettify the JSON test case format
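As a sketch of the .env file used with --dot-env-path (the key names and values here are made-up assumptions), it is just KEY=VALUE lines:

# prod.env
USERNAME=admin
PASSWORD=123456
BASE_URL=http://prod.example.com

Inside a test case, such a value can then be referenced as ${ENV(USERNAME)}.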

JSON/YAML use case validator (assertion) usage:
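A small sketch of the two validator notations, the shorthand and the full form (the checked field and expected values are assumptions):

"validate": [
    {"eq": ["status_code", 200]},
    {"check": "content.content.token", "comparator": "len_gt", "expect": 0}
]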

Usage tips

About errors:

  • hrun xxx.json --log-level debug (print detailed debug-level logs)
  • hrun xxx.json --log-level debug --log-file log_name.txt (also write the logs to a file)

Notes on pitfalls:

1. If a test case is executed from inside the /testcases directory, an "api not found" error is reported.

Reason: the api and suite contents must be loaded before the layered test case can be resolved.

Solution: when executing layered test cases, first cd to the project root directory, i.e. the level above the testcases directory, and run something like hrun testcases/login.json.
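For reference, the scaffold generated by hrun --startproject looks roughly like this, and hrun should be run from the root level shown here:

your_project/
├── api/            # layered api definitions
├── testcases/      # test cases that reference the api layer
├── testsuites/     # test suites
├── reports/        # generated HTML reports
├── debugtalk.py    # custom functions
└── .env            # environment variables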

2. If the layered scripts involve parameterization, i.e. running against multiple environments, switch environments with the --dot-env-path option described above.

Usage demo

1. First register, then create a project; use cases are managed per project, in a project - module - use case hierarchy. Right now creating a new use case has no effect. Could the reason be that RabbitMQ has not been deployed? It shouldn't be!
