HttpRunner now supports HAR!!!

If those three exclamation points mean nothing to you, you probably haven’t met HAR yet.

What is HAR?

HAR (HTTP Archive) is a specification published under the World Wide Web Consortium (W3C). Simply put, HAR defines a JSON file format for documenting everything about an HTTP interaction, including a detailed record of the request, the response, and performance metrics.

Although the HAR standard is still a Draft, it has been widely adopted by the industry, and many of the tools we use every day already support it. I believe more than a few of the tools in the following list will be familiar to you:

  • Fiddler
  • Charles Web Proxy
  • Google Chrome
  • Firebug
  • HttpWatch
  • Firefox
  • Internet Explorer 9
  • Microsoft Edge
  • Paw
  • Restlet Client

As you can see, the list covers mainstream packet capture tools, browsers, and interface testing tools. All of these tools support the HAR standard and can export the packets they record as .har files.

If we can convert HAR files into HttpRunner’s automated test cases, then HttpRunner can work with this wide variety of tools and gain interface recording and test case generation capabilities, which greatly improves its flexibility and ease of use.

So is it feasible to convert the HAR format into HttpRunner’s automated test cases?

Let’s look at the format of HAR first.

HAR format details

Using any of the tools listed above, you can export the recorded packets as a .har file. When we open a .har file in a text editor, we find a JSON data structure.

By default, the JSON in a .har file is compressed, which is not very readable. It is recommended to install a Prettify JSON plug-in for your text editor, which can convert the compressed JSON into a nicely indented format with one click.
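Alternatively, if you have Python available, its built-in json.tool module can pretty-print the file from the command line (demo.har here is just a placeholder for your exported file):

$ python -m json.tool demo.har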

Better yet, we can look directly at the HAR format standard written by the W3C.

HAR is a JSON data structure with only one top-level key, whose name can only be log. The value of log is also a JSON object, whose keys include: version, creator, browser, pages, entries and comment.

{
    "log": {
        "version": "",
        "creator": {},
        "browser": {},
        "pages": [],
        "entries": [],
        "comment": ""
    }
}

Among these, version, creator and entries are mandatory fields that will be present in any .har file exported by any tool. When generating automated test cases, we only need the content of the HTTP requests and responses, all of which is contained in entries, so entries is the only field we need to focus on.

The value of the entries field is a list structure that records the details of the HTTP requests, sorted by request time. Specifically, each HTTP request logs the following information:

"Entries ": [{"pageref": "page_0", "startedDateTime":" 2009-04-16T12:07:23.596z ", "time": 50, "request": [{"page ": "page_0", "startedDateTime": "2009-04-16t12:07:23.596z ", "time": 50, "request": {...}, "response" : {...}, "cache" : {...}, "timings" : {}, "serverIPAddress" : "10.0.0.1", "connection" : "52492", "comment": "" }, ]Copy the code

As you can see, the recorded HTTP information is very comprehensive, covering everything in the HTTP interaction.

From the point of view of generating automated test cases, we don’t need all of this information; we only need to extract the key parts.

The most critical information for writing an automated test case is the interface’s request URL, request method, request headers, request data, and so on, all of which are contained in the dictionary under the request field.

"Request" : {" method ":" GET ", "url" : "http://www.example.com/path/?param=value", "httpVersion" : "HTTP / 1.1", "cookies" : [], "headers": [], "queryString" : [], "postData" : {}, "headersSize" : 150, "bodySize" : 0, "comment" : "" }Copy the code

From this information, we can complete the construction of the HTTP request.
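To make that concrete, here is a minimal Python sketch (my own illustration, not har2case’s actual code) that rebuilds and replays the first recorded request using the third-party requests library; demo.har is a placeholder path:

import json

import requests  # third-party: pip install requests

with open("demo.har", encoding="utf-8") as f:
    har = json.load(f)

request = har["log"]["entries"][0]["request"]

# HAR stores headers as a list of {"name": ..., "value": ...} pairs
headers = {h["name"]: h["value"] for h in request["headers"]}

# postData.text holds the request body, if the request had one
body = request.get("postData", {}).get("text")

# the url field already contains the query string, so the parsed
# queryString list does not need to be re-applied here
resp = requests.request(request["method"], request["url"],
                        headers=headers, data=body)
print(resp.status_code)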

For the test to automatically determine whether the interface responded correctly after the request is sent, we also need to set some assertions. All information related to the HTTP response is contained in the dictionary under the response field.

"Response" : {" status ": 200," statusText ":" OK ", "httpVersion" : "HTTP / 1.1", "cookies" : [], "headers" : [], "content" : {}, "redirectURL": "", "headersSize" : 160, "bodySize" : 850, "comment" : "" }Copy the code

For generality, we always check whether the HTTP response status code is correct, which corresponds to the status field. If we want additional checks at the interface business level, we can also verify that key fields in the response content match expectations, which corresponds to the content field.

"content": {
    "size": 33,
    "compression": 0,
    "mimeType": "text/html; charset=utf-8",
    "text": "\n",
    "comment": ""
}

The content field can be a little more complicated, because an interface can respond with content in a variety of formats.

For example, the response may be a text/html page or application/json data. The specific type can be read from mimeType, and the actual content is stored in the text field.

In addition, the response data may sometimes be encoded, most commonly as Base64. We can read the encoding scheme from the encoding field, decode the text accordingly, and finally recover the original response content.

"content": {
    "size": 63,
    "mimeType": "application/json; charset=utf-8",
    "text": "eyJJc1N1Y2Nlc3MiOnRydWUsIkNvZGUiOjIwMCwiVmFsdWUiOnsiQmxuUmVzdWx0Ijp0cnVlfX0=",
    "encoding": "base64"
},

In the example above, the encoding field tells us the content is Base64-encoded, and the text field holds the encoded content. We can then decode it with Base64 to recover the original content.

>>> import base64
>>> text = "eyJJc1N1Y2Nlc3MiOnRydWUsIkNvZGUiOjIwMCwiVmFsdWUiOnsiQmxuUmVzdWx0Ijp0cnVlfX0="
>>> base64.b64decode(text)
b'{"IsSuccess":true,"Code":200,"Value":{"BlnResult":true}}'

At the same time, since mimeType tells us the data type is application/json, we can run json.loads on the decoded text to get a JSON data structure the program can work with.
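Putting these steps together, a small helper (again my own sketch, not har2case’s code) can decode an entry’s content dict and parse JSON bodies in one go:

import base64
import json

def parse_content(content):
    """Return the response body from a HAR content dict,
    parsed as JSON when applicable."""
    text = content.get("text", "")
    # decode first if the body was stored Base64-encoded
    if content.get("encoding") == "base64":
        text = base64.b64decode(text).decode("utf-8")
    # parse JSON bodies so individual fields can be asserted on
    if "application/json" in content.get("mimeType", ""):
        return json.loads(text)
    return text

For the content shown above, parse_content returns {"IsSuccess": True, "Code": 200, "Value": {"BlnResult": True}}, ready for assertions on fields such as Code.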

As this walkthrough shows, the HAR format is very clear, and once you understand it well, writing a test case conversion tool is not hard at all.

har2case

There’s not much to say about the coding process, so let’s just look at the final product.

The resulting tool is har2case, a command-line tool that converts .har files directly into automated test cases in YAML or JSON format.

har2case has been uploaded to PyPI and can be installed via pip or easy_install.

$ pip install har2case
# or
$ easy_install har2case

It is easy to use: simply pass the path of the source .har file and the path of the target YAML/JSON file after the har2case command.

$ har2case tests/data/demo.har demo.yml
INFO:root:Generate YAML testset successfully: demo.yml

$ har2case tests/data/demo.har demo.json
INFO:root:Generate JSON testset successfully: demo.json

A target file ending in .yml or .yaml generates a YAML file; one ending in .json generates a JSON file.

If the target file is not specified, a JSON file with the same name and path as the .har source file is generated by default.

$ har2case tests/data/demo.har
INFO:root:Generate JSON testset successfully: tests/data/demo.json

You can run the har2case -h command to view detailed usage.

In most cases, the generated cases can be used in HttpRunner directly. Whether you then use them for interface automation testing, interface performance testing, or continuous integration and online monitoring is up to you.

However, if the recorded scenario contains dynamic correlation, where a later request’s parameters depend on an earlier interface’s response and change each time the interface is called, then the correlation has to be wired into the generated script manually, possibly even by writing some custom functions. The sketch below shows what that looks like.
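As a rough illustration, HttpRunner’s YAML cases support extracting a value from one response and referencing it in a later request via the extract mechanism and $variable syntax; exact keywords may differ between versions, and the endpoints and token field below are made up for the example:

- test:
    name: get token
    request:
        url: http://example.com/api/get-token
        method: POST
    extract:
        - token: content.token

- test:
    name: use the extracted token in the next request
    request:
        url: http://example.com/api/users
        method: POST
        headers:
            token: $token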

The follow-up plan

At this point, I’m sure you can understand the three exclamation points at the beginning of this article; I really am excited to announce this new feature.

After putting it to practical use on a small scale, the results are very good: the efficiency of writing interface automation test cases has improved greatly. Moreover, thanks to the openness of HAR itself, users are left with plenty of choices in tooling.

Even so, I think HttpRunner’s ease of use could be improved even more.

Currently, I have two new features planned for completion in the near future:

  • Support Postman: convert files in the Postman Collection Format into the YAML/JSON test cases supported by HttpRunner;
  • Support Swagger: convert API definitions written in Swagger into the YAML/JSON test cases supported by HttpRunner.

Once these two new features are complete, I expect HttpRunner’s ease of use to reach the next level.

If you have any better ideas, please feel free to contact me.