1. Introduction

Hello, I’m Anguo!

Recently, a friend left me a message saying that he had written an API for uploading large files with Django. He wanted to test the stability of the interface under concurrency locally, and asked whether I had a good solution.

Using file upload as an example, this article walks through the complete process of executing Python scripts concurrently with JMeter.

2. Uploading files with Python

Uploading a large file involves the following steps (a minimal sketch of the class that ties them together follows the list):

  • Get the file information and the number of chunks

  • Slice the file and upload the chunks (API)

  • Merge the chunks (API)

  • Parameterize the file path
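The snippets in the following subsections use self and self.chunk_size, so they are assumed to be methods of a small helper class. Here is a minimal sketch of that class; the name FileApi is inferred from the __main__ block in section 2-4, and only the constructor is assumed here:

# fileupload.py -- assumed skeleton; the methods are filled in below
class FileApi:

    def __init__(self, chunk_size):
        # Preset size of each chunk, in bytes
        self.chunk_size = chunk_size

    # get_file_md5 / get_filename / get_chunk_info  -> section 2-1
    # do_chunk_and_upload / __upload                -> section 2-2
    # merge_file                                    -> section 2-3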

2-1 Obtain the file information and the number of chunks

First, get the size of the file.

Then, use the preset chunk size to calculate the total number of chunks.

Finally, get the file name and its MD5 value.

import os
import math
import hashlib


def get_file_md5(self, file_path):
    """Get the MD5 value of the file"""
    with open(file_path, 'rb') as f:
        data = f.read()
        return hashlib.md5(data).hexdigest()


def get_filename(self, filepath):
    """Get the original file name"""
    # File name with suffix
    filename_with_suffix = os.path.basename(filepath)
    # File name
    filename = filename_with_suffix.split('.')[0]
    # Suffix
    suffix = filename_with_suffix.split('.')[-1]
    return filename_with_suffix, filename, suffix


def get_chunk_info(self, file_path):
    """Get the chunk information of the file"""
    # Total size of the file
    file_total_size = os.path.getsize(file_path)
    print(file_total_size)

    # Total number of chunks
    total_chunks_num = math.ceil(file_total_size / self.chunk_size)

    # File name (with suffix)
    filename = self.get_filename(file_path)[0]

    # MD5 value of the file
    file_md5 = self.get_file_md5(file_path)

    return file_total_size, total_chunks_num, filename, file_md5
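Note that get_file_md5 above reads the entire file into memory before hashing it, which can be costly for the very large files this article targets. A streaming variant is sketched below; get_file_md5_stream is a hypothetical alternative, not part of the original script:

def get_file_md5_stream(self, file_path):
    """Hypothetical alternative: hash the file without loading it all into memory"""
    md5 = hashlib.md5()
    with open(file_path, 'rb') as f:
        # Read and hash the file in 8 KB blocks
        for block in iter(lambda: f.read(8192), b''):
            md5.update(block)
    return md5.hexdigest()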

2-2 Slice the file and upload the chunks

Using the total number of chunks and the chunk size, slice the file and call the chunk-upload API for each chunk.

import requests


def do_chunk_and_upload(self, file_path):
    """Slice the file into chunks and upload them"""
    file_total_size, total_chunks_num, filename, file_md5 = self.get_chunk_info(file_path)

    # Traverse the chunks
    for index in range(total_chunks_num):
        print('Uploading chunk {}'.format(index + 1))

        # Size of the current chunk (chunk_size is the preset chunk size set in the __main__ block)
        if index + 1 == total_chunks_num:
            partSize = file_total_size % chunk_size
        else:
            partSize = chunk_size

        # File offset of the current chunk
        offset = index * chunk_size

        # Chunk id, starting from 1
        chunk_id = index + 1

        print('Start uploading the file')
        print("Chunk id:", chunk_id, "File offset:", offset, ", Current chunk size:", partSize)

        # Upload the current chunk
        self.__upload(offset, chunk_id, file_path, file_md5, filename, partSize, total_chunks_num)


def __upload(self, offset, chunk_id, file_path, file_md5, filename, partSize, total):
    """Upload one chunk of the file"""
    url = 'http://**/file/brust/upload'
    params = {'chunk': chunk_id,
              'fileMD5': file_md5,
              'fileName': filename,
              'partSize': partSize,
              'total': total
              }
    # Read partSize bytes starting from the offset and send them as the chunk
    current_file = open(file_path, 'rb')
    current_file.seek(offset)
    files = {'file': current_file.read(partSize)}
    resp = requests.post(url, params=params, files=files).text
    print(resp)
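As a quick sanity check of the arithmetic: with the preset chunk size of 2 MB (set in section 2-4) and a hypothetical 5 MB file, get_chunk_info gives total_chunks_num = ceil(5 / 2) = 3; the first two chunks are 2 MB each, and the last chunk's partSize is 5 MB % 2 MB = 1 MB.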

2-3 Merge files

Finally, call the merge API to combine the chunks back into the complete file.

import json


def merge_file(self, filepath):
    """Merge the chunks into the complete file"""
    url = 'http://**/file/brust/merge'
    file_total_size, total_chunks_num, filename, file_md5 = self.get_chunk_info(filepath)
    payload = json.dumps({
        "fileMD5": file_md5,
        "chunkTotal": total_chunks_num,
        "fileName": filename
    })
    print(payload)
    headers = {
        "Content-Type": "application/json"
    }
    resp = requests.post(url, headers=headers, data=payload).text
    print(resp)
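For illustration only, the payload printed before the merge request might look like this (all three values are made up and depend on the actual file being uploaded):

{"fileMD5": "0cc175b9c0f1b6a831c399e269772661", "chunkTotal": 3, "fileName": "HBuilder1.zip"}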

2-4 Parameterize the file path

To make concurrent execution possible, the file path is parameterized and passed in as a command-line argument.

# fileupload.py
...

if __name__ == '__main__':
    # File path, passed in from the command line
    filepath = sys.argv[1]

    # Preset chunk size: 2 MB
    chunk_size = 2 * 1024 * 1024

    fileApi = FileApi(chunk_size)

    # Slice the file and upload the chunks
    fileApi.do_chunk_and_upload(filepath)

    # Merge the chunks
    fileApi.merge_file(filepath)
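Before wiring the script into JMeter, you can sanity-check it from a terminal by passing a file path as the argument, for example (the path is illustrative):

python fileupload.py C:\Users\xingag\Desktop\HBuilder1.zip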

3. Executing concurrently with JMeter

Before creating the concurrent processes with JMeter, we need to write a batch script.

The batch script takes the file path as an argument and passes it on to the Python script.

# cmd.bat

@echo off
set filepath=%1

python  C:\Users\xingag\Desktop\rpc_demo\fileupload.py %*
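To confirm that the batch script forwards the path correctly, you can call it directly from a terminal with a single file path, for example:

cmd.bat C:\Users\xingag\Desktop\HBuilder2.zip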

Then, create a CSV file locally and write multiple file paths into it, one per line.

# Prepare multiple file paths (CSV)
C:\\Users\\xingag\\Desktop\\charles-proxy-4.6.1-win64.msi
C:\\Users\\xingag\\Desktop\\v2.0.pdf
C:\\Users\\xingag\\Desktop\\HBuilder1.zip
C:\\Users\\xingag\\Desktop\\HBuilder2.zip

With that in place, you can create the concurrent processes in JMeter.

The complete steps are as follows:

  • Create a test plan and add a thread group

    Set the number of threads in the thread group to match the number of file paths above

  • Add a "Synchronizing Timer" to the thread group

    Set "Number of Simulated Users to Group by" in the Synchronizing Timer to the same value as above

  • Add a "CSV Data Set Config"

    Point it at the CSV data file prepared above, set the file encoding to UTF-8, the variable name to file_path, and the sharing mode to "Current thread group"

  • Add a Debug Sampler for easier debugging

  • Add an "OS Process Sampler"

    Select the batch file created above and set the command-line parameter to "${file_path}"

  • Add a "View Results Tree" listener

4. Conclusion

Run the JMeter test plan created above, and you can see the results of the concurrent file uploads in the View Results Tree.
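As a side note, once the test plan has been saved, it can also be run without the JMeter GUI. A minimal sketch, assuming the plan is saved as upload_test.jmx (both file names are placeholders):

jmeter -n -t upload_test.jmx -l result.jtl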

Of course, we can increase the concurrency by modifying the CSV data source and the JMeter parameters to simulate real usage scenarios.

If you think this article is good, please like, share, and leave a comment, because that is the strongest motivation for me to keep producing high-quality articles!