Summary

These JMeter notes cover various types of interface testing, testing tools, and more.

Course Content:

Performance testing concepts, performance metrics, and the use of performance testing tools: test plans, thread groups, HTTP requests, listeners, CSV files, regular expressions, script recording, third-party plug-ins, and monitoring of CPU and memory.

Course Highlights:

Be familiar with the common functions and components of the tool and be able to explain them fluently and accurately.

Course difficulties:

Tool usage.

1.1. What is performance testing

Performance testing simulates users sending requests based on a protocol, placing a certain load on the server, and checks whether the server's performance indicators meet the requirements. The indicators of concern are time performance and space performance. Performance testing is protocol-based and independent of the page (UI).

1.2. Performance test classification

Performance test definition: using automated testing tools to simulate normal, peak, and abnormal load conditions and test the system's various performance indicators.

1. Benchmark test: apply low pressure to the system, check its operating status, and record the data as a baseline reference.
2. Load test: keep increasing the pressure on the system (or extend the duration under a given pressure) until one or more performance indicators reach a safety-critical value, for example a resource reaches saturation.
3. Stress test: evaluate how the system runs at or above the expected load, focusing on its processing capability at peak load or beyond the maximum load.
4. Stability test (reliability test): run the system for a period of time under a certain business pressure to check whether it remains stable.
5. Concurrency test: check whether deadlocks or other performance problems occur when multiple users access the same application, module, or data record at the same time.

1.3. Performance test indicators (barber shop analogy)

A barber shop has 3 barbers, and one haircut takes 30 minutes. Different numbers of customers enter the shop at the same time and ask for a haircut. Considering how the haircut time varies, how many customers arrive at the same time, and how long each customer spends being served, the indicator of interest is how many haircuts are completed within one hour.

1.4. Number of users

1. Registered users: the users registered in the software. They are potential users of the system and may come online at any time. This metric gives the test engineer a sense of the total amount of data in the system and of the possible upper bound on concurrent users.
2. Online users: the users logged in to the system at a given time. Online users are not necessarily performing operations, so they do not necessarily put pressure on the server.
3. Concurrent users: different from online users, concurrent users are the online users who are sending requests to the server at a given time; this is an important indicator of the server's concurrent processing and synchronization capability. In the broad sense, it is the number of users sending the same or different requests to the server at the same time for the same or different business; in the narrow sense, it is limited to users sending the same request for the same business at the same time.

1.5. Transaction response time

A transaction is the set of operations a user performs on the client to complete one or more business actions. The response time of a transaction measures how long these operations take; in performance testing it is usually obtained as the difference between the transaction's end time and start time. A transaction represents the process "user sends a request -> web server receives and processes the request -> web server fetches data from the DB -> the user's object (page) is generated and returned to the user". Response time is generally measured per transaction.
1.6. Hits per second

Hits per second is the number of HTTP requests received by the web server per second and is a common measure of server processing capability. Note that a hit is not a single mouse click: a single click on the client may send multiple HTTP requests to the server. Do not confuse the two.

1.7. Throughput rate

The throughput rate usually refers to the number of bytes returned by the server per unit of time, or the number of requests submitted by clients per unit of time. It is an important indicator of the load capacity of a large web system: generally, the larger the throughput rate, the more data is processed per unit of time and the stronger the system's load capacity. Throughput is affected by many factors, such as server hardware configuration, network bandwidth and topology, and the software architecture.

1.8. Business success rate

The success rate of operations initiated by multiple users for a given business. For example, when testing the concurrent processing capability of an online booking system, the requirement might be to support 100,000 bookings during the half-hour peak from 8:00 to 8:30 in the morning, with a success rate of no less than 98%; that means at most 2,000 bookings are allowed to time out or fail for other reasons.

1.9. TPS

TPS is the number of transactions processed by the server per second and is a very important indicator of system processing capability. In performance testing, the inflection point of the system's processing capability can be estimated by observing TPS under different numbers of users.

1.10. Resource usage

Resource usage refers to how much of each system resource is consumed, for example: CPU usage 70%-80%; memory usage below 80%; network bandwidth 100 Mbit/s = 12.5 MB/s.

1.11. Requirement analysis

Test objects: the commonly used core business flows with large data volumes and high concurrency, such as registration, login, search, add to cart, place order, and pay. Performance indicators: for example, a specified business volume must be completed within 8 hours. Test scenarios: single scenario (a single function tested alone) and mixed scenario (a combination of functions, multiple businesses tested together).

1.12. Test plan

Test objectives, staffing, and schedule. Load generator selection: configuration, requirements, quantity, and risks.

1.13. Test solution

Test tools: JMeter, LoadRunner. Test environment: database, servers, and architecture design, kept as consistent with the production (online) environment as possible. Test strategy: single scenario (test a single function) and mixed scenario (multi-function combination, multi-business test). Monitoring tool: perfmon.exe on Windows.

1.14. Use case design

1.15. Test execution

1.16. Locating and analyzing problems

Front end and back end: code, software, hardware, network.
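To make the metrics above concrete, here is a minimal sketch, with made-up sample data and an assumed 10-second measurement window, of how hits per second, throughput, success rate, and TPS can be computed from recorded requests (not JMeter code, just illustrative Python):

```python
# Minimal sketch (hypothetical data): computing a few of the metrics above
# from a list of completed requests recorded during a 10-second test window.
# Each sample: (elapsed_seconds, response_bytes, success_flag)
samples = [
    (0.21, 5_120, True),
    (0.35, 4_800, True),
    (0.18, 5_000, False),
    (0.40, 5_300, True),
]
test_duration_s = 10.0  # assumed length of the measurement window

hits_per_second = len(samples) / test_duration_s                     # requests per second
throughput_bytes = sum(b for _, b, _ in samples) / test_duration_s   # bytes returned per second
success_rate = sum(ok for _, _, ok in samples) / len(samples)        # business success rate
tps = sum(ok for _, _, ok in samples) / test_duration_s              # successful transactions per second

print(f"hits/sec={hits_per_second:.2f}, throughput={throughput_bytes:.0f} B/s, "
      f"success rate={success_rate:.0%}, TPS={tps:.2f}")
```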

2. Introduction to JMeter

JMeter is an open-source, free, cross-platform performance testing tool. Its main components are:

1. Test plan: the starting point of a JMeter test and the container for all other JMeter test components.
2. Thread group: represents a number of concurrent users and is used to simulate concurrent users sending requests. The actual request content is defined in samplers, which are contained by the thread group. It is added via Test Plan -> Add -> Threads (Users) -> Thread Group. The thread group panel has several input fields: number of threads, ramp-up period (in seconds), and loop count. The ramp-up period is the time within which all threads are started; with 8 threads and a ramp-up of 200 seconds, the interval between thread starts is 200/8 = 25 seconds, which avoids putting too much load on the server at the very beginning. Thread groups are designed to simulate concurrent load.
3. Sampler: simulates various requests. All actual testing work is done by samplers, and there are multiple request types, for example HTTP and FTP requests.
4. Listener: collects test results and determines how they are displayed; it shows and summarizes data from sampler requests (throughput, KB/sec, and so on).
5. Assertion: judges whether the response to a request is correct and as expected. It can be used to isolate problem areas, that is, to verify functional correctness while running a stress test. This constraint is very useful for effective testing.
6. Timer: defines the delay interval between requests (threads) to simulate continuous requests to the server.
7. Logic controller: customizes the behavior logic with which JMeter sends requests; combined with samplers it can simulate complex request sequences.
8. Configuration element: maintains the configuration information required by samplers and modifies request content according to actual needs.
9. Pre-processor and post-processor: do work before and after a request is generated. A pre-processor is often used to modify request settings, and a post-processor is often used to process response data.
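To make the containment relationships concrete, here is a rough sketch of how these components nest inside a test plan, expressed as plain Python data (illustrative only, not JMeter's .jmx format; names and values are hypothetical):

```python
# Rough sketch of how JMeter components nest inside a test plan.
# Illustrative Python data only, not JMeter's .jmx format.
test_plan = {
    "name": "Example test plan",
    "thread_groups": [
        {
            "name": "Users",
            "threads": 8,          # number of concurrent virtual users
            "ramp_up_s": 200,      # all 8 threads started within 200 s (one every 25 s)
            "loops": 10,           # each thread repeats its samplers 10 times
            "samplers": [
                {"type": "HTTP Request", "method": "GET", "path": "/index"},
            ],
            "listeners": ["Aggregate Report", "View Results Tree"],
            "assertions": ["Response Assertion"],
            "timers": ["Constant Timer"],
        }
    ],
}
```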

3. Using the JMeter tool

1. What is a thread group?

Process: a program that is being executed. Thread: one execution path within a process; a process has multiple threads. Thread group: threads grouped according to their nature. The three are related as follows: a process has multiple thread groups, and a thread group has multiple threads. Concurrent execution: multiple threads run at the same time; characteristically, the order in which they finish does not match the order in which they started. By default, the threads in a thread group execute concurrently, and each thread executes the HTTP requests inside the group. Sequential execution: to make thread groups run in order, check the "Run Thread Groups consecutively" option on the test plan (run each thread group independently).

1.1.2. The thread group has three main parameters: number of threads, ramp-up period (in seconds), and loop count (a worked example follows this list).

1.1.3. Number of threads: the number of virtual users. Each virtual user occupies one process or thread, so setting the number of virtual users means setting the number of threads.

1.1.4. Ramp-up period (seconds): the time within which all the specified virtual users are started. If the number of threads is 20 and the ramp-up period is 10, it takes 10 seconds to start the 20 threads, i.e. 2 threads per second.

1.1.5. Loop count: the number of requests sent per thread. If the number of threads is 20 and the loop count is 100, each thread sends 100 requests, so the total number of requests is 20 x 100 = 2000. If "Forever" is checked, all threads keep sending requests until the script is stopped manually.

1.1.6. Scheduler: sets the start time and end time of the thread group. (When using the scheduler, the loop count must be set to "Forever".)

1.1.7. Duration (seconds): the test duration; it overrides the end time.

1.1.8. Startup delay (seconds): the delay before the test starts; it overrides the start time.

1.1.9. Start time: the time the test starts; it is overridden by the startup delay, and if the start time has already passed, the current time is used instead (when starting the test manually).

1.1.10. End time: the time the test ends; it is overridden by the duration.
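A tiny sketch of the arithmetic described above, using the same example numbers (20 threads, 10 s ramp-up, 100 loops), assuming each loop iteration sends exactly one HTTP request:

```python
# Thread group arithmetic from the example above: 20 threads, 10 s ramp-up, 100 loops.
threads = 20
ramp_up_s = 10
loops = 100

threads_per_second = threads / ramp_up_s   # 2.0 threads started per second
start_interval_s = ramp_up_s / threads     # a new thread every 0.5 s
total_requests = threads * loops           # 20 * 100 = 2000 requests in total
# (assuming each loop iteration sends exactly one HTTP request)

print(threads_per_second, start_interval_s, total_requests)
```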

2. HTTP request

An HTTP Request sampler has a number of configuration parameters, described below:

- Name: identifies the sampler; a meaningful name is recommended.
- Comment: has no effect on the test; it only records human-readable notes.
- Server name or IP: the name or IP address of the server the HTTP request is sent to.
- Port number: the port of the target server.
- Method: the HTTP method used to send the request, including GET, POST, HEAD, PUT, OPTIONS, TRACE, and DELETE.
- Content encoding: the content encoding; the default is ISO-8859.
- Path: the target URL path (excluding the server address and port).
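To relate these fields to actual HTTP traffic, here is a rough Python equivalent of what a single HTTP Request sampler does (the host, port, path, and parameter are hypothetical; this is not JMeter code):

```python
# Rough Python equivalent of one HTTP Request sampler (hypothetical host/path).
# Server name, port, method, and path map directly onto the sampler fields above.
import requests

server = "www.example.com"   # "Server name or IP"
port = 80                    # "Port number"
method = "GET"               # "Method"
path = "/search"             # "Path" (no server address or port)

url = f"http://{server}:{port}{path}"
response = requests.request(method, url, params={"q": "jmeter"}, timeout=10)

print(response.status_code, len(response.content), "bytes")
```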

3. Listeners

Listeners collect test results and determine how they are displayed. Common listeners include the aggregate report, View Results Tree, View Results in Table, and writing result data to a file. The remaining listeners can simply be added and explored as needed.

4. Aggregate reports

- Label: the name of each HTTP request; for example, an HTTP request named "baidu" is shown as baidu.
- Samples: the total number of requests sent in this test; in the figure above, the sougou and baidu HTTP requests each sent 30 requests.
- Average: the average response time of all requests, i.e. the total response time of the 30 requests divided by 30. By default this is the average per request, but when a transaction controller is used it shows the average response time of the transaction.
- Median: the median response time (50% of users have a response time at or below this value).
- 90% Line: the response time within which 90% of users are served.
- Min: the minimum response time.
- Max: the maximum response time.
- Error %: the proportion of failed requests; in the figure above, 66.6% of the sougou HTTP requests failed while the baidu requests had no errors.
- Throughput: by default, the number of requests completed per second; 6.6 and 6.2 per second respectively in the figure above.
- Received KB/sec: the amount of data received from the server per second, in kilobytes.

Assertions (checkpoints): an assertion is used to judge whether the response to a request is correct and as expected. It can be used to isolate problem areas, that is, to verify functional correctness while running a stress test. This constraint is very useful for effective testing.
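A small sketch, with made-up response times in milliseconds and an assumed 5-second run, of how the main aggregate report columns can be computed from raw sample results (illustrative only; JMeter computes these for you):

```python
# Computing aggregate-report-style columns from made-up sample results.
# Each sample: (elapsed_ms, success_flag) for one HTTP request.
import statistics

samples = [(120, True), (95, True), (210, False), (150, True), (130, True),
           (400, False), (110, True), (90, True), (170, True), (140, True)]
test_duration_s = 5.0  # assumed wall-clock length of the run

elapsed = sorted(ms for ms, _ in samples)
average = sum(elapsed) / len(elapsed)                      # "Average"
median = statistics.median(elapsed)                        # "Median"
line_90 = elapsed[int(0.9 * len(elapsed)) - 1]             # rough "90% Line"
error_pct = 100 * sum(not ok for _, ok in samples) / len(samples)  # "Error %"
throughput = len(samples) / test_duration_s                # "Throughput" (requests/sec)

print(f"avg={average:.1f} ms, median={median} ms, 90%={line_90} ms, "
      f"err={error_pct:.1f}%, throughput={throughput:.1f}/s")
```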

5. JMeter component parameterization (the key point among key points)

5.1. What is parameterization? Getting and setting data dynamically.

5.2. Why parameterize? For batch operations such as batch adds and batch deletes, doing the work by hand is too inefficient; using a program instead of manual work to get and set data is safer and more efficient. For example: parameterize the user names and passwords of the system under test to simulate multiple users logging in to the system at the same time.

5.3. CSV Data Set Config

- Filename: the path of the data file. If it is in the same directory as the script, only the file name needs to be entered.
- File encoding: must match the encoding of the file; the default is ANSI, and UTF-8 is common.
- Variable Names: the variable names used to reference the data, one per column in the data file, separated by commas.
- Delimiter: the column separator used in the .txt or .dat file, usually a comma (,) or a Tab (\t).
- Allow quoted data: whether quoted values are allowed; set it to True when handling full-width characters, otherwise they may come out garbled.
- Recycle on EOF: whether to re-read the data file from the beginning when the end is reached. When set to True and there are more threads or iterations than data rows, reading continues from the first line after the last line has been read. When set to False and the file continues to be read past its end, the variable value defaults to <EOF>; this value can be changed with the JMeter property csvdataset.eofstring.
- Stop thread on EOF: if Recycle on EOF is set to False and Stop thread on EOF is set to True, the thread stops after the last line of the data file has been read, regardless of how many iterations remain.
- Sharing mode: how the file is shared; by default it is shared by all threads, and it can optionally be scoped to each thread group or each thread.
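A minimal sketch of the behaviour CSV Data Set Config provides, assuming a hypothetical users.csv with username,password columns: each iteration takes the next row, and with recycling enabled the file wraps around at EOF (plain Python, not JMeter configuration):

```python
# Minimal sketch of CSV-style parameterization (hypothetical users.csv
# containing rows like "alice,secret1"), mimicking CSV Data Set Config:
# each iteration consumes the next row; recycle=True wraps around at EOF.
import csv
import itertools

def load_rows(path, variable_names=("username", "password"), delimiter=","):
    with open(path, encoding="utf-8") as f:
        return [dict(zip(variable_names, row)) for row in csv.reader(f, delimiter=delimiter)]

rows = load_rows("users.csv")
recycle_on_eof = True

iterator = itertools.cycle(rows) if recycle_on_eof else iter(rows)

for _ in range(5):  # 5 simulated iterations (e.g. 5 login attempts)
    user = next(iterator, {"username": "<EOF>", "password": "<EOF>"})  # <EOF> when not recycling
    print("logging in as", user["username"])
```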

6. JMeter script recording

Scripts can be recorded in two ways: with the BadBoy recording tool, or with JMeter's own proxy server.

7. Regular expressions

When using JMeter I ran into a problem with user session correlation. As we know, when a website has a login function, logging in returns a session ID; it is unique, represents a single session, and changes with every login. When a script recorded with BadBoy was imported into JMeter, the session had already expired by the time the thread group executed: the session value baked into the JMeter script was a fixed value, so the login failed. The way I handle this is to capture the session at login time by adding a post-processor, the Regular Expression Extractor, which extracts the session into a variable; that variable can then be referenced in the subsequent requests. This solves the problem nicely.
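Here is a small sketch of the same correlation idea outside JMeter, assuming a hypothetical login endpoint and a session ID delivered in a Set-Cookie header: log in, pull the session ID out of the response with a regular expression, then reuse it on the next request. This mirrors what the Regular Expression Extractor does with a reference name, a regular expression such as sessionid=(\w+), and a match number.

```python
# Sketch of session correlation (hypothetical login endpoint and response
# format): extract the session ID from the login response with a regular
# expression, then reuse it in later requests instead of a hard-coded value,
# mirroring JMeter's Regular Expression Extractor post-processor.
import re
import requests

BASE = "http://www.example.com"          # hypothetical system under test

login_resp = requests.post(f"{BASE}/login",
                           data={"username": "alice", "password": "secret"},
                           timeout=10)

# Equivalent of the extractor's "Regular Expression" field, e.g. sessionid=(\w+)
match = re.search(r"sessionid=(\w+)", login_resp.headers.get("Set-Cookie", ""))
session_id = match.group(1) if match else None   # the extracted variable

# Reuse the extracted value in a follow-up request instead of a fixed one.
order_resp = requests.get(f"{BASE}/orders",
                          cookies={"sessionid": session_id or ""},
                          timeout=10)
print(order_resp.status_code)
```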