I. Security testing

1. Security test method

Various testing techniques can be applied to security testing. At present, the main security testing methods are:

1) Static code security testing

Static code security testing scans the source code and matches its data flow, control flow, semantics, and other information against a dedicated software security rule base to find potential security vulnerabilities in the code.

Static source code security testing is very useful for identifying potential security risks during the coding phase, so that developers can address security issues early on. For this reason, static code testing is better suited to the early development phase than to the later testing phase.
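The rule-matching idea behind static scanning can be sketched in a few lines. Real static analyzers track data flow and semantics rather than matching text; the rule base below is a made-up illustration of the pattern-to-warning mapping.

```python
import re

# Hypothetical rule base: pattern -> description of the risk.
# Real SAST tools use data-flow and semantic analysis, not bare regexes;
# this only illustrates the matching step against a rule base.
RULES = {
    r"\bgets\s*\(": "gets() performs no bounds checking (buffer overflow)",
    r"\bstrcpy\s*\(": "strcpy() may overflow the destination buffer",
    r"\bsystem\s*\(": "system() with untrusted input enables command injection",
}

def scan_source(source: str) -> list:
    """Return (line_number, warning) pairs for lines matching a rule."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RULES.items():
            if re.search(pattern, line):
                findings.append((lineno, message))
    return findings

sample = 'int main() {\n  char buf[8];\n  gets(buf);\n  return 0;\n}'
findings = scan_source(sample)
```

Because the scan runs on source text alone, it can flag risks before the code is ever executed, which is exactly why it fits the coding phase.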

2) Dynamic penetration testing

Penetration testing is another common security testing method. It uses automated tools or manual techniques to simulate a hacker's input and attack the application system, in order to find the security vulnerabilities that exist at runtime.

This kind of test is realistic and effective: the problems it finds are generally real and often serious. However, a fatal shortcoming of penetration testing is that the simulated test data can only reach a limited number of test points, so coverage is very low.
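A minimal sketch of the simulated-attacker idea: feed hostile payloads to an input handler and record which ones it accepts. `handle_login` and the payload list are stand-ins invented for illustration, not a real target or attack corpus.

```python
# Hostile inputs a penetration test might simulate.
PAYLOADS = [
    "' OR '1'='1",                 # SQL injection probe
    "<script>alert(1)</script>",   # XSS probe
    "A" * 10_000,                  # oversized-input probe
]

def handle_login(username: str) -> bool:
    # Deliberately naive system under test: rejects only oversized input.
    return len(username) <= 64

def probe(handler) -> list:
    """Return the payloads the handler accepted, i.e. potential weak points."""
    return [p for p in PAYLOADS if handler(p)]

accepted = probe(handle_login)
```

Note the coverage limitation the text describes: the probe only exercises the payloads someone thought to include, so anything outside the list goes untested.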

3) Program data scanning

For software with high security requirements, data must not be corrupted at runtime; otherwise buffer-overflow-style attacks become possible. Data scanning is usually performed by memory testing, which can find vulnerabilities, such as buffer overflows, that are difficult to detect with other testing methods.

For example, while the software is running, its memory can be scanned for information that indicates hidden dangers. This usually requires special tools and is difficult to do manually.
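The memory-scanning idea can be illustrated with guard bytes ("canaries"): place known values around a fixed-size buffer, then scan them afterwards to see whether a write went out of bounds. Real memory testers work at a much lower level; the region layout and names here are invented for the sketch.

```python
# Guard-byte ("canary") sketch: known values bracket a fixed-size buffer.
CANARY = b"\xde\xad\xbe\xef"
BUF_SIZE = 8

def make_region() -> bytearray:
    return bytearray(CANARY + b"\x00" * BUF_SIZE + CANARY)

def write_buffer(region: bytearray, data: bytes) -> None:
    # Intentionally unchecked write, like an unsafe memcpy.
    region[len(CANARY):len(CANARY) + len(data)] = data

def scan(region: bytearray) -> bool:
    """Return True if both canaries are intact (no overflow detected)."""
    return bytes(region[:4]) == CANARY and bytes(region[-4:]) == CANARY

region = make_region()
write_buffer(region, b"12345678")      # exactly fills the 8-byte buffer
ok_after_safe = scan(region)
write_buffer(region, b"123456789ABC")  # 12 bytes into an 8-byte buffer
ok_after_overflow = scan(region)
```

The scan finds the corruption only after the fact, which matches the text: data scanning detects damage that other functional tests would miss entirely.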

2. Reverse security testing process

Most software security testing is carried out according to the defect-space reverse design principle: first determine in advance where security risks may exist, then test for those possible risks.

Therefore, the reverse test process starts from the defect space: establish a defect threat model, use the threat model to find intrusion points, and scan the known vulnerabilities at those intrusion points. The advantage is that known defects can be analyzed so that known defect types do not survive in the software; the disadvantage is that this process is usually powerless against unknown attack means and methods.

1) Establish defect threat model

Building a defect threat model starts from known security vulnerabilities and checks whether the software contains them. When establishing the model, first determine which professional domains the software involves, then model according to the attack methods encountered in each domain.
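A defect threat model of this kind can be represented as a simple mapping from each professional domain to the known attacks in it. The domain and attack names below are illustrative, not a complete catalogue.

```python
# Toy defect threat model: domain -> known attack patterns in that domain.
THREAT_MODEL = {
    "web": ["sql_injection", "xss", "csrf"],
    "crypto": ["weak_cipher", "predictable_key"],
    "file_io": ["path_traversal", "buffer_overflow"],
}

def threats_for(domains: list) -> set:
    """Collect the known attacks relevant to the domains this software uses."""
    relevant = set()
    for domain in domains:
        relevant.update(THREAT_MODEL.get(domain, []))
    return relevant

# A hypothetical product that involves the web and crypto domains.
threats = threats_for(["web", "crypto"])
```

The resulting set is exactly the list of known defect types the reverse process will later scan for at each intrusion point.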

2) Search for and scan for intrusion points

Check which defects in the threat model may occur in this software, and put the possible threats into an intrusion point matrix for management. If a mature vulnerability scanning tool is available, it can be used directly, and any suspicious findings are added to the intrusion point matrix as well.

3) Verification test of intrusion matrix

Once the intrusion matrix has been created, test cases can be designed for each specific entry in it, then executed and verified.
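As a sketch, the intrusion matrix and its per-entry test cases might be managed like this. All intrusion points, defect types, and status values are made up for illustration.

```python
# Intrusion point matrix: (intrusion_point, defect_type) -> status.
# Entries come either from the threat model ("suspected") or from a
# vulnerability scanner ("found_by_scanner").
intrusion_matrix = {
    ("login_form", "sql_injection"): "suspected",
    ("login_form", "xss"): "suspected",
    ("file_upload", "path_traversal"): "found_by_scanner",
}

def design_test_cases(matrix) -> list:
    """One verification test case per entry of the intrusion matrix."""
    return [
        f"Verify {defect} at {point} (status: {status})"
        for (point, defect), status in sorted(matrix.items())
    ]

cases = design_test_cases(intrusion_matrix)
```

Each matrix cell maps directly to one verification test, which is what makes the matrix a management tool rather than just a list of worries.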

3. Forward security test process

To avoid the test incompleteness introduced by the reverse-design principle, a forward testing method is needed to test the software thoroughly, so that the tested software can also resist unknown attack means and methods.

1) Identify the test space first

Identify all variable data in the test space, with emphasis on the external input layer; because security testing is expensive, effort should be concentrated where attacks actually enter.

For example, the test space should be identified during requirements analysis, outline design, detailed design, and coding, and a tracking matrix for the test space should be established.

2) Precisely define the design space

Focus on whether the design space is clearly defined in the requirements and whether each piece of data involved has an identified legal value range.

The most important thing at this step is precision: the design space should be defined accurately, in strict accordance with security principles.
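A precisely defined design space can be made executable as an explicit legal value range per field, checked at the external input layer. The field names and ranges below are assumptions for the sketch.

```python
# Design space as explicit legal ranges per field (illustrative values).
DESIGN_SPACE = {
    "age": range(0, 150),
    "username_len": range(1, 33),  # 1..32 characters
}

def validate(field: str, value: int) -> bool:
    """Accept a value only if it falls inside the declared legal range."""
    legal = DESIGN_SPACE.get(field)
    return legal is not None and value in legal

ok = validate("age", 42)
bad = validate("age", 1000)
unknown = validate("salary", 10)  # undeclared field: rejected by default
```

Rejecting undeclared fields by default is the "strict accordance with security principles" the text asks for: anything outside the defined design space is treated as illegal.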

3) Identify potential safety hazards

Based on the identified test spaces, design spaces, and the transformation rules between them, identify which spaces and which rules may carry security risks.

For example, the more complex the test space (the more finely it is partitioned, or the more composition relationships exist among its variable data), the less secure it is. Likewise, the more complex the conversion rules, the more likely a problem is, and that complexity is itself a security risk.
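This complexity-implies-risk heuristic can be turned into a crude ranking score. The weights and the module data below are arbitrary, chosen only to illustrate the ranking idea, not a validated risk metric.

```python
# Toy risk score: more partitions in the test space and more variables in the
# conversion rules => higher assumed security risk. Weights are arbitrary.
def risk_score(partitions: int, rule_variables: int) -> int:
    return partitions * 2 + rule_variables * 3

# Hypothetical modules: name -> (test-space partitions, rule variables).
spaces = {
    "config_parser": (2, 1),
    "query_builder": (8, 5),
    "logger": (1, 1),
}

ranked = sorted(spaces, key=lambda name: risk_score(*spaces[name]), reverse=True)
```

The ranking tells the tester where to spend the detailed test-case design effort first, which feeds directly into the next step.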

4) Establish and verify the intrusion matrix

After the security risks have been identified, the intrusion matrix can be built from them: list each potential security risk, identify the variable data it affects, and rate its severity. For variable data with high security risk, detailed test case design is mandatory.

4. Difference between forward and reverse testing

The forward testing process is based on finding defects and vulnerabilities in the test space.

The reverse testing process starts from the known defect space and checks whether the same defects and vulnerabilities occur in the software. Each approach has its advantages and disadvantages.

1) Forward testing

The advantage of the forward process is thorough testing, but the workload is relatively heavy. Therefore, for software with low security requirements, the reverse testing process is generally sufficient; for software with high security requirements, the forward process should dominate, supplemented by the reverse process.

2) Reverse testing

The main advantage of the reverse process is low cost: only the known possible defects need to be verified. The disadvantage is that testing is incomplete, cannot cover the test space fully, and cannot discover unknown attack means.

II. Common software security defects and vulnerabilities

Software security has many aspects. The main security problems are caused by software vulnerabilities. The following describes common software security defects and vulnerabilities.

1. Buffer overflow

Buffer overflows have become public enemy number one in software security; many practical security problems are related to them. There are two common causes of buffer overflow problems.

1) Lack of verification of transformation rules in the design space

That is, variable data is not verified, so illegal data is not detected and discarded at the external input layer. Once illegal data enters the interface layer or implementation layer, it exceeds the corresponding test space or design space of that layer and causes an overflow.

2) Insufficient partial test space and design space

Even when legitimate data enters, the program implementation layer may lack the corresponding test space or design space, and an overflow occurs while the program runs.
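Both causes come down to writing more data than the implementation layer reserved room for. As a sketch, a fixed-size C-style buffer (via Python's `ctypes`) with a guarded copy shows the size check that prevents the overflow; the buffer size and function name are assumptions for the example.

```python
import ctypes

# An 8-byte fixed buffer at the implementation layer; the guard rejects
# anything that would not fit (one byte is reserved for the NUL terminator).
BUF_SIZE = 8

def guarded_copy(data: bytes):
    """Copy data into a fixed-size buffer only if it fits; else reject."""
    if len(data) >= BUF_SIZE:
        return None  # would overflow: refuse instead of writing past the end
    buf = ctypes.create_string_buffer(BUF_SIZE)
    buf.value = data
    return buf.value

fits = guarded_copy(b"short")
too_big = guarded_copy(b"this is far too long")
```

The unguarded equivalent, copying without the length check, is precisely the implementation-layer gap the text describes.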

2. Encryption weaknesses

The following encryption weaknesses make a system insecure:

1) Using insecure encryption algorithms. The strength of the algorithm is insufficient; some algorithms can even be broken by exhaustive search (brute force).

2) When encrypting data, passwords are generated by a pseudorandom algorithm, and the method used to generate the pseudorandom numbers is flawed, making the passwords easy to crack.

3) There are defects in the authentication algorithm.

4) Client and server clocks are not synchronized, giving attackers enough time to crack passwords or modify data.

5) Encrypted data is not signed, allowing attackers to tamper with it. Therefore, when testing encryption, you must test for each of these possible weaknesses.
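Two of the weaknesses above can be demonstrated with the Python standard library: tokens built from the seedable `random` PRNG are reproducible (weakness 2), and a message authentication code makes tampering detectable (weakness 5). The key and token format are illustrative; real keys come from a key store.

```python
import hashlib
import hmac
import random
import secrets

# Weakness 2: `random` is a Mersenne Twister PRNG. It is seedable and its
# state is recoverable, so "secrets" built from it are predictable.
# `secrets` draws from the OS CSPRNG and is the documented secure choice.
def weak_token(n: int = 16) -> str:
    rng = random.Random(1234)  # fixed seed: every call yields the same token
    return "".join(rng.choice("0123456789abcdef") for _ in range(n))

def strong_token(n: int = 16) -> str:
    return secrets.token_hex(n // 2)  # n hex digits from the OS CSPRNG

# Weakness 5: encrypting without signing lets an attacker tamper undetected.
# A MAC over the message makes any modification fail verification.
KEY = b"shared-secret-key"  # illustrative only

def sign(message: bytes) -> bytes:
    return hmac.new(KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(sign(message), tag)  # constant-time compare

predictable = weak_token() == weak_token()
tag = sign(b"amount=100")
tamper_detected = not verify(b"amount=999", tag)
```

A security test for these weaknesses does exactly what the last two lines do: check whether the "random" values repeat, and whether a modified message still verifies.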

3. Error handling

Error handling generally returns some error information to the user, and that information can be exploited: by analyzing the returned messages, a malicious user can work out what to do next to make an attack succeed.

If error handling invokes functionality it should not, the error handling process itself can be exploited. Error handling is processing within the exception space, and processing in the exception space should be as simple as possible; following this principle avoids the problem.

However, error handling is often a usability issue, and if the error handling prompt is too simple, users may not know what to do next. Therefore, security for error handling needs to be weighed against ease of use.
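One common way to balance the two concerns is to log the detailed cause for operators while returning a single generic message to the user. The user names, messages, and credentials below are made up for the sketch.

```python
import logging

log = logging.getLogger("app")

def login(user: str, password: str) -> str:
    """Return the same generic message for every failure cause."""
    try:
        if user != "alice":
            raise KeyError(f"no such user: {user}")
        if password != "s3cret":
            raise ValueError(f"bad password for {user}")
        return "welcome"
    except (KeyError, ValueError) as exc:
        log.warning("login failure: %s", exc)  # detail stays server-side
        return "login failed"                  # attacker learns nothing extra

# An attacker cannot distinguish "unknown user" from "wrong password".
same_for_attacker = login("mallory", "x") == login("alice", "x")
```

The detailed reason still reaches the log for support staff, so usability is preserved where it matters without leaking attack hints to the client.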

4. Excessive permissions

If a user has too many permissions, a malicious user who should only have ordinary user permissions may use the excess permissions to perform security-sensitive operations.

For example, a lack of restrictions on what can be manipulated may result in access to resources beyond the specified scope. Security testing must check whether the application holds excessive permissions: analyze which permissions should be available in each situation, then verify whether the granted permissions exceed them. In essence, excessive permission is a problem of an oversized design space, so the design space must be kept tightly controlled during design to avoid it.
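The check the text describes, comparing permissions actually exercised against the permissions each situation should allow, can be sketched with explicit per-role allow-lists. The roles and operations are illustrative.

```python
# Least-privilege sketch: each role has an explicit allow-list; anything
# outside it is denied by default.
PERMISSIONS = {
    "viewer": {"read"},
    "editor": {"read", "write"},
    "admin": {"read", "write", "delete", "grant"},
}

def allowed(role: str, operation: str) -> bool:
    return operation in PERMISSIONS.get(role, set())

def audit(role: str, operations_in_use: set) -> set:
    """Report operations a role is exercising beyond its allow-list."""
    return operations_in_use - PERMISSIONS.get(role, set())

# A viewer observed performing a delete: the audit flags the excess.
excess = audit("viewer", {"read", "delete"})
```

A security test then simply asserts that `audit` returns an empty set for every role in every scenario; any non-empty result is an excessive-permission finding.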

III. Security testing recommendations

Experience from much software security testing shows that the necessary conditions for doing it well are:

1. Fully understand software security vulnerabilities

To evaluate the security of a software system, it is necessary to start from design, implementation and deployment. Let’s start by looking at how Common Criteria evaluates software system security.

First determine the Protection Profile (PP) corresponding to the software product. A PP defines a security feature template for a class of software products.

For example, there are PPs for databases and firewalls. Then, according to the PP, define the specific security functional requirements, such as how user identity authentication is implemented. Finally, determine the security target and how it satisfies the corresponding security functional requirements. A secure piece of software must get all three of these links right; none may have problems.

2. Evaluate the security testing

After security testing, can the software achieve the expected degree of security? This is the security tester's biggest concern, so a post-test security evaluation mechanism must be established. Evaluation is generally carried out from the following two aspects.

1) Evaluation based on security defect data

The more security defects and vulnerabilities testing finds, the more are likely still latent in the software. When conducting such assessments, baseline data must be established as a reference; without it, no sound conclusion can be drawn.

2) Vulnerability implantation method is adopted for evaluation

Vulnerability seeding is analogous to fault-insertion testing in reliability testing, except that what is inserted into the software are security problems. Designated personnel who do not take part in the security testing first implant a certain number of vulnerabilities; after testing, counting how many of the seeded vulnerabilities were found indicates whether the security testing was adequate. For example, if testers find 15 of 20 seeded vulnerabilities along with 30 real ones, the estimated total of real vulnerabilities is 30 × 20 / 15 = 40, suggesting about 10 remain undiscovered.

3. Adopt security testing techniques and tools

Professional security scanning software can be used to find potential vulnerabilities; defects that have occurred can be incorporated into a defect library, and automated testing can then replay the defect library against the software.
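The defect-library replay can be sketched as an automated regression check: every defect found in the past becomes a replayable probe, so old vulnerabilities cannot silently return. The library entries and the `open_file` target are made up for the sketch.

```python
# Defect library: each past finding becomes a replayable probe with an
# expected outcome (illustrative entries).
DEFECT_LIBRARY = [
    {"id": "DEF-001", "payload": "../../etc/passwd", "expect_rejected": True},
    {"id": "DEF-002", "payload": "report.pdf", "expect_rejected": False},
]

def open_file(path: str) -> bool:
    """Stand-in for the system under test: reject path traversal."""
    return ".." not in path  # True means the request is accepted

def replay(library) -> list:
    """Return ids of defects that regressed (wrong accept/reject outcome)."""
    failures = []
    for entry in library:
        rejected = not open_file(entry["payload"])
        if rejected != entry["expect_rejected"]:
            failures.append(entry["id"])
    return failures

regressions = replay(DEFECT_LIBRARY)
```

An empty result means every previously known defect is still correctly handled; any id in the list is a regression to investigate.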

For example, use software that simulates a variety of attacks.

Security testing is used to verify that the protection mechanisms built into the software can actually protect the system from illegal intrusion. As a popular saying goes: a software system must of course be able to withstand frontal attacks, but it must also be able to withstand attacks from the sides and from behind.