This article was first published on the author’s column: segmentfault.com/blog/camile

Version   Date        Note
1.0       2019.3.21   First published
1.1       2021.5.21   Title changed from "Back to automated testing -- what should we look out for when writing tests" to "Tip: What should we look out for when writing tests"

Background

Recently, a number of bugs surfaced during the testing phase of our project. When this first happened, the author was puzzled: every feature we merged into the repository came with a large set of seemingly well-considered test cases, yet reality still hit us in the face. So in this article, I’ll go over what I’ve learned recently and talk about what to look out for when writing tests.

AIR and BCDE principles

I read a book a while back that outlined some principles of unit testing:

  • At the macro level, unit tests should conform to the AIR principles
  • At the micro (code) level, unit tests should conform to the BCDE principles

The AIR principles

AIR literally spells “air”: unit tests are like air in that you can hardly feel their existence or value while the business code is running in production, yet they are critical to ensuring code quality. New code should be accompanied by new test cases, and changes to code logic should be accompanied by updated test cases that still execute successfully. The AIR principles are:

  • A: Automatic
  • I: Independent
  • R: Repeatable

To briefly explain the three principles:

  • Unit tests should be fully automated. Test cases are typically executed frequently, so execution must be fully automated to be meaningful.
  • If verifying a unit test’s output requires human inspection, the test is substandard. Manual verification via System.out printing is not allowed in unit tests; results must be checked with assertions (see the sketch below).
  • To keep unit tests stable, reliable, and easy to maintain, they must be independent. Test cases must not call each other and must not depend on execution order.
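
To make the three principles concrete, here is a minimal JUnit 4 sketch. The Calculator class is a hypothetical stand-in for real business code, included only so the example is self-contained:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CalculatorTest {

    // Hypothetical class under test, included only to keep the sketch self-contained.
    static class Calculator {
        int add(int a, int b) { return a + b; }
    }

    // A: the result is checked by an assertion, not printed via System.out
    // for a human to eyeball.
    @Test
    public void addReturnsSum() {
        Calculator calc = new Calculator();   // I: each test builds its own fixture
        assertEquals(3, calc.add(1, 2));
    }

    // I: this test does not call the one above, nor depend on it running first.
    // R: both tests are deterministic, so every run gives the same result.
    @Test
    public void addHandlesNegatives() {
        assertEquals(-1, new Calculator().add(1, -2));
    }
}
```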

The BCDE principles

When writing unit test cases, the BCDE principles should be followed to ensure the delivery quality of the module under test.

  • B: Border. Boundary-value testing, covering loop boundaries, special values, special points in time, data order, and so on.
  • C: Correct. Correct input should produce the expected result.
  • D: Design. Write unit tests in conjunction with the design documents.
  • E: Error. The goal of unit testing is to prove that a program has faults, not that it is fault-free. To uncover latent errors in the code, test cases must deliberately include bad input (illegal data, abnormal flows, input the business does not allow, etc.) and assert that the expected error results come back (see the sketch after this list).
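
As a minimal sketch of B, C, and E together, consider the hypothetical parsePercentage function below (JUnit 4; the function and its range rule are invented for illustration):

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class PercentageTest {

    // Hypothetical function under test: accepts "0".."100", rejects everything else.
    static int parsePercentage(String s) {
        int v = Integer.parseInt(s);          // throws NumberFormatException on garbage
        if (v < 0 || v > 100) {
            throw new IllegalArgumentException("out of range: " + v);
        }
        return v;
    }

    @Test
    public void correctCase() {               // C: normal input, expected result
        assertEquals(42, parsePercentage("42"));
    }

    @Test
    public void boundaryValues() {            // B: exercise both edges of the range
        assertEquals(0, parsePercentage("0"));
        assertEquals(100, parsePercentage("100"));
    }

    @Test(expected = IllegalArgumentException.class)
    public void errorCaseOutOfRange() {       // E: illegal input must fail loudly
        parsePercentage("101");
    }

    @Test(expected = NumberFormatException.class)
    public void errorCaseNotANumber() {       // E: malformed input must fail too
        parsePercentage("abc");
    }
}
```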

Practicing these principles in ZStack white-box integration testing

The principles above were formulated for unit testing, but they are also a valuable reference for ZStack’s white-box integration testing.

For an introduction to ZStack white-box integration testing, see: segmentfault.com/a/119000001…

Since ZStack’s entire test framework is built on JUnit extensions, it follows some of the AIR principles described above. Principle A holds, but principles I and R are somewhat compromised:

  • I: If a previous test does not clean up its state, the next test will be affected
  • R: As a consequence of the I problem above, repeatability is likely to be compromised as well

Of course, such problems usually indicate that the current code has a bug. Unit tests are not affected in this way: the bug gets detected, and the AIR principles remain guaranteed.
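
To illustrate how leaked state breaks I and R, here is a minimal sketch using a hypothetical in-memory inventory in place of the real database behind the test suite:

```java
import org.junit.After;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;

public class VmInventoryTest {

    // Hypothetical shared state, standing in for the database behind the test suite.
    static List<String> vms = new ArrayList<>();

    @Test
    public void createVm() {
        vms.add("vm-1");                      // would leak "vm-1" without cleanup
        assertEquals(1, vms.size());
    }

    @Test
    public void listVmsOnEmptyInventory() {
        // Without the cleanup below, this fails whenever createVm runs first:
        // I is broken, and since the outcome depends on ordering, so is R.
        assertEquals(0, vms.size());
    }

    @After
    public void cleanup() {
        vms.clear();                          // restoring state keeps I and R intact
    }
}
```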

In the examples that follow, we will use the VM creation API, APICreateVmInstanceMsg, as the test object. Readers who are not familiar with the context can take a quick look at this case: OneVmBasicLifeCycleCase

Border Test && Error Test

Boundary testing detects and verifies how the code handles extreme cases. Error testing ensures that ZStack behaves as expected in certain error states.

So how do we write such tests? Let’s start with a brief overview of the process for creating a VM:

  1. VmImageSelectBackupStorageFlow
  2. VmAllocateHostFlow
  3. VmAllocatePrimaryStorageFlow
  4. VmAllocateVolumeFlow
  5. VmAllocateNicFlow
  6. VmInstantiateResourcePreFlow
  7. VmCreateOnHypervisorFlow
  8. VmInstantiateResourcePostFlow

Each step can itself be broken down into several smaller steps. Take VmAllocateHostFlow as an example:

As the figure shows, allocateHost contains several more flows, depending on the allocation strategy. Thanks to this loosely coupled architecture, extreme situations are easy to simulate during testing, such as:

  • A suitable BackupStorage cannot be found
  • Insufficient HostCapacity
  • The agent returns a reply that is inconsistent with the management node’s state at some point in time
  • …

Similarly, with the eight flows for creating VMs listed above, all kinds of boundary and error conditions are easy to simulate, as the sketch below illustrates.
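
The following sketch shows the pattern in miniature. The Flow interface and runChain helper are deliberately simplified stand-ins invented for this example; ZStack’s real workflow engine is much richer, but the testing idea is the same: inject a failing flow and assert that the chain stops and rolls back.

```java
import org.junit.Test;
import static org.junit.Assert.assertFalse;

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

public class CreateVmErrorTest {

    // Deliberately simplified flow abstraction, invented for this sketch.
    interface Flow {
        void run() throws Exception;
        void rollback();
    }

    // Runs flows in order; on failure, rolls back the completed ones in reverse.
    static boolean runChain(List<Flow> flows) {
        Deque<Flow> done = new ArrayDeque<>();
        for (Flow f : flows) {
            try {
                f.run();
                done.push(f);
            } catch (Exception e) {
                while (!done.isEmpty()) {
                    done.pop().rollback();
                }
                return false;
            }
        }
        return true;
    }

    @Test
    public void allocateHostFailureFailsTheWholeChain() {
        Flow allocateHost = new Flow() {
            public void run() throws Exception {
                // Simulated error condition: insufficient HostCapacity.
                throw new Exception("no host with enough capacity");
            }
            public void rollback() { }
        };
        assertFalse(runChain(List.of(allocateHost)));
    }
}
```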

Correct Test && Design Test

Correctness testing sounds simple (call an API and check that the result comes back correctly), but in integration testing it raises some additional concerns. Using the createVm example again, there are eight flows, possibly with several sub-flows nested inside them, as shown in the figure:

When writing correctness tests, we can consider the following additional points (a sketch of point 4 follows the list):

  1. Is the APIParam passed between flows as expected?
  2. Pay attention to services inside the management node:
    • whether the timing of calls between flows is as expected
    • whether the business object’s state is as expected as it moves between flows
  3. Pay attention to services outside the management node:
    • whether the requests sent to the agent meet expectations
  4. Whether the resource reaches the expected target state after the API call
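
A minimal sketch of point 4, using hypothetical domain types in place of ZStack’s inventory objects (a real test would send APICreateVmInstanceMsg through the message bus instead):

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CreateVmCorrectnessTest {

    // Hypothetical domain types standing in for ZStack inventory objects.
    enum VmState { Running, Stopped }

    static class VmInstance {
        final String name;
        final VmState state;
        VmInstance(String name, VmState state) { this.name = name; this.state = state; }
    }

    // Hypothetical API call under test, invented for this sketch.
    static VmInstance createVm(String name) {
        return new VmInstance(name, VmState.Running);
    }

    @Test
    public void createdVmReachesExpectedTargetState() {
        VmInstance vm = createVm("test-vm");
        // Point 4 above: after the API call, the resource must be
        // in the expected target state.
        assertEquals("test-vm", vm.name);
        assertEquals(VmState.Running, vm.state);
    }
}
```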

Test cases written against the design documents should be defined by the team’s testers. It is worth noting that this type of testing focuses more on the API (that is, its inputs and outputs) than on internal state.