ZStack’s system test system runs test cases in a real hardware environment. Like the integration test system, it is fully automated, and it covers functional, stress, and performance tests.

An overview of the system test system

While integration test systems, as described in ZStack — Automated Test Systems 1: Integration Testing, are powerful enough to expose most flaws during development, they have inherent weaknesses. First, because the test cases use simulators, they cannot exercise real-world scenarios, such as creating a VM on a physical KVM host. Second, integration test cases focus on simple scenarios in simple, artificial environments; for example, the use case for creating a VM might deploy a minimal environment with just one host and one L3 network, merely to satisfy the requirements of creating a VM. These weaknesses are deliberate, however: we want developers to be able to write test cases quickly and easily as they develop new features, and that is a trade-off we have to accept.

System testing, which exercises the entire software in a real, complex environment, naturally complements integration testing. ZStack’s system test system is designed with two goals:

  1. Complex scenarios: these scenarios should be more complex than real-world usage in order to probe the software’s limits. For example, test cases that attach and detach disks should run continuously and repeatedly against a virtual machine, at a pace too fast for a human to reproduce manually.
  2. Easy-to-write, easy-to-maintain test cases: like the integration test system, the system test system takes over most of the tedious, repetitive work, allowing testers to write test cases efficiently.

The system test system is a Python project named ZStack-Woodpecker and consists of the following three parts:

  1. Testing framework: manages all test cases and provides the necessary libraries and tools.
  2. Environment deployment tool: deploys an environment from an XML configuration file; it is very similar to the integration test system’s deployer.
  3. Modular test cases: highly modular test cases covering functional, performance, and stress testing.

The testing framework

ZStack-Woodpecker was created by us from scratch. Before deciding to reinvent the wheel, we tried popular Python testing frameworks such as nose, and ultimately chose to build a new tool that would best serve our goals.

Suite configuration

Like most testing frameworks, a test suite in ZStack-Woodpecker starts with a suite setup and ends with a suite teardown, with the test cases in between. Suite setup and suite teardown are two special test cases: suite setup prepares the environment required by the subsequent test cases, and suite teardown cleans up the environment after all test cases have finished. A typical test suite configuration file looks like this:

<integrationTest>
    <suite name="basic test" setupCase="suite_setup.py" teardownCase="suite_teardown.py" parallel="8">
        <case timeout="120" repeat="10">test_create_vm.py</case>
        <case timeout="220">test_reboot_vm.py</case>
        <case timeout="200">test_add_volume.py</case>
        <case timeout="200">test_add_volume_reboot_vm.py</case>
        <case timeout="400">test_add_multi_volumes.py</case>
        <case timeout='600' repeat='2' noparallel='True'>resource/test_delete_l2.py</case>
    </suite>
</integrationTest>

The astute reader may have noticed some parameters that do not appear in other testing frameworks.

The first is timeout: each test case can define its own timeout, and if the case does not finish within that time, it is marked as timed out in the final result.

The second is repeat, which lets you specify in the test suite how many times a use case should be executed.

The third, and the killer one, is parallel, which lets the tester set the parallelism level of the suite; this is the key feature that makes ZStack-Woodpecker run test cases so fast. In the example above, parallel is set to 8, meaning up to eight use cases run at the same time. This not only speeds up the run, it also creates a complex scenario that simulates many users performing different tasks in a shared environment. Not every use case can be executed simultaneously, though; in our example, test_delete_l2.py deletes the L2 network that other use cases depend on, so it must not run while they are running. This is where the fourth parameter, noparallel, comes in: once it is set to true, that use case runs alone, with no other use cases executing at the same time.
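For readers curious how such scheduling might work, here is a minimal sketch in plain Python (an illustration, not ZStack-Woodpecker’s actual code) of a suite runner that honors timeout, repeat, parallel, and noparallel; the Case structure and result keys are assumptions made for the example:

import concurrent.futures
import subprocess
from dataclasses import dataclass

@dataclass
class Case:
    script: str
    timeout: int              # seconds; exceeding it marks the case as timed out
    repeat: int = 1           # run the case this many times
    noparallel: bool = False  # if True, run exclusively

def run_case(case):
    try:
        subprocess.run(['python', case.script], timeout=case.timeout, check=True)
        return 'PASS'
    except subprocess.TimeoutExpired:
        return 'TIMEOUT'
    except subprocess.CalledProcessError:
        return 'FAIL'

def run_suite(cases, parallel):
    results = {}
    with concurrent.futures.ThreadPoolExecutor(max_workers=parallel) as pool:
        pending = {}  # in-flight future -> result key

        def drain():
            # Wait for every in-flight case before continuing.
            for fut, key in pending.items():
                results[key] = fut.result()
            pending.clear()

        for case in cases:
            if case.noparallel:
                drain()  # nothing else may run alongside a noparallel case
                for i in range(case.repeat):
                    results['%s#%d' % (case.script, i)] = run_case(case)
            else:
                for i in range(case.repeat):
                    pending[pool.submit(run_case, case)] = '%s#%d' % (case.script, i)
        drain()
    return results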

Command line tool

zstest.py is a command-line tool that helps testers drive the testing framework, performing tasks such as starting test suites and listing test cases. It provides a wealth of options to simplify daily work; some of the most useful ones are shown below. Testers can list the available test cases with the -l option, for example:

./zstest.py -l

The output lists every available test case, prefixed with its test suite name, which is the name of the test case’s top-level folder; for example, many use cases start with basic (such as basic/test_reboot_vm.py), and basic is indeed the name of a test suite. The tester can start a suite with the -s option, passing either the full suite name or any unique prefix of it, for example ./zstest.py -s basic or:

./zstest.py -s ba

Testers can also selectively execute test cases by name or by ID with the -c option, for example:

./zstest.py -c 1,6

or

./zstest.py -c suite_setup.py,test_add_volume.py

Remember, you need to run the suite setup case, suite_setup.py, as the first use case, unless you have already done so. Since a test suite normally executes all test cases, cleans up the environment, and issues a result report, testers may sometimes want to stop the suite and keep the environment when a use case fails, so they can dig into the failure and debug; the -n and -S options are designed for this. -n tells the test framework not to clean up the environment, and -S skips the use cases that have not yet executed; for example:

./zstest.py -s virtualrouter -n -S

In addition, the -b option pulls the latest source code and builds a fresh zstack.war, which is particularly useful in nightly tests that are supposed to exercise the latest code:

./zstest.py -s virtualrouter -b

Once all the test cases have finished, a report is generated and printed to the screen. The test framework saves all logs and prints the absolute path of each failure log directly, if there is one. Beyond the general log, a special action log records every API call; because it contains nothing but API calls, we can easily find the root cause of a failure without being distracted by the test framework’s own logging. The action log is also an important tool for automatically generating a new use case that reproduces a failure, a magic weapon we use to debug model-based tests, in which each use case randomly executes various APIs. You can find the details in ZStack — Automated Test Systems 3: Model-Based Testing.

Environment deployment tool

As in integration testing, preparing the environment is a frequent, repetitive task for every test case; for example, a test case that creates a virtual machine requires many separate resources, such as a zone, a cluster, and a host. ZStack-Woodpecker calls zstack-cli, ZStack’s command-line tool, to deploy the test environment from an XML configuration file, for example: zstack-cli -d zstack-env.xml. The format of the XML configuration file is similar to the one used in integration tests; a fragment looks like this:

...
<zones>
    <zone name="$zoneName" description="Test">
      <clusters>
        <cluster name="$clusterName" description="Test"
          hypervisorType="$clusterHypervisorType">
          <hosts>
            <host name="$hostName" description="Test" managementIp="$hostIp"
              username="$hostUsername" password="$hostPassword" />
          </hosts>
          <primaryStorageRef>$nfsPrimaryStorageName</primaryStorageRef>
          <l2NetworkRef>$l2PublicNetworkName</l2NetworkRef>
          <l2NetworkRef>$l2ManagementNetworkName</l2NetworkRef>
          <l2NetworkRef>$l2NoVlanNetworkName1</l2NetworkRef>
          <l2NetworkRef>$l2NoVlanNetworkName2</l2NetworkRef>
          <l2NetworkRef>$l2VlanNetworkName1</l2NetworkRef>
          <l2NetworkRef>$l2VlanNetworkName2</l2NetworkRef>
        </cluster>
      </clusters>
      ...
      <l2Networks>
        <l2VlanNetwork name="$l2VlanNetworkName1" description="guest l2 vlan network"
          physicalInterface="$l2NetworkPhysicalInterface" vlan="$l2Vlan1">
          <l3Networks>
            <l3BasicNetwork name="$l3VlanNetworkName1" description = "guest test vlan network with DHCP DNS SNAT PortForwarding EIP and SecurityGroup" domain_name="$L3VlanNetworkDomainName1">
              <ipRange name="$vlanIpRangeName1" startIp="$vlanIpRangeStart1" endIp="$vlanIpRangeEnd1"
               gateway="$vlanIpRangeGateway1" netmask="$vlanIpRangeNetmask1"/>
              <dns>$DNSServer</dns>
              <networkService provider="VirtualRouter">
                <serviceType>DHCP</serviceType>
                <serviceType>DNS</serviceType>
                <serviceType>SNAT</serviceType>
                <serviceType>PortForwarding</serviceType>
                <serviceType>Eip</serviceType>
              </networkService>
              <networkService provider="SecurityGroup">
                <serviceType>SecurityGroup</serviceType>
              </networkService>
            </l3BasicNetwork>
          </l3Networks>
        </l2VlanNetwork>
...

The deployment tool is usually called by the suite setup before any use case runs. The tester can define variables in the XML configuration file, prefixed with the $ symbol, which are then resolved from a separate configuration file. In this way the XML file works like a template from which different environments can be generated. Here is an example of such a configuration file:

TEST_ROOT=/usr/local/zstack/root/
zstackPath = $TEST_ROOT/sanitytest/zstack.war
apachePath = $TEST_ROOT/apache-tomcat
zstackPropertiesPath = $TEST_ROOT/sanitytest/conf/zstack.properties
zstackTestAgentPkgPath = $TEST_ROOT/sanitytest/zstacktestagent.tar.gz
masterName = 192.168.0.201
DBUserName = root
node2Name = centos5
node2Ip = 192.168.0.209
node2UserName = root
node2Password = password
node1Name = 192.168.0.201
node1Ip = 192.168.0.201
node1UserName = root
node1Password = password
instanceOfferingName_s = small-vm
instanceOfferingMemory_s = 128M
instanceOfferingCpuNum_s = 1
instanceOfferingCpuSpeed_s = 512
virtualRouterOfferingName_s = virtual-router-vm
virtualRouterOfferingMemory_s = 512M
virtualRouterOfferingCpuNum_s = 2
virtualRouterOfferingCpuSpeed_s = 512
sftpBackupStorageName = sftp
sftpBackupStorageUrl = /export/backupStorage/sftp/
sftpBackupStorageUsername = root
sftpBackupStoragePassword = password
sftpBackupStorageHostname = 192.168.0.220
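The $-variable mechanism is plain string templating. As a rough illustration (not the deployer’s actual code), Python’s standard string.Template performs exactly this kind of substitution; the file paths in the usage comment are hypothetical:

from string import Template

def load_config(path):
    """Parse 'key = value' lines into a dict."""
    values = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and '=' in line:
                key, _, value = line.partition('=')
                values[key.strip()] = value.strip()
    return values

def render_env(template_path, config_path):
    """Fill the $variables of the XML template with values from the config."""
    values = load_config(config_path)
    with open(template_path) as f:
        # safe_substitute leaves unknown $variables untouched instead of failing
        return Template(f.read()).safe_substitute(values)

# e.g. xml = render_env('zstack-env.xml', 'deploy.cfg')  # hypothetical paths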

Note: as you might guess, administrators can also use this tool to deploy a cloud environment from an XML configuration file; and going the other way, administrators can dump a running cloud environment into an XML configuration file with zstack-cli as well.

For performance and stress testing, the environment typically needs a very large number of resources, such as 100 zones or 1,000 clusters. To avoid manually repeating 1,000 lines in the configuration file, we introduced a duplication attribute that creates duplicated resources. For example:

...
<zones>
      <zone name="$zoneName" description="10 same zones" duplication="100">
        <clusters>
          <cluster name="$clusterName_sim" description="10 same Simulator Clusters" duplication="10"
            hypervisorType="$clusterSimHypervisorType">
            <hosts>
              <host name="$hostName_sim" description="100 same simulator Test Host"
                managementIp="$hostIp_sim"
                cpuCapacity="$cpuCapacity" memoryCapacity="$memoryCapacity"
                duplication="100"/>
            </hosts>
            <primaryStorageRef>$simulatorPrimaryStorageName</primaryStorageRef>
            <l2NetworkRef>$l2PublicNetworkName</l2NetworkRef>
            <l2NetworkRef>$l2ManagementNetworkName</l2NetworkRef>
            <l2NetworkRef>$l2VlanNetworkName1</l2NetworkRef>
          </cluster>
...
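Below is a minimal sketch of how a deployer could expand the duplication attribute, assuming each copy gets a numeric suffix to keep names unique; this is an illustration of the idea, not the actual tool’s code:

import copy
import xml.etree.ElementTree as ET

def expand_duplication(parent):
    """Recursively replace each node carrying duplication="N" with N copies."""
    for child in list(parent):
        expand_duplication(child)  # expand nested duplications first
        count = int(child.attrib.pop('duplication', 1))
        if count > 1:
            index = list(parent).index(child)
            parent.remove(child)
            for i in range(count):
                clone = copy.deepcopy(child)
                # Suffix the name so the N copies stay distinct (assumed convention).
                if 'name' in clone.attrib:
                    clone.attrib['name'] = '%s-%d' % (clone.attrib['name'], i + 1)
                parent.insert(index + i, clone)

tree = ET.parse('zstack-env.xml')  # hypothetical path
expand_duplication(tree.getroot())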


Modular test cases

Test cases can be highly modular in system testing. Each use case essentially performs three steps:

  1. Create the resources to be tested
  2. Verify the results
  3. Clean up the environment

ZStack-Woodpecker itself provides a complete library that helps testers orchestrate these steps. The APIs are wrapped in a library that is automatically generated from the ZStack source code, so testers never need to write raw API calls. A checker has also been created for each resource to validate test results; for example, there are checkers for VMs and for data volumes. Testers can invoke these checkers to validate the resources they create without writing piles of code, and if the existing checkers do not cover a scenario, they can write their own checker and plug it into the testing framework (a sketch of such a checker follows the example below). A test case might look something like this:

# Imports as used by zstack-woodpecker test cases (module paths assumed
# from that project)
import os

import zstackwoodpecker.test_util as test_util
import zstackwoodpecker.test_lib as test_lib
import test_stub


def test():
    test_util.test_dsc('Create test vm and check')
    vm = test_stub.create_vlan_vm()
    test_util.test_dsc('Create volume and check')
    disk_offering = test_lib.lib_get_disk_offering_by_name(os.environ.get('rootDiskOfferingName'))
    volume_creation_option = test_util.VolumeOption()
    volume_creation_option.set_disk_offering_uuid(disk_offering.uuid)
    volume = test_stub.create_volume(volume_creation_option)
    volume.check()
    vm.check()
    test_util.test_dsc('Attach volume and check')
    volume.attach(vm)
    volume.check()
    test_util.test_dsc('Detach volume and check')
    volume.detach()
    volume.check()
    test_util.test_dsc('Delete volume and check')
    volume.delete()
    volume.check()
    vm.destroy()
    vm.check()
    test_util.test_pass('Create Data Volume for VM Test Success')
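For a scenario the stock checkers do not cover, a tester might write something like the following; the base class and the VM field path shown here are assumptions made for illustration, not ZStack-Woodpecker’s real plug-in API:

# Hypothetical sketch: the base class and the field path on the VM object
# are assumptions, not ZStack-Woodpecker's real API.
import socket

class Checker:
    def check(self, obj):
        raise NotImplementedError

class VmSshChecker(Checker):
    """Pass only if the test VM accepts TCP connections on port 22."""
    def check(self, vm):
        ip = vm.get_vm().vmNics[0].ip  # field path assumed
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(10)
            return s.connect_ex((ip, 22)) == 0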

As with integration testing, a tester can write a test case in just a dozen or so lines. Modularity not only simplifies writing test cases, it also lays a solid foundation for model-based testing, which we will discuss in detail in the next article.

Conclusion

In this article, we introduced our system test system. By performing tests that are more complex than real-world usage, system testing gives us far more confidence in how ZStack behaves in real hardware environments, and it has helped ZStack evolve quickly into a mature product.