This article is an earlier column on "Continuous Delivery Practices" by Professor Jiang Gangyi, a TesterHome community expert and special consultant to the Hogwarts Testing Institute. It is a classic worth rereading for anyone interested in continuous delivery systems and test platform development.

By November 2017, we had essentially completed the design and development of every link in our Jenkins Pipeline: compilation, deployment, code inspection, automated unit tests, automated interface tests, automated UI tests, specialized automated tests (APP-specific, performance, security, etc.), and release, and had integrated each in-house test platform with the Jenkins Pipeline. Of course, adoption inside the team still required constant pushing.
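For illustration, a minimal declarative Jenkinsfile for such a pipeline might look like the sketch below; the stage names and shell commands are placeholders rather than our actual configuration.

```groovy
// A minimal declarative Jenkinsfile sketching the links described above.
// All commands are placeholders; a real pipeline would call the in-house
// test platforms at each stage.
pipeline {
    agent any
    stages {
        stage('Compile')         { steps { sh 'mvn -B clean package' } }
        stage('Code Inspection') { steps { sh 'mvn sonar:sonar' } }
        stage('Unit Tests')      { steps { sh 'mvn test' } }
        stage('Deploy to Test')  { steps { sh './deploy.sh test' } }
        stage('Interface Tests') { steps { sh './run-api-tests.sh' } }
        stage('UI Tests')        { steps { sh './run-ui-tests.sh' } }
        stage('Release') {
            when { branch 'master' }   // release only from the mainline
            steps { sh './deploy.sh production' }
        }
    }
}
```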

At this point Jenkins's limitations became apparent. Jenkins is excellent at solving the delivery pipeline for a single application, but as business volume grew and the R&D team expanded rapidly, pain points piled up across every link of development: application management, code repository management, development and test environment management, release plan management, post-release monitoring, R&D quality and efficiency measurement, user-feedback and fault management, and so on. Any link left unattended seriously hurts R&D quality and efficiency.

Consider an engineer's daily routine: creating projects, setting up code repositories, applying for resources, branching code, joint testing, releasing to production, configuring monitoring, analyzing and summarizing quality and efficiency metrics, and so on. These activities live on different platforms and repeat every day; they demand constant communication with other teams and constant switching between R&D platforms and technology stacks. So we returned to a principle of continuous delivery: if something is painful, do it early and do it often. Distill standardized practices, adopt efficient automation, and use technology to solve the problems that give us headaches.

This time we needed to raise our thinking from the level of the test team to that of the whole R&D team, even the whole product team: not just how test engineers can test more efficiently, but how the entire team should work together to make every link of development standardized, automated, and transparent, solving problems with technology rather than a pile of management rules, and thus forming a complete R&D loop. Thinking about quality beyond the box of test execution, shifting left to safeguard code quality in development and shifting right to safeguard release quality in production, is in my view the key to keeping a test team competitive in the DevOps / continuous delivery era.

Three stages of a QA (EP) team

What follows is only a reflection based on personal experience and may not be fully mature. The stages need not be built in strict sequence, but the effect of each later stage depends strongly on the capabilities built in the earlier ones.

Google evolved from a QA team into an EP (Engineering Productivity) team, from a business-testing team into a team that builds quality and productivity platforms and enables testing. This process takes a long time (many companies may never see it through), but I believe it is the trend of the future.

  • Stage 1: test automation. Automate testing at the code layer, interface layer, and UI layer, plus specialized links (APP-specific, security, performance, etc.), and build an automated test platform.
  • Stage 2: delivery pipelining. Connect all delivery links, automated tests included, into a pipeline, and build a continuous delivery platform.
  • Stage 3: intelligent R&D collaboration. Beyond the pipeline itself, extend to code management, application management, resource management, release management, monitoring management, project management, cloud-based R&D tooling, and other R&D links, and provide a one-stop intelligent R&D collaboration platform that digs out efficiency in every link.

What a company's technical template for development, testing, and release looks like

Application-centric R&D collaboration

It is very important to establish a standard, application-centric application library at the company level: it is the cornerstone of all R&D collaboration. Application information is naturally friendly to technical staff; an application maps directly to a service, a code base, an environment, a pipeline, a monitoring job, and a set of quality and efficiency data.

We rely on the application dimension to string the whole R&D collaboration process together: code, resources, pipelines, monitoring, operations, faults, quality and efficiency metrics all revolve around the application, so that development, testing, operations, security, and other technical teams can each define their applications on their own platforms and connect with one another seamlessly.

With the standard library in place, applications gain a life of their own and lifecycle management. An application is no longer just a dry code name or label, but a collection of activities and a series of jobs. Creating or retiring one touches a whole chain of work links, and of course we want that chain automated.
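As a sketch, one entry in such a standard application library might carry fields like the following; the schema is my own illustration, not a fixed specification.

```groovy
// Illustrative shape of one record in the standard application library:
// everything in R&D collaboration hangs off this one entity.
class Application {
    String appId              // unique code name, e.g. 'order-service'
    String owner              // business line / technical lead
    String codeRepo           // exactly one code base
    String pipelineJob        // exactly one delivery pipeline
    List<String> environments // dev / test / staging / production
    List<String> monitors     // monitoring jobs bound to this app
}

def app = new Application(appId: 'order-service', owner: 'trade-team',
        codeRepo: 'git@git.example.com:trade/order-service.git',
        pipelineJob: 'order-service-pipeline',
        environments: ['dev', 'test', 'staging', 'prod'],
        monitors: ['dial-test', 'full-link'])
```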

Take adding and retiring applications as examples:

  • Application creation: submit basic application information -> review by the business line's technical lead -> code repository creation and initialization -> resource application
  • Application retirement: submit retirement request -> review by the business line's technical lead -> review by operations -> automatic freezing of the code base -> reclamation of pipeline jobs -> automatic resource reclamation -> automatic shutdown of monitoring, etc. (a sketch of chaining these steps follows the list)
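A hypothetical Groovy sketch of that retirement chain; the map-based services are stand-ins for the real Git, Jenkins, CMDB, and monitoring APIs, and the chain would run only after both reviews pass.

```groovy
// Stand-in services; a real platform would call Git, Jenkins, the CMDB
// and the monitoring system here.
def codeRepo   = [freeze : { id -> println "freezing code base of ${id}" }]
def pipelines  = [recycle: { id -> println "recycling pipeline jobs of ${id}" }]
def resources  = [release: { id -> println "releasing containers of ${id}" }]
def monitoring = [disable: { id -> println "disabling monitors of ${id}" }]

// The retirement chain, triggered once both reviews have passed.
def stopApplication = { String appId ->
    codeRepo.freeze(appId)
    pipelines.recycle(appId)
    resources.release(appId)
    monitoring.disable(appId)
}

stopApplication('order-service')
```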

Containerized resource management

Development, testing, pre-release, production: any moderately large Internet team passes through these stages before going live, and each stage corresponds to a set of environments. So we have at least a development environment, a test environment, a pre-release environment, and a production environment, and until we adopted containers their configurations, software packages, resource types, and so on were never consistent.

Multiply that by several product lines, hundreds of applications, N concurrent branches per application, and a stack spanning Java, NodeJS, PHP, C++, Android, iOS, middleware, and more.

Against this backdrop of microservices and branch-based development, the number of applications and branches explodes while services stay interdependent and coupled. The complexity of, and demand for, resource management rises sharply; the difficulty is no less than, and sometimes greater than, managing the production environment. Without sharp tools, environments become unstable, development and testing efficiency plummets, application teams block one another, and the whole project gets dragged down.

Fortunately, containers threw us a lifeline. Software package management, directory layout, baseline changes, operation scripts, and so on can all be standardized through a single Dockerfile. Through a distributed configuration center (such as Spring Cloud Config, Baidu's Disconf, Ctrip's Apollo, or Taobao's Diamond) we can manage configuration separately for each environment, essentially standardizing environments and sinking operations services into the platform. The idea of DevOps really took off from here.
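Whichever configuration center is chosen, the core idea is the same: one image everywhere, with configuration resolved per environment at startup. A generic sketch, with a made-up endpoint:

```groovy
// Same container image in every environment; only APP_ENV differs.
// The config-center endpoint below is a hypothetical example.
def env = System.getenv('APP_ENV') ?: 'dev'
def props = new Properties()
new URL("http://config.example.com/order-service/${env}.properties")
        .withInputStream { props.load(it) }

println "db.url for ${env} = ${props.getProperty('db.url')}"
```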

One-click provisioning of a containerized application environment (including standard components such as MySQL, Redis, MongoDB, etc.):
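A rough sketch of what the one click expands into: template a manifest for each component and apply it. The images and namespace are placeholders, and a real platform would call the K8S API directly rather than shell out to kubectl.

```groovy
// Components behind one "application environment" request; images and the
// (pre-existing) namespace are illustrative.
def components = [
    'order-service': 'registry.example.com/order-service:feature-x',
    'mysql'        : 'mysql:5.7',
    'redis'        : 'redis:4.0',
]

components.each { name, image ->
    def manifest = """
apiVersion: apps/v1
kind: Deployment
metadata: { name: ${name}, namespace: env-feature-x }
spec:
  replicas: 1
  selector: { matchLabels: { app: ${name} } }
  template:
    metadata: { labels: { app: ${name} } }
    spec: { containers: [ { name: ${name}, image: '${image}' } ] }
"""
    def proc = ['kubectl', 'apply', '-f', '-'].execute()
    proc.out << manifest          // feed the manifest on stdin
    proc.out.close()
    proc.waitFor()
}
```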

A particularly painful aspect of the R&D environment is the large number of parallel development branches per application. If everything is mixed into a single environment, the interdependence and interference between branches become overwhelming; if every branch gets its own independent environment (general branches, feature branches, emergency branches, and so on), maintaining that many environments is just as overwhelming.

A good practice here is to separate the stable environment from branch environments and to isolate branch environments from one another. A branch environment may be a development machine or a test server. Branches of applications that need to be tested against each other are isolated together in one environment, while dependencies without such a requirement connect directly to the stable environment.

Because we use Dubbo for distributed services and K8S for container management, this in practice meant quite a bit of modification to Dubbo and K8S themselves.
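Conceptually, the key modification is a routing rule: prefer a provider registered under the caller's branch tag, and fall back to the stable environment when the branch has none. A toy sketch, with a plain map standing in for Dubbo's real registry:

```groovy
// Providers per service, keyed by environment tag; addresses are made up.
def registry = [
    'order-service': [stable: '10.0.0.1:20880', 'feature-x': '10.1.0.7:20880'],
    'user-service' : [stable: '10.0.0.2:20880'],
]

// Prefer the caller's branch; fall back to the stable environment.
def route = { String service, String branch ->
    registry[service][branch] ?: registry[service]['stable']
}

assert route('order-service', 'feature-x') == '10.1.0.7:20880'
assert route('user-service',  'feature-x') == '10.0.0.2:20880'   // fallback
```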

Pipelining is the heart of DevOps

To achieve continuous delivery, the core is the Pipeline; to achieve the Pipeline, the key is automated testing.

How an automated pipeline and the small-batch flow of requirements enable frequent, short-cycle software delivery was covered in the earlier continuous delivery articles, so I will not repeat it here. By developing our own collaboration platform, we no longer rely entirely on Jenkins's cookie-cutter interface and layout: we use Jenkins as infrastructure rather than as the operational management interface, and that is the focus of integrating pipeline technology into the collaboration platform.
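Using Jenkins as infrastructure mostly means driving it through its REST API instead of its web UI. A minimal sketch; the host, job name, and credentials are placeholders, and a CSRF crumb may additionally be required depending on the Jenkins configuration.

```groovy
// Trigger a parameterized build over Jenkins's REST API.
def jenkinsUrl = 'https://jenkins.example.com'
def job        = 'order-service-pipeline'
def auth       = 'user:api-token'.bytes.encodeBase64().toString()

def conn = new URL("${jenkinsUrl}/job/${job}/buildWithParameters?BRANCH=feature-x")
        .openConnection()
conn.requestMethod = 'POST'
conn.setRequestProperty('Authorization', "Basic ${auth}")

// Jenkins answers 201 Created and queues the build.
assert conn.responseCode == 201
```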

Another important point: even with the Pipeline, we still need release plan management. Even in continuous delivery mode, a release is serious business that must be treated with great care. Beyond the Pipeline run results, things such as code review results, release windows, and manual inspection results all need to be automated and enforced as controls on the R&D collaboration platform.
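As an illustration, such a release gate reduces to a conjunction of automated checks; the checks below are simplified stand-ins for queries against the real platforms.

```groovy
// Simplified stand-ins for checks queried from other platforms.
def pipelineGreen   = { String app -> true /* last pipeline run SUCCESS? */ }
def codeReviewed    = { String app -> true /* review signed off? */ }
def inReleaseWindow = { -> (10..16).contains(Calendar.instance.get(Calendar.HOUR_OF_DAY)) }

// A release may proceed only when every gate passes.
def mayRelease = { String app ->
    pipelineGreen(app) && codeReviewed(app) && inReleaseWindow()
}

println mayRelease('order-service') ? 'release may proceed' : 'release blocked'
```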

One-stop monitoring and management

Around the application we built a multi-dimensional, multi-layered monitoring system, including but not limited to:

  • Dial-test monitoring: is the system having a problem?
  • Full-link monitoring: where exactly is the problem?
  • Public-opinion monitoring: what problems are users reporting?
  • Resource monitoring: are the hosts and network having problems?

What we need to do here is aggregate: script-based monitoring should be turned into pages, and page-based monitoring should be standardized. Filtering by application, we can see the whole monitoring dashboard at a glance, and unified per-application alarm settings establish a common response mechanism.

Take dial-test monitoring as an example:
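A dial test boils down to a scripted probe against key pages: request each URL on a schedule and alert on a bad status or a slow response. A toy sketch, with arbitrary URLs and thresholds:

```groovy
// Probe a few key pages; alert on non-200 status or responses over 2s.
def endpoints = ['https://www.example.com/', 'https://www.example.com/login']

endpoints.each { url ->
    def start = System.currentTimeMillis()
    int code
    try {
        def conn = new URL(url).openConnection()
        conn.connectTimeout = 3000
        conn.readTimeout    = 5000
        code = conn.responseCode
    } catch (IOException ignored) {
        code = -1                         // unreachable counts as a failure
    }
    long elapsed = System.currentTimeMillis() - start
    if (code != 200 || elapsed > 2000) {
        println "ALERT ${url}: status=${code}, latency=${elapsed}ms"
    }
}
```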

Measurement Management: DevOps dashboards

Peter Drucker: If you can’t measure it, you can’t improve it.

We should begin with the end in mind: going live is not the end of a project but the start of the next iteration, so establishing fast, continuous feedback is especially important. Building quality and efficiency dashboards from the right side of the process makes the whole delivery process more transparent, exposes bottlenecks, and drives continuous improvement. That is the core significance of measurement management.

Here are some common measurement points. Ideally they are presented systematically along the application dimension, rather than collected by hand while jumping from one internal platform to another. (A sketch of computing one of them follows the list.)

  • Project schedule and risk dashboard
  • Requirement completion rate
  • Project on-time rate
  • Static code analysis results
  • Pipeline execution frequency, duration, and success rate
  • Release execution frequency, duration, and success rate
  • Monitoring alarm frequency and trends
  • Online fault statistics, etc.
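To make one of these concrete: pipeline frequency, duration, and success rate can be pulled straight from the Jenkins JSON API. In this sketch the job URL is a placeholder and anonymous read access is assumed.

```groovy
import groovy.json.JsonSlurper

// Pull build results and durations for one job via the Jenkins JSON API.
def api = 'https://jenkins.example.com/job/order-service-pipeline' +
          '/api/json?tree=builds[result,duration]'
def builds = new JsonSlurper().parse(new URL(api)).builds

def finished = builds.findAll { it.result }       // skip in-flight builds
def success  = finished.count { it.result == 'SUCCESS' }
def avgSecs  = finished ? finished.sum { it.duration } / finished.size() / 1000 : 0

printf('builds: %d, success rate: %.1f%%, avg duration: %.1fs%n',
        finished.size(),
        finished ? 100.0 * success / finished.size() : 0.0,
        avgSecs as double)
```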

Internal cloud-based tool platform

As technical teams multiply, so do the tool platforms for development, testing, operations, security, and more, and every BU feels the urge to innovate in each technical direction, which inevitably produces large-scale duplicated construction and wasted resources.

So we began building an internal cloud of tool platforms, including platforms for the development tool chain, the test tool chain, the security tool chain, the operations tool chain, and so on, delivering the capabilities of each standardized technology platform to every business-line team and raising the team's overall quality and efficiency.

Afterword

From the Pipeline to the R&D collaboration platform, there were too many holes to fill.

Long as this article is, the many details behind it each required solving hard problems across the technology stack. Even now I feel there are many places to improve and many directions to extend.

What we do at this stage already goes far beyond the traditional scope of testing. We have to consider the overall architecture of the platform, code design, and interaction design, and even serve as our own visual designers; we have no full-time front-end developer (I had ways to bring in front-end resources, but preferred to push the team toward full-stack development). We have to integrate and debug against platforms large and small across the company, and then get the result adopted by the various technical teams.

But it was all worth it: the results of adoption within the team gave the whole development crew (very few people were involved full-time) the confidence and motivation to grow quickly.

(Article from Hogwarts Testing Institute)
