Brief introduction: This article discusses a methodology that has not yet been put to the test: can we apply the theory of "growth hacking" to the improvement of the R&D process and achieve more reliable, targeted performance optimization?

Event tracking ("buried points"), as a conventional means of recording user behavior, has existed for as long as front-end technology has been evolving. It was not until the emergence of the "growth hacker" family of theories, however, that tracking analysis became rich in content and gained rules to follow. Just as "growth" is a perennial theme in the product world, "efficiency" is a perennial theme in research and development. As a software project progresses, a great deal of behavioral data related to the code base, pipelines, and tasks is also recorded in the form of events. Although the source of this data differs from page tracking events, its practical uses are highly analogous. Yet while product managers have begun to adjust marketing strategies in real time based on tracking data, TLs and PMs on R&D teams still rely mainly on after-the-fact analysis with statistical methods and summary indicators to review and evaluate team performance after each version and iteration, and talk endlessly about shortening iterations from a month to two weeks to get "faster feedback." This article discusses a methodology that has not yet been put to the test: can we apply the theory of "growth hacking" to the improvement of the R&D process and achieve more reliable, targeted performance optimization?

1. The North Star metric of the R&D team



Before setting growth goals, a team usually needs to identify a North Star metric that makes its success easy to observe, such as sales, sign-up rate, or active users. For an R&D team, the candidate key metrics include requirement lead time, feature defect rate, customer satisfaction, and so on. Take "requirement lead time" as an example: it is a relatively objective and direct measure of how quickly the development team responds to user needs, i.e., the average length of time from when a requirement is raised to when it is finally delivered and available for use.
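
A minimal sketch, assuming hypothetical requirement records with `raised` and `delivered` timestamps, of how such a lead-time metric could be computed:

```python
from datetime import datetime
from statistics import mean

# Hypothetical requirement records: when each requirement was raised
# and when it was delivered (None if not yet delivered).
requirements = [
    {"id": "REQ-1", "raised": datetime(2021, 3, 1), "delivered": datetime(2021, 3, 11)},
    {"id": "REQ-2", "raised": datetime(2021, 3, 2), "delivered": datetime(2021, 3, 12)},
    {"id": "REQ-3", "raised": datetime(2021, 3, 5), "delivered": None},  # still in progress
]

def average_lead_time_days(reqs):
    """Average requirement lead time in days, counting only delivered requirements."""
    durations = [
        (r["delivered"] - r["raised"]).days
        for r in reqs
        if r["delivered"] is not None
    ]
    return mean(durations) if durations else None

print(average_lead_time_days(requirements))  # -> 10
```

In practice, these timestamps would come from the requirement events recorded by whatever project tool the team uses.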



Next, we define a relatively idealized requirements delivery process and represent it by borrowing the "conversion funnel" structure from product traffic analysis:



Correspondingly, by aggregating all the requirements in the project, we can draw a "requirements delivery path diagram" like the one below (for illustration only; the actual phase breakdown would be richer):



It is a bit crude, but this presentation lets us recover a lot of information that is lost in the usual charts, which only count results. For example, consider two requirements that each took 10 days to complete, where one spent 7 days in development while the other spent only 1 day in development and the rest of the time waiting for testing: the difference is clearly visible in the delivery path diagram. Several other benefits of this approach include (a minimal sketch of such a path view follows the list):

  • Even if a requirement has not yet been delivered but has been blocked at some stage or sent back for rework, it shows up on the path diagram immediately as abnormal flow, drawing the attention of the TL and PM in time.

  • We can not only see the overall delivery progress at each stage at a glance, but also inspect every node a single requirement flows through and find other requirements in a similar state, making it easier to analyze their common characteristics.
  • For post-mortem analysis, deliverables can be traced backward, for example to see what proportion of the requirements that missed their deadline passed through each of the preceding nodes.
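
A minimal sketch of the idea behind the funnel and path views, assuming made-up stage names and a simple (requirement, stage, timestamp) event structure rather than any particular tool's data model:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical stage-entry events: (requirement id, stage entered, timestamp).
events = [
    ("REQ-1", "analysis",    datetime(2021, 3, 1)),
    ("REQ-1", "development", datetime(2021, 3, 2)),
    ("REQ-1", "testing",     datetime(2021, 3, 9)),
    ("REQ-1", "released",    datetime(2021, 3, 11)),
    ("REQ-2", "analysis",    datetime(2021, 3, 1)),
    ("REQ-2", "development", datetime(2021, 3, 2)),
    ("REQ-2", "testing",     datetime(2021, 3, 3)),
    ("REQ-2", "released",    datetime(2021, 3, 11)),
]

def stage_durations(events):
    """Days each requirement spent in each stage, derived from stage-entry events."""
    by_req = defaultdict(list)
    for req_id, stage, ts in events:
        by_req[req_id].append((ts, stage))
    breakdown = {}
    for req_id, entries in by_req.items():
        entries.sort()
        breakdown[req_id] = {
            stage: (entries[i + 1][0] - ts).days
            for i, (ts, stage) in enumerate(entries[:-1])
        }
    return breakdown

for req_id, stages in stage_durations(events).items():
    print(req_id, stages)
# REQ-1 {'analysis': 1, 'development': 7, 'testing': 2}
# REQ-2 {'analysis': 1, 'development': 1, 'testing': 8}
```

Counting how many requirements currently sit in each stage yields the funnel view, while each requirement's row is one path through the stages; the two example requirements reproduce the "7 days versus 1 day of development" case described above.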

Building on the above, we can derive an "efficiency hacker" model for the value conversion of R&D requirements (the counterpart of the growth hacker's AARRR model):



With a North Star metric and a visualized path in place, the next key step is to use the data to guide performance improvement.

2. A/B tests on the timeline

Not all customers are worth the effort of retaining, and growth teams always give priority to retaining high-value customers. Likewise, efficiency improvement should first identify differences and then prescribe targeted measures. Just like the "RFM model" that growth teams commonly use to segment customers, R&D requirements can be segmented along performance-related, roughly orthogonal dimensions into groups that call for different responses, for example with an "RIW model" (a minimal segmentation sketch follows the list):

  • R (Recency): how recently the requirement has been active (frequency of related events)
  • I (Importance): how important the requirement is (its priority, and the time remaining until its planned completion)
  • W (Workload): the amount of development effort associated with the requirement (e.g., lines of code changed)
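
A minimal sketch of such a segmentation, assuming made-up thresholds and field names; each requirement lands in one of the 2 × 2 × 2 = 8 segments:

```python
# Hypothetical requirement snapshot: recent event count, priority score,
# days left until planned completion, and lines of code changed so far.
requirements = [
    {"id": "REQ-1", "recent_events": 12, "priority": 3, "days_left": 2,  "loc_changed": 800},
    {"id": "REQ-2", "recent_events": 1,  "priority": 1, "days_left": 20, "loc_changed": 50},
]

def riw_segment(req, recency_min=5, importance_days=5, workload_loc=300):
    """Classify a requirement into one of the 8 RIW segments (thresholds are arbitrary)."""
    r = req["recent_events"] >= recency_min                           # recently active?
    i = req["priority"] >= 2 or req["days_left"] <= importance_days   # important or urgent?
    w = req["loc_changed"] >= workload_loc                            # large workload?
    return ("R" if r else "r") + ("I" if i else "i") + ("W" if w else "w")

for req in requirements:
    print(req["id"], riw_segment(req))
# REQ-1 RIW   (active, important, heavy -> watch closely)
# REQ-2 riw   (quiet, low priority, light -> routine handling)
```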

These three dimensions divide all the samples into 8 groups, a granularity coarse enough to highlight the key points while avoiding an excessive, distracting level of detail. The three attributes were chosen not only because they discriminate well, but also because their observed values are easy to obtain and can be updated frequently, so abnormal samples can be spotted and corrected in time during development.

Software development is mental work. The factors affecting development efficiency include, but are not limited to, individual developer ability, team atmosphere, corporate culture, schedule pressure, the degree of tacit understanding between members, external communication costs, and process tooling, and most of them are subjective factors that cannot simply be measured numerically. In the past, discussions of R&D effectiveness tended to focus, from the angle of technical control, on platform capabilities, R&D processes, tool support, and other "short-cycle, quick-win" measures, but the real-world toolbox for improving R&D effectiveness is much richer. One can use technical and engineering measures, such as speeding up builds, simplifying the release process, and improving release tooling; one can also use organizational and cultural measures, such as optimizing reward and penalty policies, setting up role models, adjusting staffing structure, improving employee benefits, strengthening skills training, and so on. So which is the best way to improve the effectiveness of an R&D team? Growth theory gave the answer long ago: it does not matter whether the cat is black or white, as long as it catches mice it is a good cat, so run the A/B test.

Unlike A/B testing on product users, it is not feasible during project development to run A/B tests at the granularity of individual requirements (it would be too disruptive to project management). Teams and iterations are more appropriate units of comparison. The concrete methods are familiar: have two project teams adopt different improvement strategies and compare the results, much like a "pilot" or a "show unit"; or have the same team try a new way of working in different iterations and keep or drop it depending on how well it works, which we might call "A/B testing on the timeline." So a new term has been coined, but as the alert reader will quickly notice, the idea is not new: isn't this exactly what iteration retrospectives and improvement meetings do? Not quite. The analytical data in previous iteration reviews were mainly burn-down charts and requirement/defect accumulation charts, which reflect overall trends; the "overall average" tends to mask local issues and falls short of the rigor an "A/B test" requires. The "requirements conversion path" and "RIW distribution" described above fill in exactly these process details for each iteration and give concrete guidance on how to improve efficiency. A minimal sketch of such an iteration-level comparison follows.
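
A minimal sketch of "A/B testing on the timeline", assuming made-up per-requirement lead times for two iterations that adopted different practices; in a real setting these numbers would come from the event data described earlier:

```python
from statistics import mean, median

# Hypothetical per-requirement lead times (in days) for two consecutive iterations:
# iteration A used the old process, iteration B trialled a new practice.
lead_times = {
    "iteration-A (old process)":  [10, 12, 9, 15, 11, 14],
    "iteration-B (new practice)": [8, 9, 7, 13, 8, 10],
}

for name, days in lead_times.items():
    print(f"{name}: mean={mean(days):.1f}d, median={median(days)}d, worst={max(days)}d")

# Comparing medians and worst cases, not just the overall mean, keeps local
# issues visible, which is the point of doing the comparison per iteration.
```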

3. Limitations of direct borrowing



In many ways, a growth strategy driven by tracking-data analysis and an efficiency strategy driven by R&D event analysis are similar, for example with respect to the four elements of a tracking event:



These four elements can all be mapped onto R&D events, so wherever a tracking-based solution exists, R&D events can cover the same ground. This is direct borrowing. But there is a catch: growth works on a relatively fixed population of users and pursues acquisition and retention, whereas efficiency faces ever-changing requirements and pursues timely delivery. Because the objects and goals of the analysis differ, essential differences remain:

  • Users leave and come back, and can be tracked continuously; a requirement, once completed, is finished, and next time a new set of requirements arrives. This is also why individual requirements are not a suitable granularity for A/B tests.
  • Acquisition and retention can never be too high and have no upper limit; delivery efficiency, by contrast, should not be pushed one-sidedly, otherwise the losses from sacrificing quality or relying on crunch to gain "efficiency" will outweigh the gains.
  • The page path is relatively fixed, for example the order page must be visited before the payment page; the requirements path is not necessarily so, for example a task whose development falls behind schedule may still end up being delivered on time.

In addition, tracked page clicks are always recorded accurately and in real time, whereas a requirement's state depends on manual updates that in practice may lag, so the collected event data often carry a time skew; this has long been a pain point of R&D data analysis. More automation is one remedy: for example, the new collaboration product in Aliyun Yunxiao supports rules that link R&D activities to task updates (such as a code commit triggering the start of a task, or a pipeline release marking it complete), which greatly helps the accuracy of effectiveness analysis; a hedged sketch of such a rule closes this section. In the end, even the effectiveness of direct borrowing has to be proven in practice.
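
A minimal sketch of what such a linkage rule could look like conceptually; the event names and handler below are hypothetical and do not represent Yunxiao's actual API or rule syntax:

```python
# Hypothetical mapping from R&D events to task state transitions,
# mimicking rules like "code commit starts the task" and
# "pipeline release completes the task".
RULES = {
    "code.commit.pushed":    "in_progress",
    "pipeline.release.done": "done",
}

def apply_rule(task, event_type):
    """Update a task's state automatically when a linked R&D event arrives."""
    new_state = RULES.get(event_type)
    if new_state:
        task["state"] = new_state
        task["state_updated_by"] = "automation"  # no manual update lag
    return task

task = {"id": "TASK-42", "state": "todo"}
task = apply_rule(task, "code.commit.pushed")
print(task)  # {'id': 'TASK-42', 'state': 'in_progress', 'state_updated_by': 'automation'}
```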

4. Outlook and summary

Growth and efficiency, two seemingly unrelated themes, are connected here by a thought experiment: run the technical team with product thinking, reconstruct the R&D process from event data, and locate the key bottlenecks with conversion paths. The "efficiency hacker" makes project progress more objective and the R&D process more transparent.
