This article is based on a talk given by Zhenwei Wang, R&D Director of CODING, at the Tencent Cloud CIF Project Effectiveness Summit.

At the end of this article, you will find a link to the Summit's official website, where you can watch the replay and download the slides.

Hi, my name is Zhenwei Wang, and I am the R&D Director of CODING. I am very glad to share with you the thinking behind our recent product upgrades and to introduce the major new products that have come out of CODING's strategy upgrade.

First, let’s take a look at CODING’s full product matrix. CODING has made significant product updates in the past year: a large number of new products have been launched and several existing products have been upgraded. This chart covers almost all of CODING’s important product modules and functions. We focus on managing the software development phase, from team collaboration to project planning, from code collaboration to build and integration, and from test management to artifact management, deployment, and release. Beyond deployment and release, we connect fully to IaaS and PaaS, so customers who use CODING together with the cloud can form a full-link cloud native system.

Over the past year, CODING has continued to focus on its customers and on making development easier, massively enhancing existing products and creating some very exciting new products for the cloud native industry. I will highlight some of these new products and major upgrades in the following sections.

Project Collaboration 2.0 — Order

We all want things to happen at an orderly pace. Software development is highly complex and requires the collaboration of many different professionals: product managers, back-end engineers, test engineers, front-end engineers, Agile coaches, UI designers, and so on, with roles varying from company to company. But if you simply put these people together without a well-defined communication mechanism, their collaboration will quickly get messy.

Project collaboration for developer teams has always been a top priority in CODING's products, and the Project Collaboration team has just completed a major 2.0 upgrade to keep collaboration organized. Here I’ll briefly introduce a few important features of Project Collaboration 2.0.

The first is the addition of custom collaboration modes. There is an overwhelming number of project management tools to choose from, but traditional project management software is usually built around a particular methodology, such as Agile or waterfall, and designed in depth for that specific scenario. This approach clearly lacks flexibility: for most enterprises, internal work cannot be accurately classified as just “requirements”, “tasks”, or “defects”.

For example, a financial firm may need to manage the concept of “risk”. A start-up with no fully formed requirements for a new product may need a concept called “conception” or “imagination”. A software enterprise may have a special category of tasks called “delivery” or “implementation”. As you can see, different enterprises have no way to express their business models using only the concepts built into traditional project management software.

Whether it’s “risk”, “conception”, or “delivery”, Project Collaboration 2.0 can handle it flexibly. With custom collaboration modes, you can not only move beyond the Agile and waterfall models but also model work items according to the enterprise’s actual business. Custom concepts like “risk” support not only regular attribute types such as text, numbers, and checkboxes; Project Collaboration 2.0 also supports rich linking of internal information. For example, you can set the author of a “conception” to be a member of the project team, or even set its expected implementation period to span several iterations. Combining these custom concepts and custom attributes with custom processes, an enterprise can fully establish its own project collaboration workflow.

The status of work items can also be customized. Enterprises can define any status for different types of work items according to their own situation, such as product review, design review, architecture review, in development, tested, ready to go online, or released. If a new concept such as “risk” is defined, it can be given its own states: discovered, identified, evaluated, processed, eliminated, and so on. When these items move between states, you can define the transfer rules and restrict certain transitions to certain member types. For example, only testers may move a task from “in development” to “tested”, which brings great flexibility and control to the enterprise’s workflow.
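The idea of role-restricted state transitions can be sketched in a few lines. This is a minimal illustration, not CODING's implementation; the state and role names are assumptions taken from the examples above.

```python
# A minimal sketch of role-restricted state transitions for custom
# work-item types (e.g. "task" or "risk"). Names are illustrative.

class WorkItem:
    def __init__(self, item_type, state):
        self.item_type = item_type
        self.state = state

# Transition rules: (from_state, to_state) -> roles allowed to perform it.
TRANSITIONS = {
    ("developing", "tested"): {"tester"},
    ("tested", "to_go_online"): {"tester", "release_manager"},
    ("discovered", "identified"): {"risk_owner"},
}

def transfer(item, to_state, actor_role):
    """Move a work item to a new state if the actor's role permits it."""
    allowed = TRANSITIONS.get((item.state, to_state))
    if allowed is None:
        raise ValueError(f"no rule for {item.state} -> {to_state}")
    if actor_role not in allowed:
        raise PermissionError(f"{actor_role} may not perform this transition")
    item.state = to_state
    return item
```

A real process engine would load such rules from the enterprise's custom configuration rather than hard-coding them.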

The second is the new project set capability that Project Collaboration 2.0 brings. A project set is used to manage the progress of cross-project work above the level of a single project. As shown, requirements in a project set can be broken down into different projects across different teams, and can be centrally managed and tracked in the project set view.

The picture below shows a typical project set page. Unlike a project, a project set has a clear time limit and a number of milestones. The progress of the whole project set can be seen at a glance, and the progress of its tasks is clearly recorded across the individual projects. From the perspective of the project set's coordinator, such transparent information and clear schedule planning can effectively promote cross-team collaboration and break down departmental silos.

As time is limited, I’ll stop the introduction of Project Collaboration 2.0 here and leave the remaining features for you to explore. We believe that highly flexible attribute and process configuration, clear and intuitive information display, and transparent rule-based flow make collaboration orderly. The full feature set of Project Collaboration 2.0 is already available on the CODING public cloud (coding.net). Customers and partners interested in private deployment can contact us.

Continuous Deployment 2.0 — Handy

Releasing server software has always been a difficult and complex problem, and the industry has produced many approaches to it. From our perspective, this process is the key link between Dev and Ops in DevOps. The reality, however, is that developers and operators have different priorities: developers want fast delivery, while operators care more about stability and reliability. These goals are hard to reconcile, but they can be reconciled by reshaping the roles: operators focus on operational infrastructure, while developers take on business operations. Give developers the authority to release, and to view applications and key information about the target environment, and give them responsibility for the stability of the corresponding business. This is how releasing really comes into its own.

With this goal in mind, CODING Continuous Deployment 2.0 has been upgraded across the board, starting with the application console. With development on the left and operations on the right, we want the power and responsibility for business operations to shift fully to development, that is, operations shifting left. Developers and operations staff think differently: developers think in terms of applications, while operations staff think in terms of resources. A feature that lets developers take over the operation of their application services must therefore be application-centered.

From the application console, the development team can easily see the release history, environment list, monitoring status, and logs of the applications they care about. Developers no longer need to ask operations staff to query logs or add alert rules, which lets operations focus on resources and facilities while developers focus on applications.

Software that has been developed and tested must be released before it can truly serve users, and with release comes risk. While the vast majority of releases are updates to the program itself, in reality many releases also involve hard-to-control database changes and configuration changes, which are often overlooked. In the past, almost all continuous deployment tools and implementation teams focused only on changes to the application itself, so actual release scenarios still required a great deal of human intervention. CODING Continuous Deployment 2.0 introduces the concept of multi-dimensional releases, which integrates the externally visible parts of a release, such as programs, databases, and configuration items, into one organic whole, taking over the release process and truly achieving full automation.

Another important feature is our implementation of the GitOps concept. GitOps has become a very popular operations practice over the past two years, and since deployment is part of operations, GitOps naturally applies to deployment as well. GitOps is based on Infrastructure as Code (IaC): the infrastructure of the target environment and the desired state of the program are described as code and stored in a Git repository. Changes to the target environment are then no longer made directly by operations staff; instead, the code in the Git repository is modified, and the GitOps system automatically synchronizes it with the target environment. Kubernetes-based applications make GitOps easy to practice, because Kubernetes is designed around declarative definitions and end-state control, which pairs naturally with a Git repository of YAML definitions.

CODING currently provides comprehensive capabilities for practicing GitOps: manage IaC source files with a CODING repository, review environment changes with CODING merge requests, and synchronize source code with the environment using CODING continuous deployment.
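The core GitOps loop described above can be illustrated with a toy reconciler. This is a conceptual sketch, not any real tool's code: a plain dict stands in for the parsed YAML manifests in Git, and another dict stands in for the live environment.

```python
# A toy reconciliation loop illustrating the GitOps idea: the desired
# state lives in a Git repository (here, a dict standing in for parsed
# YAML manifests) and a controller converges the actual environment
# toward it. All names here are illustrative assumptions.

def diff_states(desired, actual):
    """Return the changes needed to make `actual` match `desired`."""
    changes = []
    for name, spec in desired.items():
        if actual.get(name) != spec:
            changes.append(("apply", name, spec))
    for name in actual:
        if name not in desired:
            changes.append(("delete", name, None))
    return changes

def reconcile(desired, actual):
    """Apply each change so the environment matches the Git-declared state."""
    for op, name, spec in diff_states(desired, actual):
        if op == "apply":
            actual[name] = spec   # stand-in for `kubectl apply`
        else:
            del actual[name]      # stand-in for deleting the resource
    return actual
```

In a real GitOps system such loops run continuously, so drift introduced by hand is also corrected back to the state declared in Git.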

CODING Continuous Deployment 2.0 is now open for trial applications from early users; visit the CIF release page to learn about and experience the new product.

Nocalhost — Simplify

As mentioned earlier, releasing server software is complex, and cloud native, the set of concepts meant to solve server software architecture problems, is itself quite complex at this stage. This complexity extends beyond infrastructure and operations into the development and coding phase. Applications built on a microservices architecture are often composed of dozens or hundreds of microservices, each with a clear division of program logic and maintenance ownership. However, such a loose organization poses great challenges for development-time self-testing.

CODING launched Nocalhost, an open source cloud native development environment, in late 2020. We hope that in the cloud native era, developers can enjoy a microservice coding experience as simple and natural as developing a stand-alone application. Today, the development experience for cloud-based microservices is terrible. Developers are unable, and unwilling, to run the whole set of microservices; without a complete development and testing environment, self-testing and debugging are hard to carry out. Running all the microservices consumes a lot of resources and is difficult to maintain, and a local setup differs so much from the actual container environment that it often causes problems later. This major upgrade of Nocalhost further reduces the complexity of cloud native development in the areas of debugging and environment preparation, turning complexity into simplicity.

Developers who have used Nocalhost know that the microservice currently being developed runs in a remote container cluster. To a large extent, this guarantees consistency between the self-testing environment and the final target environment, and greatly saves local computing resources. But in this setup, it is not easy to debug a process running in a remote container: the source code is on the local computer while the process is in the remote container, and debugging requires a complex set of network connections and IDE configurations. The Nocalhost update adds a one-click debug feature: the developer clicks debug on the corresponding microservice, waits a few seconds, and the IDE enters debug mode.

As shown in the figure below, the left side shows the running target page and the right side shows the microservice’s source code. The developer only needs to set a breakpoint on the corresponding line of code and trigger the request from the web side. When the breakpoint is hit, the developer can freely control the debugging process, stepping through statements and evaluating variables. Everything is handled by Nocalhost.

Another painful part of the development experience is waiting for services to start. For example, after writing a piece of code, you click restart and wait for startup to complete before refreshing the page to see the result. The restart process of some services is measured in minutes, and when you suddenly realize you’ve misspelled a single letter and have to restart and wait several more minutes to see the result, it can be devastating.

Many languages and frameworks provide hot-reload capabilities, but the local source code is so far removed from the process in the cloud container that these capabilities are largely unavailable there. Nocalhost again removes this complexity: no intricate configuration or understanding of the underlying principles is needed for hot reload to genuinely improve the coding experience. The developer finds the corresponding service in Nocalhost, right-clicks remote launch, waits for it to start once, and then happily writes code. All the developer has to do is write code, save the file, and refresh the page to see the result.

In large-scale microservices applications, the development phase faces a difficult trade-off. The best way to keep developers coding happily is to give every developer a complete development environment for the whole microservices application, which wastes computing resources and is expensive. To save computing resources, you can instead provide a few fixed, shared development and testing environments, but when multiple people develop, debug, and test in the same environment at the same time, the environment becomes chaotic: developers’ self-testing rhythm is seriously disrupted, and overall efficiency drops sharply.

Nocalhost’s major upgrade introduces logical environments built on top of a base environment: microservice components are logically partitioned into virtual environments, saving resources while isolating developers from each other. As shown in the figure below, suppose an application consists of five microservices, A through E. We run these microservices with their functionality and versions kept stable, as the base space. A developer who wants to work on service A and doesn’t care about B, C, D, or E can be given a logical space, a dedicated development space, that contains only an A1 service for development and testing while reusing B, C, D, and E from the base space. If another developer also needs to work on service A, an A2 service can be created in a second dedicated development space, again reusing the other services from the base space. If yet another developer or group needs to work on both services C and D, they simply share A, B, and E. In this way, each developer effectively has a complete set of all five microservices, without each of them actually running the full set. The mechanics behind this are complex and not easy to implement, but Nocalhost makes it simple: all you need to do is specify the base environment and the microservice components your development space needs.

As shown in the figure below, normal traffic is marked blue and is processed by the base space services before returning to the caller. Traffic belonging exclusively to the developer (Xiaoming) is marked green, routed to Xiaoming’s dedicated development space, and returned to the caller after processing. This coloring and traffic scheduling is implemented with the help of tracing and a service mesh, but users can set it up without knowing the details.
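The routing decision behind traffic coloring can be sketched simply. This is an illustration of the general idea, not Nocalhost's implementation; the header name and endpoint names are assumptions.

```python
# A simplified sketch of header-based traffic "coloring": requests
# carrying a dev-space tag are routed to that developer's private copy
# of a service, and everything else falls through to the base space.
# The header name and space/endpoint names are illustrative.

BASE_SPACE = {"A": "A.base", "B": "B.base", "C": "C.base"}
DEV_SPACES = {
    "xiaoming": {"A": "A.xiaoming"},  # Xiaoming overrides only service A
}

def route(service, headers):
    """Pick the endpoint for a service call, honoring the color tag."""
    color = headers.get("x-dev-space")
    if color in DEV_SPACES and service in DEV_SPACES[color]:
        return DEV_SPACES[color][service]
    return BASE_SPACE[service]
```

In a real service mesh the tag would be propagated along the whole call chain (which is where tracing comes in), so every hop makes the same decision consistently.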

Nocalhost is a fully open source product and is planned to be donated to a leading open source foundation this year to benefit the whole industry. You can find documentation for all of the features described above on the Nocalhost website (nocalhost.dev).

R&D Metrics — Clarity

The world we live in is experiencing a huge wave of digitalization, from industry to agriculture, from research to production, from consumption to entertainment. Technology centered on computers and software is measuring the entire world with data. Yet the production process of software itself remains poorly digitized: the development process is opaque, stage progress cannot be quantified, success or failure is hard to attribute with numbers, and bottleneck analysis is hard to carry out. Software development is engineering, and mature engineering should not look like this. We believe a mature engineering process should be measurable, quantifiable, traceable, and analyzable. When the data in the software development process is not clear enough, all of the above problems follow, and the process ultimately becomes hard to manage.

CODING has launched a new R&D Metrics product to make R&D data clear and visible.

CODING focuses on managing the software development process and improving its efficiency. To make the data in the development process clear, data must be collected across the whole link. We classified and collected key data indicators from five stages: work items, development and coding, testing and verification, build and integration, and release and deployment, and designed the key data items based on extensive industry research and case analysis. Capturing key data across the whole link ensures the measurement system covers the essentials, while its high openness and extensibility allow individual items to be explored in depth.
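To make this concrete, here is an illustrative calculation of one common measurement derived from collected stage data: cycle time, the elapsed time from when a work item enters progress until it is finished. The field names are assumptions for the sketch, not CODING's actual data schema.

```python
# An illustrative cycle-time calculation over collected work-item data.
# Field names ("started_at", "finished_at") are assumed for the sketch.

from datetime import datetime

def cycle_time_hours(item):
    """Hours from when an item started to when it finished."""
    start = datetime.fromisoformat(item["started_at"])
    end = datetime.fromisoformat(item["finished_at"])
    return (end - start).total_seconds() / 3600

def average_cycle_time(items):
    """Mean cycle time across a set of completed work items."""
    return sum(cycle_time_hours(i) for i in items) / len(items)
```

Real measurement products compute dozens of such indicators and slice them by project, time window, and person; the principle is the same: consistent timestamps collected across the whole link.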

Simplicity has always been an important feature of CODING. There are dozens or even hundreds of data items across the five stages, divided into multiple dimensions such as project, time window, and personnel. These data are complicated, but R&D Metrics can be configured in a minimalist way, and most of the data can be generated with one click using recommended settings. As shown in the figure below, for issues such as work planning and staffing, R&D Metrics also provides a person-oriented workload saturation view. Even with multi-person collaboration, parallel tasks, and a mix of routine and urgent work, each person's work schedule can be clearly understood. Building this capability necessarily requires comprehensive data collection.

Based on a large amount of data and many case studies, we have distilled a maturity model for the digital depth of the R&D process and team R&D effectiveness, which is built directly into CODING R&D Metrics. By paying attention to these key indicators, users can understand their own efficiency maturity, learn from others' strengths, optimize their own processes and capabilities, and ultimately achieve twice the result with half the effort.

R&D Metrics is now fully available on the CODING public cloud, where users can experience it immediately. Customers and partners with private deployment needs can contact us.

Compass — Flowing

After analyzing the vast amount of R&D process data collected by R&D Metrics, we kept asking ourselves: what is the ultimate form of software engineering? We used to draw analogies between software engineering and construction, between software engineering and intelligent manufacturing, between software engineering and scientific research, and many of us embraced ideas like “The Beauty of Programming” that treat programming as an art. Our conclusion is that these are all somewhat similar, but none is exactly right.

After all, software engineering is still engineering and should have the key characteristics of engineering, such as process, schedule, quality, and risk. On top of that engineering foundation, software adds its own characteristics, such as parallel collaboration and iterative updates.

Every software development team has its own circumstances, but all of them want disciplined processes and smooth collaboration. CODING has launched Compass, which fully integrates CODING’s insights into software engineering, best practices for process standards, and core value delivery. We believe software engineering needs rules and standards, and those rules must not only be optimal at individual points but also efficient from a global perspective. Compass points the direction for software engineering, letting R&D flow smoothly.

As the answer to our thinking about the ultimate form of software engineering, we named this full-link stream of standards enforcement and value delivery Compass. As the name suggests, we hope software developers can be guided by Compass rather than relying on colleagues or leaders to arrange their work. In a well-run software development process, the system tells people what to do based on standards and processes, instead of people blindly searching for direction.

Process is at the heart of Compass. One of the first tasks for a development team establishing effective development practices is to define the process by which the team develops and delivers software. Compass’s process engine is highly flexible, orchestrating everything from project work items, to code branch merge rules, to source quality standards, to build and test red lines, to artifact storage structures, and finally to the release and delivery process.

Compass lets you define the whole R&D link as a process; once defined, it becomes very easy to have members of the R&D team follow the process and comply with the standards. **We believe that a clear process gives clear expectations and results in structured, precise measurements.** Each role in the R&D team is given a Todo List, automatically calculated by Compass from the execution nodes of the process, and each member simply keeps completing the items on the list. The whole process then flows automatically.

For example, when a developer turns on his computer in the morning, Compass’s Todo List clearly states that he has three coding tasks, two code reviews, and one release approval to deal with. He can fully understand the upstream state of each task from the system, and also see the impact of his completion time on downstream work. With such clear guidance, as if a compass were always pointing the way, everything is driven forward automatically by the system. In this sense, the software production process is almost transformed into a manufacturing assembly line: each worker simply waits for the output of the previous step, processes it, and passes it downstream for further processing.
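How a process engine might derive such a Todo List can be sketched as a dependency check: a task is "ready" once all its upstream steps are done, and it lands on the list of whoever holds that task's role. Task names and roles here are made up for illustration; this is not Compass's actual engine.

```python
# A minimal sketch of computing per-role Todo Lists from process nodes.
# A task is ready when every prerequisite is done; it appears on the
# list of the role that owns it. All names are illustrative.

TASKS = {
    "code_feature":    {"role": "developer", "needs": []},
    "review_merge":    {"role": "reviewer",  "needs": ["code_feature"]},
    "run_tests":       {"role": "tester",    "needs": ["review_merge"]},
    "approve_release": {"role": "developer", "needs": ["run_tests"]},
}

def todo_list(done, role):
    """Tasks ready for `role`: prerequisites done, task itself not done."""
    return [
        name for name, spec in TASKS.items()
        if spec["role"] == role
        and name not in done
        and all(dep in done for dep in spec["needs"])
    ]
```

As tasks complete, the `done` set grows and the next items flow onto the right lists automatically, which is the "machine pushes the person" effect described above.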

Because the process drives people to move work forward, the start and completion of each stage are recorded by the system, and all the data is reported to the R&D measurement system, allowing managers to easily perform bottleneck analysis. For example, if you find that too much time was spent on testing in the past month, you can analyze carefully whether the cause is a lack of manpower, outdated testing tools, or slack personnel.

A structured, streamlined, digitized, and standardized R&D process produces a large amount of real data, and that data in turn is used to optimize the rules and process settings, forming a positive feedback loop. The processes and standards themselves can then iterate like a product, so the enterprise’s R&D efficiency genuinely and steadily improves. In this sense, Compass extends the agile, iterative approach from software product development to the management system itself.

Because each specific matter has specific norms and standards, operator behavior can be constrained intuitively and conveniently, which strongly supports core technical bodies such as the CTO, CIO, technical committee, and architecture support department in rolling out technical practices across the whole enterprise. For example, we all know there are many branch-model scenarios for Git repositories; Compass can specify branch naming rules, merge processes, merge permissions, pre-checks, post-merge notifications, and more. We want to bring discipline to all the key parts of the development process and ultimately achieve a by-the-book effect, turning the CTO’s technical management from persuasion into system-enforced discipline.

Compass is CODING’s ultimate answer in software engineering. We want to make software development a flowing factory line, where the net effect is that the machine pushes the person rather than the person pushing the machine. Compass spans all the other product modules in the CODING system, making it a global and very large product. Compass’s core process engine has been built. You can visit the CIF conference release page to learn more about Compass features and application scenarios and to apply for a trial.

And there’s more…

CODING has delivered numerous product iterations in the past year. Due to limited time, I will not cover them all; here are some key updates.

  1. CODING will soon launch a CI engine developed in-house at Tencent to resolve the inconveniences caused by the long-term dependence on Jenkins. This engine has been used within Tencent Group for many years and is proven, powerful, and flexible. The new engine will work with Tencent Cloud secure containers to achieve faster and more flexible scheduling. The new CODING CI engine is now available for trial by early users, who can visit the CIF release page to learn about and experience it.

  2. CODING released WePack, an independently deployable artifact repository, in 2020. WePack now fully integrates Tencent Group’s security capabilities, working with the well-known Yunding Security Lab and Keen Security Lab to greatly enhance its artifact scanning, security hardening, and related capabilities. WePack can not only scan artifacts against the industry’s open vulnerability databases, such as the well-known NVD and CNVD, but also draws on more than 20 years of experience from Tencent’s security teams. Its independently controllable, in-depth security has been recognized by many financial customers over the past year. WePack can be easily deployed privately in the customer’s environment; visit the CIF release page for more information.

  3. CODING has enhanced its Git-based code repository with directory-level read and write permission control. Git itself cannot control access permissions at the directory level: if you are familiar with its internals, you know that Git verifies versions by hashing over the entire repository, and verification cannot complete if the checked-out files are incomplete. This is the fundamental reason Git itself cannot implement per-directory read control. Building on this understanding, CODING extends Git, remaining compatible with existing Git usage and non-invasive to Git itself, to implement per-directory permission control. This capability is currently available for early adopters; please contact us to apply for a trial.
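The write side of per-directory control is the easier half to picture: a server-side check can inspect the paths touched by a push and reject it if any fall outside the pusher's allowed directories. The sketch below shows that idea only; it is not CODING's implementation, and the users, directories, and rule format are assumptions. (Per-directory read control is the hard part, for the hashing reasons noted above.)

```python
# A conceptual sketch of per-directory *write* control layered on top
# of Git, in the spirit of a server-side pre-receive check: list the
# paths changed by a push and reject it if any path is outside the
# pusher's allowed directory prefixes. All names are illustrative.

WRITE_RULES = {
    "alice": ["services/payments/", "docs/"],
    "bob":   ["services/"],
}

def check_push(user, changed_paths):
    """Return True iff every changed path is under one of the user's dirs."""
    allowed = WRITE_RULES.get(user, [])
    return all(
        any(path.startswith(prefix) for prefix in allowed)
        for path in changed_paths
    )
```

In a real hook, `changed_paths` would come from diffing the pushed commits (e.g. `git diff --name-only old..new`) before the refs are accepted.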

  4. The Cloud Studio team has significantly improved the experience of Cloud Studio, the cloud IDE. Cloud Studio is now even more convenient and faster than a native IDE: developers don’t need to install any software; they just open a browser, log in, and start programming. Opening a workspace on the cloud takes as little as 3 seconds from loading until the workspace is fully available. The new version of Cloud Studio is now fully online; you can register and use it at the official website (CloudStudio.net), and if you have private deployment requirements, please contact us.

CODING hopes to build a full-link cloud native development system. We would like to thank our customers, partners, and peers for their support and help. The cloud native development system is not yet complete, and CODING still has a long way to go. We look forward to the arrival of a comprehensive cloud native era, when development will be even easier!

Click to watch the replay of the CIF Summit and experience the new CODING products in depth!