Abstract: Software architecture sits at the very front of the system development life cycle, yet it is also the most critical, core part. It determines the fate of the code that follows and of the project itself, and sometimes it can make or break a company.

I. Introduction

II. Architecture is the future

III. Flexible design of software architecture

IV. Comparative analysis of typical architectures

V. Discussion on the servitization framework

VI. Architectural design measurement and vision

VII. Sense and understanding of the framework system

Conclusion

What is **architecture**? Simply put, it is a supporting structure: the skeleton of an animal or a person, the structural frame of a building, and so on. What does architecture affect? It shapes the form and quality of the finished item. The analogy carries over to software systems, where the programmer acts as a god in a binary world, creating different things for that world. Since architecture is this fundamental, its importance is self-evident, and it demands that we think carefully about how to design objects and architectures reasonably.

The development of software technology is also the evolution of system architecture: from monolithic systems to distributed systems and SOA, and on to finer-grained microservices, service mesh, and cloud-native architectures. Each architecture is designed to address the high-concurrency, high-performance, stability, maintainability, and scalability demands peculiar to its scenarios. Architecture is multidimensional and can be described in many ways. Architecture design relies on human thinking and judgment: it is an abstract structure composed of the components of the software and the dependencies between them. Experienced designers can grasp more of the architecture's details and anticipate how the program's architecture will behave, making the design more reasonable and complete.

Software architecture has little influence on functional requirements, but its quality determines the system's non-functional requirements, namely its quality attributes or "capabilities" for short: the reliability, testability, maintainability, extensibility, and deployability of the delivered software. In fact, the functional requirements of an application can be met on any architecture, even a messy one; this is why the internals of even successful systems are often a big ball of mud. Unreasonable architectures produce systems that are tightly coupled, fragile, hard to change, and impossible to reason about, down to the very scale of the architecture. What is the performance of the system? Is the program easy to modify? What is its deployment model? How does the system respond? Everything becomes unanswerable, inconclusive…

A. What architecture design is

Software architecture design explains the composition and characteristics of a system from a macro point of view. It is the most reasonable decision, reached through systematic thinking and trade-offs, under the constraints of the available resources. The resulting system framework includes subsystems, modules, and components, together with the cooperative relations, constraint norms, and guiding principles among them. It covers requirements, technology stack, cost, organization, scalability, maintainability, and more.

  1. Systematic thinking and rational decisions: such as technology selection, solutions, etc.

  2. Define the system framework: define the components of the system;

  3. System collaboration: How components work together to fulfill business requests;

  4. Constraint specification and guiding principle: ensure the orderly, efficient and stable operation of the system.

Software architecture design comprises two tasks: analysis and design. Analysis means analyzing requirements; design means shaping the overall structure of the software. Many methodologies separate the two activities, but in practice they are hard to tell apart: while analyzing, one is already thinking about design, and the design in turn feeds back into the analysis. The two are interrelated and iterative.

In architecture design, business understanding and accumulated experience matter equally. When short-term needs conflict with the overall design, some designers favor the immediate design, neglect the whole and the future, and corrupt the system's proper architecture. When programmers and product managers are at loggerheads ("this requirement can't be done", "that one will take ages to accommodate"), it often signals that the architectural design may be unreasonable.

II. Architecture is the future

Software architecture sits at the very front of the system development life cycle, yet it is also the most critical and core part. It determines the fate of the code that follows and of the project itself, and sometimes it can make or break a company. The goal of the architecture is to ensure a whole series of qualities such as high availability, scalability, extensibility, and security; well begun is half done.

Software architecture is the top-level structure of the system. It should keep the overall situation in view, including hardware, operating system, network environment, and every process from project initiation to maintenance (requirements, design, coding, deployment, maintenance, and iteration). Architecture determines the relationships between subsystems, the layering and communication, the common design principles and styles, and the priorities and trade-offs between functional and non-functional requirements. Architecture is a reflection of many structures: just as a building's structure varies with its motivation and starting point and carries many meanings, software architecture is characterized by a variety of structures. Common ones include the module structure, logical or conceptual structure, process or coordination structure, physical structure, uses structure, call structure, data flow and control flow, and class structure.

An excellent architecture promotes the healthy development of a project. Weighing both the long term and the short term at design time eventually steers the project in a good direction and reduces the time, money, and labor invested in it. On the downside, there is a fitting saying about ever-changing software: plans can't keep up with change. Requirement uncertainty is both a human and a business matter. Architectural changes are often costly, and a good architecture lets changes stay local without affecting the entire system. The extensibility of the architecture and design determines the cost of changing requirements and growing the business; standardization, normalization, and inheritance improve team efficiency. As with "sowing, fertilizing, weeding", the care invested in sowing affects the cost of weeding, sometimes by a factor of several or even dozens; doing better at the start makes everything afterwards easier.

Architecture can be classified into business architecture, application architecture, technical architecture, and deployment architecture. Business architecture is the productivity, application architecture the relations of production, and technical architecture the tools of production. Business architecture determines application architecture; application architecture must adapt to and evolve with the business architecture, and is finally realized on top of the technical and deployment architectures.

1. Architectural design objectives

The purpose of architecture design is to solve the problems caused by the complexity of the software system, and to secure the system's high availability, high performance, scalability, security, extensibility, simplicity, and so on. In general, functional requirements determine the business architecture, non-functional requirements determine the technical architecture, and change cases determine the scope of the architecture. Functional requirements define what the software can do; the business architecture is designed from business requirements so that the future software meets customer needs. Non-functional requirements define the performance and efficiency constraints and rules that the technical architecture must satisfy. A change case is an estimate of possible future change that combines functional and non-functional requirements to determine the scope of a requirement and, in turn, the scope of the architecture.

Considerations at the architecture level

1) Compliance with requirements: correctness and completeness; functional and non-functional requirements;

2) Overall performance (memory, data organization and content, tasks, network operations, key algorithms, hardware and other interface impacts);

3) Operation manageability: easy to control system operation, monitor status, error handling, inter-module communication and maintainability;

4) Compatibility with other system interfaces;

5) Compatibility and performance with network and hardware interfaces;

6) System security;

7) System reliability;

8) Adjustment of business processes and information;

9) Convenience of use;

10) Consistency of architectural styles.

Note: Runtime load balancing can be considered from the aspects of system performance and reliability.

Architectural design objectives

Determine the system boundary: what the system does and does not do at the technical level.

Determine the relationships between the modules of the system: their dependencies and their macro-level inputs and outputs.

Identify the principles that guide subsequent design and evolution so that subsequent subsystem or module designs continue to evolve within a defined framework.

Identify non-functional requirements objectives, involving the following:

Reliability: The system is very important for the business operation and management of users, so it must be very reliable.

Security: the commercial value of the transactions the software system carries is very high, so the security of the system is very important.

Scalability: maintain reasonable performance as usage and the user base grow rapidly, adapt to possible market expansion, and cover the extension of both system functionality and performance.

Customizable: The same set of software can be adjusted according to different customer groups and changing market needs.

Maintainable: first, eliminating existing errors; second, reflecting new requirements in the existing system. An easy-to-maintain system effectively reduces the cost of technical support.

Customer experience: Software systems must be easy to use.

Market timing: It’s important to get ahead of the curve as quickly as possible.

Considerations at the organization and development level

1) Development manageability: easy division of labor, easy configuration management, reasonable size and moderate complexity;

2) Maintainability: different from operational manageability;

3) Scalability: upgrade, capacity expansion and performance expansion of system solutions;

4) Portability: different clients, application servers and database management systems;

5) Compliance of requirements.

2. Modes of design thinking

The essence of architectural design is to manage complexity. Abstract, hierarchical, divide and conquer, and evolutionary thinking are the four fundamental means by which architects conquer complexity.

A. Abstract design thinking

Abstraction is the process of simplifying the representation or description of something. Abstraction allows us to focus on elements and hide additional details. The strength of abstraction directly determines the complexity and scale of the problems we can solve. Architecture should abandon the nuts and bolts and focus on the top, highest-priority, riskiest parts of the software.

Proper use of abstraction improves the simplicity of a design and the quality of software development. Two kinds of abstraction are commonly used: process-based abstraction and data-based abstraction. In process-based abstraction, the problem is decomposed into small sub-problems, each completed by an independent module, function, or class. A well-abstracted system can keep working without design changes when one part of the implementation is replaced, and well-designed abstractions can be combined to build more powerful and complex systems. Data-based abstraction separates the use of complex data structures from their construction: users access and manipulate "abstract data" through a clearly defined set of interfaces, while the internals of the objects stay hidden from the outside. An abstraction ill-suited to the nature of the problem, however, hurts not only the simplicity of the design but also the maintainability of the software.
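A minimal Java sketch of data-based abstraction, assuming a hypothetical `Stack` interface and `ArrayStack` class invented for illustration: callers depend only on the interface, so the internal representation can be swapped without touching client code.

```java
import java.util.ArrayList;
import java.util.List;

// Data-based abstraction: callers depend only on this contract,
// never on the internal representation.
interface Stack<T> {
    void push(T item);
    T pop();
    boolean isEmpty();
}

// One implementation; swapping in a linked-list version later
// requires no change in client code.
class ArrayStack<T> implements Stack<T> {
    private final List<T> items = new ArrayList<>();

    @Override
    public void push(T item) { items.add(item); }

    @Override
    public T pop() {
        if (items.isEmpty()) throw new IllegalStateException("stack is empty");
        return items.remove(items.size() - 1);
    }

    @Override
    public boolean isEmpty() { return items.isEmpty(); }
}

public class AbstractionDemo {
    public static void main(String[] args) {
        Stack<String> stack = new ArrayStack<>(); // client sees only the abstraction
        stack.push("a");
        stack.push("b");
        System.out.println(stack.pop()); // prints "b"
    }
}
```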

B. Layered design patterns

Layering has a long history in computing. Its advantage is that logic at one level does not need to know everything about the logic below it, only the details of its adjacent layer. The TCP/IP protocol stack, for example, encapsulates data layer by layer, and this strict layering sharply reduces the coupling between layers. The layering principle divides software structurally and defines the responsibilities of each part of the structure. The idea is nothing special, just an effective way to organize a system; once layered, the software architecture becomes clear.

To build a complex system, the entire system can be divided into several layers, with each layer focusing on a specific domain and providing services upward. Some layers are vertical and run through all other layers, called shared layers. Layering can also be thought of as a way of abstracting a system into a number of hierarchical modules.
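A hedged Java sketch of a conventional three-layer split (the `UserRepository`/`UserService`/`UserController` names are hypothetical): each layer knows only the layer directly beneath it, so the storage can change without the presentation layer noticing.

```java
// Bottom layer: data access abstraction.
interface UserRepository {
    String findNameById(long id);
}

// One concrete data source; could be replaced by a database-backed version.
class InMemoryUserRepository implements UserRepository {
    @Override
    public String findNameById(long id) { return "user-" + id; }
}

// Middle layer: business logic; depends only on the repository abstraction.
class UserService {
    private final UserRepository repository;

    UserService(UserRepository repository) { this.repository = repository; }

    String greet(long id) { return "Hello, " + repository.findNameById(id); }
}

// Top layer: presentation; depends only on the service, knows nothing of storage.
public class UserController {
    private final UserService service = new UserService(new InMemoryUserRepository());

    public static void main(String[] args) {
        System.out.println(new UserController().service.greet(42L));
    }
}
```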

C. Divide-and-conquer thinking of problems

Divide and conquer is another general approach to managing complexity. A big problem that cannot be solved in one go is divided into independent sub-problems; if a sub-problem still cannot be solved directly, it is divided again, until the pieces can be solved directly. The solutions of the sub-problems are then combined, level by level, into the solution of the original problem: this is the combine step.
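A short Java sketch using merge sort, the textbook instance of divide and conquer: divide until the piece is trivially solvable, then combine the partial solutions.

```java
import java.util.Arrays;

// Divide and conquer illustrated with merge sort: split until trivially
// solvable, solve the sub-problems, then combine their solutions.
public class MergeSort {
    static int[] sort(int[] a) {
        if (a.length <= 1) return a;              // small enough: solve directly
        int mid = a.length / 2;                   // divide
        int[] left = sort(Arrays.copyOfRange(a, 0, mid));
        int[] right = sort(Arrays.copyOfRange(a, mid, a.length));
        return merge(left, right);                // combine
    }

    static int[] merge(int[] l, int[] r) {
        int[] out = new int[l.length + r.length];
        int i = 0, j = 0, k = 0;
        while (i < l.length && j < r.length)
            out[k++] = (l[i] <= r[j]) ? l[i++] : r[j++];
        while (i < l.length) out[k++] = l[i++];
        while (j < r.length) out[k++] = r[j++];
        return out;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(sort(new int[]{5, 2, 4, 1, 3})));
    }
}
```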

D. Evolutionary architectural thinking

Architecture is both designed and evolved, evolving in design and designing in evolution, an iterative process. Architects should not only make use of their own architectural design capabilities, but also learn to leverage the power of user feedback and evolution to promote the continuous evolution of architecture. This is evolutionary architectural thinking. A system that can constantly respond to changes in the environment is a viable system, and the quality of its architecture depends in large part on its flexibility to respond to change. Therefore, architects with evolutionary thinking are able to consider the evolutionary characteristics of subsequent architectures at the beginning of their design, and the ability to flexibly respond to changes is a major consideration in architectural design.

Software architecture needs to change as the business evolves. Without grasping this essence, it is easy to fall into the trap of trying to design an architecture all at once and expecting it to stay rock solid no matter how the business changes. Business evolves quickly, and in practice gradual evolution beats trying to get there in one step. First design the architecture to meet the needs of the business at that time; then iterate on it continuously in practical use, retaining excellent designs, repairing defective ones, correcting wrong ones, and removing useless ones, so that the architecture gradually improves. As the business changes, the architecture is extended, restructured, even rewritten; code may be rewritten, but valuable lessons, logic, and design carry over into the new architecture. Treat each iteration rigorously: ensure the plan is completed, the software quality is maintained, and users' needs are met. That is the orthodox way to iterate.

The initial design is necessarily a rough, original architecture, but it matters for everything that follows. Iterative design, also called incremental design, builds each iteration on the previous one, committed to reusing, modifying, and enhancing the current architecture so that it grows ever stronger and more stable.

3. Typical design practices

Software pattern is a concrete means to realize architecture design by defining a group of software elements that cooperate with each other to solve software architecture design problems. Architecture design needs to consider and balance comprehensively from a global perspective of computing, data, storage, communication, etc.

Architectural patterns help define the basic characteristics and behavior of a program. For example, the layers pattern is used for system architecture, the broker pattern for distributed systems, and the MVC pattern for interactive systems. Patterns are inherently solutions to specific problems: some architectural patterns naturally make applications scalable, while others make them nimble and agile. An architecture can be designed from patterns chosen to match the characteristics and goals of the requirements. Design patterns are summaries distilled at the code level; they push the code's coupling toward maximum separation, so that code can be better reused, more easily replaced, and better embrace changing requirements.

1) Design simplification principle

[Keep It Simple] "Simple works better than complex." This is the basic idea behind simplicity in design. A complex architecture is hard to test, maintain, and iterate on, and it drives up communication costs. The architecture should be as simple as possible: solve the problem with the simplest solution that works.

Complexity shows up both structurally and logically. The simpler the architecture, the better its stability; this is an industry consensus. A simple architecture is not the same as a simple job: achieving simplicity demands great effort and deep technical attainment from the designer. A simple architectural design also helps the development team understand the architecture faster. Simplicity means keeping the problem itself from becoming overly complex: however complex and changeable the requirements, one can always find their simple, stable part, take that stable part as the foundation, and then improve and extend as needed to solve the complex problem. Finally, simplicity shows in the presentation itself, and in simplifying the abstraction and design across different dimensions and granularities: system levels, models, static versus dynamic aspects, flows, links, reads and writes, and time (asynchrony).

"Resilient design" that chases changing requirements often hides a complex design behind it. The right direction is to simplify the problem; yet in concrete design work many people complicate what could be simple, and the application of design patterns makes this mistake easy. Keep the design as simple as possible, and let each class or module in the implementation truly reflect the essential nature of one thing: only one essential feature. The closer the code stays to that essence, the simpler the design's presentation, and the more straightforward, stable, reliable, and trustworthy the system.

2) High cohesion and low coupling

High cohesion and low coupling are software engineering concepts: they are criteria for judging the quality of a design and a main goal of architecture design, yielding better reusability, maintainability, and extensibility; the software design principles are the guidelines for achieving them. In design, coupling and cohesion are the usual measures of module independence, and one criterion for dividing modules is exactly high cohesion and low coupling. From the modular point of view, high cohesion means each member method of a class does as little as possible and does one thing, while low coupling means minimizing the calls a class's member methods make to methods of other classes. From the class perspective, it means reducing calls to other classes from within the class. From the perspective of functional modules, it means reducing the interaction complexity between modules, horizontally (between classes, between modules) and vertically (between layers): keep content cohesive and couple through data as much as possible.

Reduce dependencies and decouple

Avoid reverse dependencies

Depend only from the top down, extracting repeated functionality into common modules. A common module must be purely functional: no business logic should be decided inside it. The overall module dependency graph should be a tree rather than a mesh.

Configuration decoupling

Turn each module's dynamic properties into configuration items; a configuration center can then update them in real time and have them take effect immediately, as in the sketch below.
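A minimal sketch, assuming a push-style config-center client (simulated here with a direct call; a real system would use a config center's watch or long-poll client): business code reads from an atomically swapped snapshot, so updates take effect without a restart.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicReference;

// Configuration decoupling sketch: business code reads from a holder,
// while a (hypothetical) config-center listener pushes updates into it.
public class ConfigHolder {
    private static final AtomicReference<Map<String, String>> CURRENT =
            new AtomicReference<>(new ConcurrentHashMap<>());

    // Called by the config-center listener when new values arrive.
    public static void refresh(Map<String, String> latest) {
        CURRENT.set(new ConcurrentHashMap<>(latest)); // atomic swap, lock-free reads
    }

    // Business code always sees the latest consistent snapshot.
    public static String get(String key, String defaultValue) {
        return CURRENT.get().getOrDefault(key, defaultValue);
    }

    public static void main(String[] args) {
        System.out.println(get("rate.limit", "100")); // 100 (default)
        refresh(Map.of("rate.limit", "500"));         // simulated push from the center
        System.out.println(get("rate.limit", "100")); // 500, no restart needed
    }
}
```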

Permission decoupling

Permission control and security verification used to be intertwined with the business code. In a service mesh architecture, permissions are split out of the functionality into the sidecar communication module, and the business and technical sides interact through API interfaces.

Traffic decoupling

In a service mesh architecture, traffic control descends into the sidecar communication module, supporting traffic splitting and isolation between services.

Data decoupling

Ensure that modules' data do not affect each other, and decouple hot and cold data within the same module. A system accumulates a large amount of data after running for a long time; to keep performance stable, reduce the degradation a large data volume causes.

Scaling decoupling

Good architectural design requires the ability to scale horizontally, improving system performance by simply adding hardware.

Deployment decoupling

Support rapid trial and error and gray (canary) release: for the same module, deploy the new version to a few servers first; after restart, cut some traffic over to verify the deployment, and if all is well, continue rolling out to the other nodes.

Dynamic/static decoupling

When a module faces extremely high instantaneous concurrency, pure traffic decoupling is not enough. The key processing functions behind the front-end traffic surge need finer-grained traffic decoupling: separating access to static and dynamic resources.

A reliable system is highly modular (exposing as few interfaces as possible), minimally invasive (normal use requires no strong coupling such as inheritance), easy to integrate into different kinds of code bases (for instance, portable code rather than direct platform API calls), and makes very few assumptions about its external environment, decoupling its implementation details from other systems.

3) Interface design principles

An interface can be thought of as a contract. The modules of a system cooperate by interacting through defined APIs. While exposing abstract APIs externally, a module may also rely on other modules' APIs for its own operation. Interface design should keep responsibilities single, hide internal implementation as much as possible, avoid behavior leaking out through inheritance, unify naming and style for understandability, and define versions properly to guarantee stability and compatibility; interface verification exists to guarantee the testability of the interface.

A) Encapsulation principle

A good interface design hides the details of the implementation and presents to interested parties only the interfaces they need, keeping the implementation invisible to the outside world.

B) Minimum responsibilities

A class implementation should be as compact as possible, dealing only with closely related functions, and a method should only do one thing, requiring the designer to discover the different responsibilities of the class and separate them. A microservice, for example, should be as single in responsibility as possible and provide as single an interface as possible.

C) Minimum interface

Methods exposed for users should be kept to a minimum. Published methods will be used frequently by customers, and design problems or later improvements ripple out to existing methods, so these impacts must be minimized. In addition, a few lightweight common methods can be combined into a single method to reduce the coupling between the user and the system; this can be implemented with either the facade pattern or the delegation pattern.

D) Minimum coupling

Classes should be designed to interact with other classes as little as possible; if a class turns out to be coupled to a large number of classes, new classes can be introduced to weaken the coupling. Among the design patterns, the mediator pattern and the facade pattern are both examples of this kind of application.

E) The principle of stratification

A refinement of the encapsulation principle. A system often carries several kinds of responsibility, such as code that talks to the database and code that faces the user. Dividing these codes into different layers by function achieves coarse-grained encapsulation of the different parts of the software architecture.

Reference blog: [SOLID principles in detail: the Interface Segregation Principle (ISP)]

4) The least knowledge principle

A guiding principle of object-oriented programming that describes a strategy for keeping code loosely coupled: each unit should have only limited knowledge of other units, knowing only the units closely related to itself. It brings two main benefits to software design: better information hiding and less information overload. For example, a person may command a dog to walk, but should not command the dog's legs directly; the dog itself directs its legs. Applying this principle reduces coupling between modules and improves the maintainability and reusability of the software, though it may force the class to define many wrapper methods purely for forwarding, increasing the complexity of the class design.
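A minimal Java sketch of the dog example (class names hypothetical): outsiders talk only to the `Dog`, and only the `Dog` talks to its own legs.

```java
// Law of Demeter: the person commands the dog; only the dog commands its legs.
class Leg {
    void move() { System.out.println("leg moves"); }
}

class Dog {
    // Private parts: no accessor lets outsiders reach a Leg directly.
    private final Leg[] legs = { new Leg(), new Leg(), new Leg(), new Leg() };

    // The dog coordinates its own parts.
    void walk() {
        for (Leg leg : legs) leg.move();
    }
}

public class DemeterDemo {
    public static void main(String[] args) {
        Dog dog = new Dog();
        dog.walk(); // OK: talk only to your direct collaborator
        // dog.legs[0].move() would not even compile: the legs are private,
        // which is exactly the point of the principle.
    }
}
```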

5) Facade controller pattern

Assign the responsibility of receiving events and coordinating the overall system to a representative class of the system, device, or subsystem (the facade controller), or to a representative class of a use-case scenario. When two or more objects must interact, to avoid direct coupling and improve reuse, create an intermediate class, assign it the responsibility, and let it coordinate the objects' interactions. Whole systems work similarly.

The facade pattern isolates callers from complex business classes: the caller can drive the business process without knowing the complicated relationships among the business classes, which greatly reduces the coupling between them. It suits components with complex internal relationships, and also suits the interface the business layer presents to the presentation layer; data is passed through parameters.
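A hedged Java sketch of such a facade (the order, payment, and shipping classes are invented for illustration): the caller sees one method, three collaborators stay hidden, and data flows through parameters.

```java
// Subsystem classes the caller never needs to know about.
class InventoryService {
    boolean reserve(String sku) { System.out.println("reserve " + sku); return true; }
}

class PaymentService {
    boolean charge(double amount) { System.out.println("charge " + amount); return true; }
}

class ShippingService {
    void schedule(String sku) { System.out.println("ship " + sku); }
}

// The facade coordinates the subsystem on the caller's behalf.
class OrderFacade {
    private final InventoryService inventory = new InventoryService();
    private final PaymentService payment = new PaymentService();
    private final ShippingService shipping = new ShippingService();

    boolean placeOrder(String sku, double amount) {
        if (!inventory.reserve(sku)) return false;
        if (!payment.charge(amount)) return false;
        shipping.schedule(sku);
        return true;
    }
}

public class FacadeDemo {
    public static void main(String[] args) {
        new OrderFacade().placeOrder("SKU-1", 9.9); // one call, subsystem hidden
    }
}
```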

6) Information expert mode

The general guiding principle for assigning responsibilities to subsystems/modules is to give each responsibility to the "expert" that has enough information to fulfill it. Concretely, data, events, status, and the like are handed to a single-responsibility module in charge of that data, which can serve as the basis for drawing responsibility boundaries. Assigning responsibilities properly (what each class/component/subsystem should own, how to keep responsibilities single, how they cooperate) requires clear responsibility boundaries. For example, when it is unclear which team should own a microservice, the general principle is likewise "whoever owns the data is responsible", building each microservice around a bounded context (typically a well-defined domain data source).

7) Consistency of data

In a distributed architecture one would like to guarantee data consistency, availability, and partition tolerance, but at most two of the three can be satisfied at the same time (the CAP theorem). The idea of the BASE theory is that even when strong consistency cannot be achieved, eventual consistency can be reached in a way appropriate to the application.

A soft state is an intermediate state that is allowed to exist without affecting the overall availability of the system. Distributed storage typically keeps at least three replicas of each piece of data, and the replication delay between nodes is one manifestation of soft state. Eventual consistency means that, after some time, all replicas in the system converge to a consistent state. Weak consistency is the opposite of strong consistency, and eventual consistency is a special case of weak consistency.

State management can be front and center in a microservices architecture. Providing connectivity and integration means the system is essentially either querying state or changing state (or both), and for a given entity or piece of information there is usually more than one way to do either. To avoid data corruption or unexpected results, each microservice component can achieve a higher level of autonomy by explicitly declaring its state and using a strategy to handle the side effects of changing and querying it, allowing faster change.

8) Principle of stability and reliability

Stability and reliability are the probability that the system runs its programs successfully, according to the design requirements, over a given time interval and under given environmental conditions. Successful operation means not only running correctly and meeting functional requirements, but also recovering to normal operation as quickly as possible, without data damage, when an unexpected fault occurs. Architecture design and deployment must consider N+1 redundancy (data center, server, system, middleware, service, and data) to avoid single points of failure and downtime. A service-oriented architecture can keep the system running reliably through service governance such as rate limiting and degradation, and should implement fault isolation, using circuit-breaker protection to stop faults from propagating and cross-affecting each other.

From the service perspective, the loss of any node in the network topology may make a service unavailable; detecting high-risk nodes in advance, for example at the edge, can avert this. From the system perspective, nodes can exchange status and diagnostic information, which makes it convenient to deploy fault detection, node replacement, and data checks at the system level. From the data perspective, reliability concerns data storage, communication, and so on.
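A minimal, illustrative circuit-breaker sketch in Java (not a production implementation; real systems would use a library such as a resilience framework): after a threshold of consecutive failures the breaker opens and serves a fallback for a cool-down period, protecting callers from a struggling dependency.

```java
import java.util.function.Supplier;

// Simplified circuit breaker: open after N consecutive failures, fail fast
// (serve the fallback) while open, then allow a trial call after cool-down.
public class CircuitBreaker {
    private final int failureThreshold;
    private final long openMillis;
    private int consecutiveFailures = 0;
    private long openedAt = 0;

    CircuitBreaker(int failureThreshold, long openMillis) {
        this.failureThreshold = failureThreshold;
        this.openMillis = openMillis;
    }

    synchronized <T> T call(Supplier<T> action, Supplier<T> fallback) {
        if (isOpen()) return fallback.get();          // fail fast while open
        try {
            T result = action.get();
            consecutiveFailures = 0;                  // success resets the counter
            return result;
        } catch (RuntimeException e) {
            if (++consecutiveFailures >= failureThreshold)
                openedAt = System.currentTimeMillis(); // trip (or re-trip) the breaker
            return fallback.get();                     // degrade instead of propagating
        }
    }

    private boolean isOpen() {
        return consecutiveFailures >= failureThreshold
                && System.currentTimeMillis() - openedAt < openMillis;
    }

    public static void main(String[] args) {
        CircuitBreaker breaker = new CircuitBreaker(3, 5_000);
        for (int i = 0; i < 5; i++) {
            // First three calls fail and trip the breaker; the last two fail fast.
            String out = breaker.call(
                    () -> { throw new RuntimeException("remote service down"); },
                    () -> "fallback");
            System.out.println(out); // prints "fallback" each time
        }
    }
}
```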

9) Scalability

The desire for extension stems from the variability of requirements: an architecture must be extensible to cope with change and adapt to what the future may bring. Extensibility, however, is in tension with stability and with simplicity, so the input-output ratio must be balanced to design an architecture with an appropriate degree of extensibility: seamless upgrades, capacity expansion, and feasible, convenient extension. A good system architecture achieves low latency and high throughput; the goal of scalability is maximum throughput at an acceptable latency.

Extension covers two aspects. The first is functional extensibility: whether the platform framework reserves enough extension points so that new functions can easily be added later or plug-ins supplied by third parties. The other is scalability: the system's elastic capacity, that is, whether capacity can expand elastically as users and concurrency grow, providing stronger processing power by adding hardware. Scalability is a comprehensive balance of high performance, low cost, maintainability, and many other factors; it aims for smooth, linear performance improvement, focuses on horizontal scaling, and realizes distributed computing on cheap servers.

10) Architecture reuse principles

Reuse is to avoid duplication of effort and reduce costs. In the process of architecture design, some common parts can be abstracted to form common classes and interfaces, and related functions required by other functional modules can be called to achieve reuse purposes. In reality, a lot of work has been done on architecture reuse, such as frameworks. Using rules for functional decomposition helps increase reuse, as each class and method is more precise.

In microservices architecture, each service must implement a number of infrastructure-related functions, including observability and service discovery patterns, as well as an externalized configuration pattern to provide configuration parameters such as data storage credentials to the service at run time. When developing new services, a better approach is to address these issues by applying the microservice base pattern and building services on top of an existing mature base framework.

11) Design the system runtime

Today’s systems tend to be complex and large-scale, so the architecture design should enable the system to predict system failures and prevent them before they happen. Therefore, it is convenient to control system operation, monitor system status and deal with errors effectively by planning system operation resources with reasonable architecture. To achieve the above goals, the communication between modules should be as simple as possible, and reasonable and detailed system running logs should be established. The running status of the system can be known and effective exception handling can be carried out through automatic audit of the running logs.

Focus on control flow, communication mechanisms, resource contention, lock/token-ring mechanisms, synchronous asynchrony, concurrent/serial, as well as quality attributes.

Consider problems from the perspective of the system's quality requirements at runtime, focusing on the non-functional requirements. For example, customers often require that the maximum response time of the system's functional pages stay under 4 seconds, that 2,000 users can be online simultaneously, and that system resources be protected by role-based access control.

12) Monitoring and observability

An important part of operation and maintenance is to understand the behavior of the application at run time and to be able to diagnose and troubleshoot faults such as incorrect requests or high latency. Designing observability services:

 **Health check API:** an API that returns the service's health status (see the sketch after this list).

 **Log aggregation:** write the generated logs to a centralized log server, provide log search, and trigger alarms based on log content.

 **Distributed tracing:** assign a unique ID to each external request to track requests across services.

 **Exception tracking:** send exceptions to an exception tracking service, which alerts the developers and tracks the resolution of each exception.

 **Application metrics:** export operational metrics, such as counters, to a metrics server.

 **Audit logging:** record user behavior.
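For the health-check item referenced above, a minimal endpoint sketch using the JDK's built-in HTTP server (the port and JSON payload are arbitrary choices); a real service would also probe its own dependencies, such as the database or message broker, before reporting UP.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Minimal health-check API: GET /health returns the service's status.
public class HealthCheckServer {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/health", exchange -> {
            byte[] body = "{\"status\":\"UP\"}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body); // monitoring systems poll this endpoint
            }
        });
        server.start(); // GET http://localhost:8080/health -> {"status":"UP"}
    }
}
```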

13) System security design

As applications deepen and expand, more and more scenarios and information are involved, and large amounts of confidential information travel over the network. Security has therefore become key to system design and must be considered from every aspect and angle to keep data safe. For example, in web applications, facing various security risks (SQL injection, XSS, CSRF attacks, and so on), can the vulnerabilities be blocked, and can the architecture rate-limit traffic and resist DDoS attacks? In a microservice architecture, user authentication is usually performed by the API gateway / data plane; the caller must pass the user's information (such as identity and roles) along to the services it calls. The common solution is the access-token pattern: an access token (e.g., a JWT) is passed to the service, which verifies the token and extracts the user information.
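A hedged sketch of the token-verification step using only the JDK, assuming an HS256-signed JWT and a hypothetical shared secret. It checks only the signature; production code should use a maintained JWT library and also validate the header and claims such as expiry.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

// Verifying an HS256-signed JWT (header.payload.signature) with the JDK only.
public class JwtVerifier {
    public static boolean verify(String jwt, byte[] secret) throws Exception {
        String[] parts = jwt.split("\\.");
        if (parts.length != 3) return false;

        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret, "HmacSHA256"));
        byte[] expected = mac.doFinal(
                (parts[0] + "." + parts[1]).getBytes(StandardCharsets.US_ASCII));
        byte[] actual = Base64.getUrlDecoder().decode(parts[2]);

        // Constant-time comparison to avoid timing side channels.
        return MessageDigest.isEqual(expected, actual);
    }

    public static void main(String[] args) throws Exception {
        byte[] secret = "demo-secret".getBytes(StandardCharsets.UTF_8); // hypothetical
        String header = b64("{\"alg\":\"HS256\",\"typ\":\"JWT\"}");
        String payload = b64("{\"sub\":\"user-1\",\"role\":\"admin\"}");
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret, "HmacSHA256"));
        String sig = Base64.getUrlEncoder().withoutPadding().encodeToString(
                mac.doFinal((header + "." + payload).getBytes(StandardCharsets.US_ASCII)));
        System.out.println(verify(header + "." + payload + "." + sig, secret)); // true
    }

    private static String b64(String s) {
        return Base64.getUrlEncoder().withoutPadding()
                .encodeToString(s.getBytes(StandardCharsets.UTF_8));
    }
}
```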

Reference blog: [Cloud Native Zero Trust Network – Application service security]

4. Design views and architecture

Architecture can be viewed from multiple perspectives, just like building architecture, which generally includes structural, plumbing, and electrical views. The 4+1 view model is an excellent way to describe an application architecture: four different software architecture views, each describing one important aspect of the architecture and including specific elements and the relationships among them.

 **Logical view:** in an object-oriented language, the elements are classes and packages, and the relationships between them are those between classes and packages: inheritance, association, and dependency.

 **Implementation view:** the output of the build system, consisting of modules and components that represent packaged code; the relationships include dependencies between modules and the composition of components from modules.

 **Process view:** the components at runtime. Each element is a process, and the relationships between processes represent inter-process communication.

 **Deployment view:** how processes map to machines. The elements are machines (physical or virtual) and the container processes they host; the relationships between machines represent the network, and the view also describes the relationship between processes and machines.

The "+1" refers to scenarios, which tie the elements of the views together. Each scenario describes how multiple architectural elements in a view cooperate to complete a request: a scenario in the logical view shows how classes collaborate, and a scenario in the process view shows how processes collaborate.

Physical architecture of the system

Hardware selection and topology, software-to-hardware mapping, and interaction between hardware and software are considered.

5. Continuous delivery & deployment

Achieving continuous delivery and continuous deployment of projects/systems is a key part of DevOps. Continuous delivery is the ability to deliver all types of changes (functionality, configuration, bug fixes, experimentation, etc.) safely and quickly to the production environment or users in a sustainable manner. The key feature is that software is always ready to deliver, and it relies on a high level of automation, including automated testing. Continuous deployment takes continuous delivery to a new level, with high-performing organizations deploying to production multiple times a day with far fewer production outages and able to recover quickly from anything that happens.

The microservices architecture naturally supports continuous delivery and continuous deployment.

The goal of continuous delivery and deployment is to deliver software quickly and reliably, and the four useful metrics evaluated are as follows:

 Deployment frequency: how often software is deployed to the production environment.

 Delivery lead time: the time from a developer committing a change to that change being deployed.

 Mean time to recovery: the time needed to recover from a production environment problem.

 Change failure rate: the percentage of submitted changes that cause problems in production.

Systems need to be built small, released small, allowed to fail fast, and iterated and delivered continuously. In traditional organizations, deployment frequency is low and delivery times are long, and development and operations staff often wait until the last minute of a maintenance window. By contrast, DevOps organizations release frequently, often several times a day, with far fewer production environment issues. For example, Amazon can deploy code changes to production in seconds, and Netflix can deliver a software component in minutes.

Continuous delivery and deployment shortens time-to-market and lets companies respond quickly to customer feedback while providing the reliable service customers expect. Developers can spend more time delivering valuable features instead of constantly fighting fires.

III. Flexible design of software architecture

There are two common forms of loose coupling between software components: interfaces and messages, which, like the joints between human bones, connect the loosely coupled sides. Messages are more loosely coupled than interfaces: interface methods can change, forcing users of the interface to change with them, whereas with messages a producer simply sends to the messaging system, regardless of who receives and consumes the message, and is thus completely decoupled from consumers. Design examples:
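As a minimal Java contrast of the two forms (an in-process `BlockingQueue` stands in for a real message broker such as Kafka or RabbitMQ): the interface caller must hold a reference to a concrete handler, while the message producer knows only the channel.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class MessagingDemo {
    // Form 1: interface coupling; the caller knows who handles the event.
    interface OrderHandler { void handle(String order); }

    public static void main(String[] args) throws InterruptedException {
        OrderHandler direct = order -> System.out.println("direct: " + order);
        direct.handle("order-1"); // producer is bound to this handler

        // Form 2: message coupling; the producer only knows the queue.
        BlockingQueue<String> channel = new LinkedBlockingQueue<>();
        Thread consumer = new Thread(() -> {
            try {
                System.out.println("consumed: " + channel.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();
        channel.put("order-2"); // producer is unaware of any consumer
        consumer.join();        // consumers can be added/replaced freely
    }
}
```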

Complexity

Suppose there were only two components interacting:

![](https://p1-juejin.byteimg.com/tos-cn-i-k3u1fbpfcp/05eb1869812945fe806ca4de0eeb7efa~tplv-k3u1fbpfcp-zoom-1.image)

As requirements grow and change, adding a third component increases the number of possible interactions among the three:

This combinatorial growth is staggering: with n components there are n(n-1)/2 possible pairwise interactions, so by the time we reach six components there are already fifteen.

How does nature cope with such complexity? Physics offers the constructal law, which seeks to describe the flow of energy and matter through physical networks (such as rivers) and biological networks (such as blood vessels). It holds that for a flow system to persist (that is, to survive), it must evolve to provide ever easier access to the currents flowing through it. In other words, the system should aim to minimize energy consumption while maximizing the entropy produced per unit of energy consumed. Evolution is essentially the process by which organisms continually rearrange themselves so that energy and matter move through them as quickly and efficiently as possible. Flow structures with better configurations, whether animals or rivers, replace those with poorer ones.

How do you design a structure so that the user requests flowing through it are answered more efficiently? Clearly, letting multiple components associate arbitrarily is uneconomical: the entropy side effects are large. If the structure is changed as follows, components can be grouped (three elements in one group, four in another), with the groups related through a representative element:

The single-responsibility function is key: do only one thing. It is consistent with the DRY principle: every piece of knowledge must have a single, unambiguous, authoritative representation within a system. Don't repeat yourself; be crisp. Every important function in the program should be implemented in just one place in the source code: business rules, long expressions, if statements, mathematical formulas, and metadata each live in one place. Single responsibility is also related to the delegation principle: you have to give something up to get something back, and you give it up by delegating to other classes. Don't do everything yourself; delegate to the appropriate class. Because each component has a single responsibility, the relationships between components can stay unique.

High cohesion means reduced module complexity: once things are organized, complexity naturally decreases. The strength of the relationships between components is expressed as coupling; the good kind of strong relationship is cohesion inside a component, and the bad kind is tight coupling between components. High cohesion and low coupling are thus important principles for reducing complexity and improving flexibility.

Beyond static structural relations, high cohesion and low coupling also show up in the behavior of objects' methods, which is realized by assigning responsibilities. What is a responsibility? It includes three aspects:

  1. The actions the object should perform;

  2. The knowledge the object holds, such as algorithms and constraint specifications;

  3. The main ways the object influences other objects.

Take the newsboy collecting money from a buyer as an example: the newsboy should charge the customer two dollars for the newspaper. Does he reach into the customer's wallet and take two dollars out of it himself, or does he ask the customer to take two dollars out of the wallet? Undoubtedly the latter.

What’s the difference between the two? Tell, Don’t Ask:

The newsboy simply tells the customer what to do without getting involved in how it is done (Do you have enough money? Enough to buy?). He gives the customer an instruction and pays no attention to how the customer carries it out. The Tell, Don't Ask principle ensures there is no overly detailed coupling between the actions of the two components: everything can be done with a message, telling the other party what I need rather than frisking them. At this point the two components reach maximum loose coupling, enabling the greatest possible flexibility in architectural design.
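A minimal Java sketch of the newsboy example (class names hypothetical): the wallet stays private to the customer, and the newsboy only tells the customer the amount due.

```java
// "Tell, Don't Ask": the newsboy tells the customer what is owed;
// only the customer manipulates the wallet.
class Wallet {
    private double amount;
    Wallet(double amount) { this.amount = amount; }
    void deduct(double value) {
        if (value > amount) throw new IllegalStateException("not enough money");
        amount -= value;
    }
}

class Customer {
    private final Wallet wallet = new Wallet(10.0); // private: nobody reaches inside

    // The customer decides how to fulfil the request.
    double pay(double due) {
        wallet.deduct(due);
        return due;
    }
}

public class Newsboy {
    public static void main(String[] args) {
        Customer customer = new Customer();
        double received = customer.pay(2.0); // tell; don't rummage in the wallet
        System.out.println("newsboy received: " + received);
    }
}
```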

Growing complexity is as inevitable in software architecture as anywhere in nature. The software architect's craft is to learn from nature: with skilled, almost magical workmanship, split complexity into single responsibilities, and recombine them, in both structure and behavior, through high cohesion and message passing, thereby achieving the flexibility of high cohesion and low coupling.

IV. Comparative analysis of typical architectures

The system architecture keeps upgrading and improving as technology develops, evolving from the traditional monolithic architecture to distributed, microservice, and Serverless architectures. The following are four major software architectures with their advantages and disadvantages.

1. Monolithic (single application) architecture

Monolithic applications are easy to develop, change, test, deploy, and scale horizontally. But as requirements keep growing, management costs rise and the code base balloons. The application becomes bloated, complex, and unreliable; maintainability and flexibility drop while maintenance costs climb. The program falls into monolith hell: development becomes slow and painful, and agile development and deployment become impossible. Every team commits its changes to the same source code repository, and the path from code commit to production is long and arduous.

Problems:

**Excessive complexity:** take a million-line monolith as an example: the project contains many modules with blurred boundaries, unclear dependencies, uneven code quality, and chaotic layering, making the whole project bafflingly complex. Each change makes the code base more complex and harder to understand, until, step by step, the app becomes a giant, incomprehensible ball of mud.

**Slow development:** IDE tools slow down, building the app takes a long time, and such a large application takes a long time to start on every run. The edit-build-run-test cycle grows longer and longer, seriously hurting team productivity.

**Slow deployment:** more code means longer build and deployment times. Every feature change or defect fix forces a redeployment; a full deployment takes a long time, has a wide blast radius, and carries high risk. As a result, applications are deployed infrequently and with a high error rate.

**Limited scalability:** a monolithic application can only be scaled as a whole, not per business module.

**Poor reliability:** the program is so bulky that it cannot be tested fully and thoroughly, so bugs are more likely to reach production. All modules run in the same process with no fault isolation, and an error in one module can crash every instance.

**Barriers to technological innovation:** the team is stuck on the same technology stack for long periods, making it extremely difficult to adopt new frameworks and programming languages. Adopting or even trying new technologies is expensive and risky, because the application would have to be completely rewritten.

2. Distributed architecture

With the development of services, more and more product functions are required and the logic of service modules becomes more complex. As a result, single applications become bloated, and their maintainability and flexibility decrease. As a result, the development cycle becomes longer and the maintenance cost becomes higher. In this case, the system needs to be divided into a distributed system based on service function modules. The service modules are deployed on different servers. The data between the modules is exchanged through interfaces, and the load balancing structure is used to improve the system load capacity.

Architecture Features

 Reduce coupling: Split the modules and use interfaces to communicate, reducing the coupling degree between modules.

 Clear accountability: Divide the project into subprojects, with different teams working on different subprojects.

 Convenient expansion: To add functions, you only need to add a subproject and call the interfaces of other systems.

 Convenient deployment: You can flexibly implement distributed deployment.

 Improved code reuse: use distributed services to build a common service layer and reduce duplicated development.

Disadvantages: systems interact through remote communication, and the interface development workload increases.

3. Microservice architecture

Reference blog: [Microservice architecture and Service Mesh technical framework introduction]

A microservice architecture consists of loosely coupled elements with bounded contexts; it is an architectural style that decomposes an application's functionality into a set of services. Each service holds a focused, cohesive set of functional responsibilities, with the service as the unit of modularity. A service's API forms an impenetrable boundary: it cannot be crossed to reach the service's internal classes. The style gives applications higher maintainability, testability, and deployability, and improves development efficiency and scalability. Services are organized around the business rather than around technical concerns; they reflect business semantics and are self-contained, stateless, and bounded.

Define the microservice architecture

Start from the more abstract notion of system operations, refining the application's requirements into its key requests; system operations are the architectural scenarios that describe how services collaborate. Then decide how to decompose into services. There are several strategies: one defines services that correspond to business capabilities; another decomposes and designs services around the subdomains of domain-driven design, each with its own domain model. Both strategies decompose around the business rather than technical concepts, and the result is the same: an architecture of several services centered on the business. Finally, determine each service's API.

Each identified system operation is assigned to a service, which may implement it on its own or in collaboration with other services. Service decomposition must weigh network latency, self-containment, data consistency across system boundaries, and so on, using concepts from domain-driven design to eliminate so-called god classes. Each system operation's behavior is described in terms of the domain model, and each important system operation corresponds to a major scenario at the architecture level that deserves detailed description and special consideration.

Decomposing into microservices

Domain-driven design (DDD) is a methodology for building complex software, usually with an object-oriented domain model at its core. A domain model captures knowledge of a domain in a form suited to solving specific problems. It defines the vocabulary shared by the teams working in the domain, what DDD calls the ubiquitous language. The domain model maps closely onto the design and implementation of the application. For microservice architecture design, DDD contributes two especially important concepts: subdomains and bounded contexts.

Traditional enterprise architecture modeling tends to create a single model for the whole enterprise, with definitions of business entities such as customer or order that apply globally across the application. This kind of modeling breeds consistency problems, complexity, and confusion between teams. DDD avoids this by taking the opposite approach: it defines multiple domain models, each with a clear scope, one per subdomain. A subdomain is a part of the domain, a term for a slice of the application's problem space, identified much like a business capability. DDD calls the boundary of a domain model a bounded context, which includes the set of code that implements the model. In a microservice architecture, each bounded context corresponds to one service or a set of services, and each subdomain has its own domain model, as the sketch below illustrates.
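A hedged Java sketch of the idea (the ordering and shipping contexts and their fields are invented for illustration): each bounded context keeps its own Customer model, and the two are related only by ID rather than by a shared class.

```java
// Bounded context 1: ordering subdomain.
// Knows about credit, nothing about parcels.
class OrderingCustomer {
    final long id;
    final double creditLimit;

    OrderingCustomer(long id, double creditLimit) {
        this.id = id;
        this.creditLimit = creditLimit;
    }

    boolean canAfford(double orderTotal) { return orderTotal <= creditLimit; }
}

// Bounded context 2: shipping subdomain.
// Knows about addresses, nothing about credit.
class ShippingCustomer {
    final long id;
    final String deliveryAddress;

    ShippingCustomer(long id, String deliveryAddress) {
        this.id = id;
        this.deliveryAddress = deliveryAddress;
    }
}

public class BoundedContextDemo {
    public static void main(String[] args) {
        // The same real-world customer, modeled twice, linked only by ID.
        OrderingCustomer o = new OrderingCustomer(7, 100.0);
        ShippingCustomer s = new ShippingCustomer(7, "42 Main St");
        System.out.println(o.canAfford(30.0) + " -> ship to " + s.deliveryAddress);
    }
}
```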

Microservices architecture benefits

1) Sustainable delivery and deployment of large applications

Continuous delivery and continuous deployment are part of DevOps, which is a set of fast, frequent, and reliable software delivery practices, and efficient DevOps organizations typically face fewer problems and glitches when deploying software to production. The microservice architecture enables continuous delivery and deployment in three ways:

Have the testability required for CI and CD: Automated testing is an important part of continuous delivery and deployment. Each service is relatively small, making it easier to write and execute automated tests, and the application is less buggy.

Have the deployability required for CI and CD: Each service can be deployed independently of the others. If the person responsible for the service needs to deploy changes to that service, it can do so without coordination with other people. As a result, it is much easier to deploy changes frequently into production.

Achieve team autonomy (autonomous and loosely coupled): Build the organization as a collection of small teams. Each team is responsible for the development and deployment of one or more services, and can develop, deploy, and extend their services independently of all other teams, improving development efficiency.

2) Service is small and easy to maintain

Each service is smaller and easier for developers to understand; a smaller code base makes developers more productive, and since the whole application is composed of several microservices, it stays in a manageable state. Fast-starting services improve efficiency and speed up the R&D process.

3) The service can be independently extended

Supports both X-axis scaling (cloning instances) and Z-axis scaling (partitioning traffic), and each service can be deployed in whatever environment suits it.

4) Better fault tolerance

Better fault isolation is achieved: a memory leak in one service does not affect the other services, which can still respond to requests normally.

5) Technology stack is not limited

The technology stack can be chosen according to the characteristics of the business and the team. This is completely different from a monolithic architecture, where the initial technology selection severely limits later attempts to adopt new technologies.

Disadvantages of microservices architecture

Service splitting and definition challenges: splitting and defining services is more of an art form. Getting the service boundaries wrong can lead to a distributed monolith: a so-called distributed system that contains a bunch of tightly coupled services that must be deployed together, combining the drawbacks of both the monolithic and microservice architectures.

Inherent complexity of distributed systems: fault tolerance, network latency, data consistency, and distributed transactions all pose great challenges. Services must use interprocess communication, and they must be designed to handle the various situations in which remote services are unavailable or exhibit high latency.

High O&M requirements: in a monolithic architecture you only need to keep one application running; with microservices you must keep dozens or even hundreds of services running and cooperating, which brings O&M challenges. Successful deployment of microservices requires a highly automated infrastructure.

Coordinating more development teams: deploying functionality that spans multiple services requires careful coordination among teams and a release plan that orders services by dependency. This is very different from deploying multiple components in a batch in a monolithic architecture.

High interface adjustment cost: services communicate through interfaces. If a microservice changes an interface, every microservice that uses that interface must be modified.

Repetitive work: many services may need the same functionality, and when it is not factored out into its own microservice, it may be developed repeatedly. Shared libraries can solve this, at the cost of dealing with a multilingual environment.

Microservices architecture has become an important cornerstone for any enterprise whose business relies on software: a double-edged sword with both benefits and drawbacks. Some issues are unavoidable when using microservices, and each problem has multiple possible solutions with various trade-offs; no single solution is perfect.

4. Serverless architecture

Serverless offers low operating costs, simplified infrastructure operation and maintenance, improved maintainability, faster development, and so on.

Reference blog: [Trusted second half of cloud computing – Serverless]

V. Credible discussion on servitization framework

Microservices architecture is an architectural pattern that advocates the partitioning of a single application into a set of small services that coordinate with each other to provide ultimate value to users. Each service runs in its own separate process and uses a lightweight communication mechanism to communicate with each other. Each service is built around a specific business, and architecturally provides applications with better maintainability, testability, and deployability, as well as rapid iteration and delivery capabilities.

Note: this chapter assumes some basic knowledge of microservices architecture.

Reference blog: Micro Service Architecture and ServiceMesh Technical Framework introduction

Ms.huawei.com/km/blogs/details/8396531?l=zh-cn, [ServiceMesh – Istio traffic rate limiting introduction], [Cloud native – application service security under a zero-trust network]

1. Registry and configuration center

People often confuse the registry with the configuration center. One view holds that service registration data is itself a kind of configuration, and that explanation is not unreasonable. But the registry exists independently because its data carries business semantics describing relationships between microservices. The registry is completely independent of the configuration center: a stand-alone, highly available, data-consistent system. The design should be clear about the distinct purposes of the registry and the configuration center, distinguishing service registration/discovery from configuration registration, update, and notification.

Register/unregister: Save information about service providers and service callers;

Subscribe/unsubscribe: The service caller subscribes to the information of the service provider, supporting real-time push.
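As a minimal sketch of these four operations in Go (the interface and type names here are illustrative assumptions, not any particular product's API):

```go
package registry

// Instance describes one service provider endpoint.
type Instance struct {
	Service string            // business-level service name, e.g. "order"
	Addr    string            // host:port
	Meta    map[string]string // environment info, QoS hints, etc.
}

// Event is pushed to subscribers when provider membership changes.
type Event struct {
	Added, Removed []Instance
}

// Registry sketches the register/unregister and subscribe/unsubscribe
// operations described above.
type Registry interface {
	Register(inst Instance) error   // provider announces itself
	Unregister(inst Instance) error // provider leaves
	// Subscribe returns a channel on which membership changes for the
	// named service are pushed in real time; cancel stops the watch.
	Subscribe(service string) (events <-chan Event, cancel func(), err error)
}
```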

2. Decouple services from technologies

The more interaction between modules, the stronger their coupling and the worse their independence. Coupling is a measure of the degree of association between modules: the dependencies among them, including control relationships, call relationships, and data transfer relationships. The strength of coupling depends on the complexity of module interfaces, how modules are called, and how much data passes through the interfaces.

Business capabilities must be separated from service governance middleware capabilities; otherwise heterogeneous services will inevitably lead to non-standardized governance. Decoupling application from technology brings language independence for the application implementation (Java/Go/Python/C++, etc.), transparency of upgrades and maintenance, and extensibility and stability of modules.

A virtual gateway separates application from governance, allowing service applications and governance capabilities to be physically separated; this is done by deploying local agents.

The service mesh stresses the separation of execution and control: the control plane and the data plane are split apart. It implements the zero/low-intrusion principle for business processes and treats service governance capabilities as part of the protocol stack.
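A minimal Go sketch of this low-intrusion decoupling, assuming a hypothetical OrderClient port and a local-proxy address: the business code depends only on a plain interface, while discovery, routing, and policy live behind it (in a mesh, inside the sidecar):

```go
package main

import (
	"context"
	"fmt"
)

// The business side sees only this narrow, technology-neutral port.
type OrderClient interface {
	GetOrder(ctx context.Context, id string) (string, error)
}

// sidecarClient hides all governance concerns (discovery, routing,
// retries, flow control) behind the port; the business code never
// touches them. A real deployment would delegate to a local agent.
type sidecarClient struct{ localProxy string }

func (c sidecarClient) GetOrder(ctx context.Context, id string) (string, error) {
	// In a mesh this would be an HTTP/gRPC call to c.localProxy, which
	// applies routing and policy before the request reaches the provider.
	return fmt.Sprintf("order %s via %s", id, c.localProxy), nil
}

func main() {
	var client OrderClient = sidecarClient{localProxy: "127.0.0.1:15001"}
	out, _ := client.GetOrder(context.Background(), "1001")
	fmt.Println(out)
}
```

Swapping the implementation (direct call, mesh sidecar, test stub) then requires no change to business code, which is the point of the separation.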

Practice guidelines:

The relationship between the business application side and the modules within the service system platform should remain loosely coupled.

Avoid coupling between the business application side and the data plane communication module; the communication module should own all communication-related functions. At the level of service interaction, keep the business application lightweight and thoroughly apply the information expert pattern.

On the business application side, avoid splitting service interaction policy processing away from the registry.

On the business application side, avoid amplifying the resource cost of monitoring and propagating in-process status events.

The agility of the business application system is promoted by independence of development language and technical implementation, and by interface compatibility.

3. Servitized data model

The key data models of the system and the life cycles of data objects should be clearly defined at the beginning of design. The data model is an important part of system architecture design; the definition and use of model objects reflect the strengths and weaknesses of the architecture.

A service object model (SOM) describes a service as a whole, including basic service information, environment information, service capabilities, and QoS. A service object should be as stateless and self-contained as possible; for stateful services, an association to a datastore (memory/SSD disk) can be established.

Following the principle of business relevance, basic technical capabilities can also be published as a class of services using the same service object model, keeping the data object model consistent; the two differ only in object attributes and operational characteristics.

If, contrary to this principle, the service object model is atomically split into multiple subobjects used in different scenarios (for example, basic service objects, service interaction objects, business domain objects, and control policy objects), it will inevitably bring complexity and stability problems to the whole distributed system and increase the coupling between modules.
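A minimal Go sketch of such a unified, self-contained service object; all field names here (Name, Version, Env, Methods, QoS, Datastore) are illustrative assumptions:

```go
package som

// QoS captures the provider's declared capability limits.
type QoS struct {
	Protocol      string // e.g. "grpc"
	MaxConcurrent int
	Codec         string
	MaxMsgBytes   int
}

// ServiceObject sketches the service object model: basic info,
// environment, and capabilities kept together as one self-contained,
// ideally stateless description of the service.
type ServiceObject struct {
	Name    string            // business semantics, e.g. "order"
	Version string
	Env     map[string]string // cluster, zone, instance labels
	Methods []string          // e.g. "QueryOrder", "CreateOrder"
	QoS     QoS
	// For stateful services, an associated datastore reference
	// (memory / SSD) is recorded here rather than held as state.
	Datastore string
}
```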

4. Service registration and deregistration

The nature of a service reflects business semantics; it can include multiple methods and is self-contained. For example, an order service includes querying orders, creating orders, and so on. Each service instance registers with and deregisters from the registry, which acts primarily as a coordinator for discovering registered, usable services.

Service registration should be minimalist: a service instance simply registers and stays online. Only interfaces that provide services externally need to be registered; interfaces used for intra-process interaction need not be published externally.

When a business application registers one of its services into the registry, the service life cycle begins: the microservice interface on the business application side is declared available, though not yet reachable. In sidecar mode, service discovery must still be performed.

Consumers of a service declare what they need and define (or default) the policies governing how they use the service. The service provider defines the service's basic information and capability model (protocol, concurrency, codec, message size, etc.); the consumer's capability is bounded by the provider's capability output.

Based on how business domain services interact, clarify which services need to be registered and whether they reflect business semantics. Information in the registry should be limited to critical resources that need to be discovered. If a technical capability is registered from the business application side, it should be modeled in the object model as properties of the service (static or dynamic).

The registry is a key core component. Guaranteeing its stability, reliability, and high performance requires not only optimizing the registry itself, but also ensuring that related parties interact with it and transfer information in the most reasonable way.

Having business applications interact directly with the registry is best avoided: it sacrifices simplicity, scalability, and stability.

5. Platform service discovery modes

Service consumers and service providers have dynamic characteristics and need to register in the registry, including their addresses and other environmental information.

In the client-side discovery mode, discovery logic lives on the business application side. In a production cluster, every business process subscribes to the registry (Watch) to receive pushes about the services it depends on. The defect is that there are too many subscription/push points, each costing the registry resources: the more associated points, the more consumption.

In the communication sidecar mode, service interaction and routing are handled by the data plane, while the control plane adaptation layer provides service discovery, relieving both server and client of that responsibility. Compared with client-side discovery, the volume of subscriptions and pushes drops to roughly 1/N, greatly reducing pressure on the registry. (The service object model should be kept unique, avoiding a split between registration information and the communication model object.) The business application side is also further decoupled from other modules.

For basic services that must be registered in the registry, such as storage, combining on-demand queries with subscriptions reduces unnecessary registry subscriptions and event pushes.
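A small Go sketch contrasting the two modes; the local agent address and function names are illustrative assumptions:

```go
package main

import "fmt"

// Client-side discovery: every business process keeps its own watch
// on the registry (N processes means N subscriptions, N push targets).
func clientSideResolve(watchCache map[string][]string, svc string) []string {
	return watchCache[svc] // kept fresh by this process's own registry watch
}

// Sidecar discovery: the business process asks the local agent; only
// the per-node agent subscribes, cutting registry push fan-out to ~1/N.
func sidecarResolve(localAgent, svc string) string {
	// Illustrative only: a real agent would expose an xDS/HTTP API here.
	return fmt.Sprintf("resolved %q via agent %s", svc, localAgent)
}

func main() {
	cache := map[string][]string{"order": {"10.0.0.5:8080"}}
	fmt.Println(clientSideResolve(cache, "order"))
	fmt.Println(sidecarResolve("127.0.0.1:15000", "order"))
}
```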

6. Responsibilities of the control plane adaptation layer

The control plane adaptation layer connects to the communication sidecar module and manages the delivery of control instructions and the policies related to service governance: routing, load, flow control, circuit breaking, security, and so on.

The modules that interact with the adaptation layer include the registry, the communication sidecar process in each Pod, the status monitoring module, and the management control plane. The adaptation layer interacts with the registry for service discovery and service life cycle operations; obtains control and security policies from the management control plane; and obtains application-side event status from the status monitoring module, linking it with the sidecar.

The adaptation layer module and the business application program should have no substantive direct interaction and should remain decoupled; coupling increases system complexity. Decoupling can be enforced or delegated through API interfaces.

Because it subscribes to global service information and implements centralized control, the adaptation layer must be reliable and stable, with no single point of failure.

Ensure that policy and control information is delivered reliably.
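A minimal Go interface sketching these responsibilities; the method and type names are illustrative assumptions rather than a defined API:

```go
package adapter

import "context"

// Policy is a governance policy (routing, load, flow control, circuit
// breaking, security); the concrete shape is out of scope for this sketch.
type Policy struct {
	Kind string
	Spec map[string]string
}

// Adapter sketches the control plane adaptation layer's duties.
type Adapter interface {
	// Discover services via the registry on behalf of the sidecars.
	Discover(ctx context.Context, service string) ([]string, error)
	// PullPolicies fetches control and security policies from the
	// management control plane.
	PullPolicies(ctx context.Context) ([]Policy, error)
	// OnStatusEvent receives application-side events from the status
	// monitoring module and links them to the sidecar.
	OnStatusEvent(event string) error
	// Push delivers instructions/policies to a sidecar; delivery
	// must be reliable, per the guideline above.
	Push(ctx context.Context, sidecarAddr string, p Policy) error
}
```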

7. Service invocation and data communication

Business applications register business domain services, including basic service information, instance environment, and service capabilities (QoS), with the registry, starting the service life cycle. The communication mode carries the interaction between services, and a good architecture minimizes interaction between data communication modules.

After registering, a service announces a declarative API for its microservice component. In the sidecar architecture, service invocation is routed using information provided by the data plane, and the request is delivered to the service provider for processing.

Service interaction is a call to an interface. Instance name, category (requester, provider, etc.), and environment information are used for addressing across processes, while the interface method name is used for dispatch within the process; understandable names identify better than numeric identities. Follow the simplicity principle: rely only on static information and never on dynamic characteristics, reducing variable dependencies and coupling, especially in distributed scenarios, so that coupling between subsystems and between modules is minimized.

Communication links should be designed to minimize energy consumption, building links precisely where the scenario requires them. Avoid link-intensive interaction, large numbers of file handles, and poorly structured link topologies. This improves the overall stability of the system and reduces its overall resource consumption, including computing, memory, and the garbage collection anomalies caused by excessive consumption.

The data communication module, which must stay highly cohesive and loosely coupled, participates mainly in service governance, such as routing service information and load-balancing scheduling. Keep its responsibility single and decouple it from the business application side as much as possible. If the business application side takes on routing control, communication control responsibilities become scattered, adding complexity and harming horizontal and vertical scaling, business agility, and efficient delivery. Also avoid introducing strong associations with the registry and the status monitoring module.

External systems access a service either by obtaining its basic information from the service registry or by interacting with a contractual address. The data plane can route requests to service applications, implementing security, audit, load, and flow control in a centralized manner.

Depending on the scenario, the communication mode provides synchronous, asynchronous, one-way, and two-way requests. For protocols, the best fit among HTTP, gRPC, REST, and TCP/IP can be considered, and protocols should be pluggable.
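A minimal sketch of protocol pluggability in Go, assuming a hypothetical Transport interface and registration function:

```go
package transport

import (
	"context"
	"errors"
)

// Transport is one pluggable protocol implementation (HTTP, gRPC, TCP...).
type Transport interface {
	Invoke(ctx context.Context, addr, method string, req []byte) ([]byte, error)
}

var transports = map[string]Transport{}

// RegisterTransport plugs in a protocol implementation by name.
func RegisterTransport(name string, t Transport) { transports[name] = t }

// Invoke performs a synchronous request over the named protocol.
// Async / one-way variants could wrap this with goroutines or queues.
func Invoke(ctx context.Context, proto, addr, method string, req []byte) ([]byte, error) {
	t, ok := transports[proto]
	if !ok {
		return nil, errors.New("unknown protocol: " + proto)
	}
	return t.Invoke(ctx, addr, method, req)
}
```

Callers never name a concrete protocol type, so new protocols can be added without touching business code.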

To simplify external use, keep communication types and modes consistent, especially toward business applications. Improve the transparency of external interface operation and encapsulate processing details inside the interface.

Confine service governance functions such as link availability detection, isolation, and circuit breaking to the data plane sidecar module as much as possible.

.

8. Service governance and policies

Service governance is a very big topic; fully spread out, it would not fit in a few articles. Here is a quick look at the problems service governance addresses. Microservitization reduces complexity by dividing a complex system into several microservices, making them easier to use and maintain. Service connections, service registration/discovery, routing, circuit breaking, isolation, load balancing, rate limiting, degradation, access control, authentication and authorization, monitoring (logs, distributed tracing, alerting, etc.), A/B testing, canary release, and so on: these are the content of service governance.

The point here is that every aspect of service governance needs to be fully designed: clear governance objectives and data models; how governance policies are delivered; the best control points (with single responsibility) and their owning modules; control flow, event flow, and the critical paths of governance (reducing coupling); periodic activities during governance; the boundaries of governance; minimizing governance intrusion into the various modules; the resource consumption of governance; overall control of governance; transparency of interface use; and so on. Governance is the governance of services, and at the data-model level it should be integrated with the service object model as far as possible.

Service flow control: flow control capability must be available but used sparingly. Flow control should be expressed on the service plane (it reflects, to some extent, the capability limits of the provider), with the policy model attached to the service domain model. Policies (token, rate, concurrency, etc.) and control instructions should be pushed through the control plane adaptation layer. The best centralized enforcement point is the communication sidecar on the data plane, which can sense provider pressure (actively or passively) and transmit back pressure to the requester. Reduce coupling on the business application side and conduct flow control to the application through interfaces and linkage control. Flow control directly on the business application side can be supported, provided it remains decoupled from the platform's base modules.
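As a minimal sketch of such sidecar-enforced flow control in Go, using the token-bucket limiter from golang.org/x/time/rate; the rate and burst values, which in the text's design would arrive as policies via the adaptation layer, are shown here as plain arguments:

```go
package main

import (
	"fmt"

	"golang.org/x/time/rate"
)

// sidecarLimiter sketches flow control at the data-plane sidecar:
// the control plane delivers rate/burst, the sidecar enforces them
// and can push back on the requester when the bucket is empty.
type sidecarLimiter struct{ l *rate.Limiter }

func newSidecarLimiter(perSecond float64, burst int) *sidecarLimiter {
	return &sidecarLimiter{l: rate.NewLimiter(rate.Limit(perSecond), burst)}
}

// admit returns false when the request should be rejected (back pressure).
func (s *sidecarLimiter) admit() bool { return s.l.Allow() }

func main() {
	lim := newSidecarLimiter(2, 1) // policy values would come from the control plane
	for i := 0; i < 3; i++ {
		fmt.Println("request", i, "admitted:", lim.admit())
	}
}
```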

By adding instrumentation points and data monitoring according to the plan, problems in the system can be discovered, alerted on, and handled in a timely manner.

Link detection and tracing are implemented end to end, decoupling the application side from the communication data plane.

Log processing uses the Node as the convergence point for Pod, container, and process logs, merging them in a unified way for real-time analysis.

The system should automatically monitor and collect indicators, including health checks, in a multi-point, multi-dimensional, and multi-form way.

Monitoring of system state and events, and of external dependencies such as storage services, should decouple in-node monitoring points from out-of-node ones.
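A minimal Go sketch of such monitoring endpoints, with hypothetical paths (/healthz, /metrics) and a trivial counter standing in for richer indicators:

```go
package main

import (
	"fmt"
	"net/http"
	"sync/atomic"
)

var handled int64 // a trivial counter metric, scraped via /metrics

func main() {
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK) // liveness: the process is up
	})
	http.HandleFunc("/metrics", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "requests_handled %d\n", atomic.LoadInt64(&handled))
	})
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		atomic.AddInt64(&handled, 1)
		fmt.Fprintln(w, "ok")
	})
	http.ListenAndServe(":8080", nil)
}
```

A node-level agent would scrape these endpoints and converge the results, rather than every remote party polling every process.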

.

9. High availability status monitoring

Monitor the status of each subsystem, module, and component in the distributed system to support anomaly control, early-warning linkage, fault isolation, and migration recovery.

State detection within a single Node is collected and reported centrally by Node- and container-level monitoring agents, avoiding propagation of state information between Node-level agents. Given this structure, there is no need to build full Node-level links. This strengthens the cohesion of event monitoring within nodes and the loose coupling between nodes, greatly reducing resource consumption.

Event states monitored by a Node are sent to the platform's state collection center, a cluster of N+1 nodes, which detects abnormal events and alerts users when rules trigger. For abnormal event states, as well as their trigger points and surfaces, the design must be balanced comprehensively, minimizing the parties affected so as to minimize event disturbance. In principle, the module with absolute control responsibility for an abnormal event should make the linkage decision at the key point. The aim is to reduce energy consumption to a minimum while maximizing the entropy produced per unit of energy consumed.

To differentiate the source of an event among Node, container, and process, an event state machine and a real-time detection mode can be used.
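A tiny Go sketch of such an event state machine; the states, sources, and promotion rule (two consecutive misses before Failed) are illustrative assumptions:

```go
package main

import "fmt"

type State int

const (
	Healthy State = iota
	Suspect       // detected once; awaiting confirmation
	Failed
)

// Source differentiates where the event originated.
type Source int

const (
	NodeSource Source = iota
	ContainerSource
	ProcessSource
)

// next promotes Healthy -> Suspect -> Failed on repeated misses;
// any success resets the state to Healthy.
func next(s State, ok bool) State {
	if ok {
		return Healthy
	}
	switch s {
	case Healthy:
		return Suspect
	default:
		return Failed
	}
}

func main() {
	s := Healthy
	for _, ok := range []bool{false, false, true} {
		s = next(s, ok)
		fmt.Println(ProcessSource, s)
	}
}
```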

To further decouple application from technology, communication routing and information transmission should be concentrated in the stripped-out communication subsystem.

Plan how runtime exception events are reported and processed within the distributed architecture's control flow, data flow, and critical paths. Avoid treating abnormal incidents as the normalized processing mode, avoid full-volume data broadcasting, and avoid excessive or redundant design that adds complexity to the overall architecture and thereby reduces system stability and reliability.

.

10. Service release version and grayscale

With relatively small services, testing, publishing, orchestrating, deploying, and upgrading with the service as the unit becomes easier and more reliable. Each service can be deployed independently of the others, so deploying changes to production frequently is much easier. Ensuring the uniqueness of the service's domain data model and controlling by version reinforces the business-oriented semantics of the service and facilitates life cycle management. Data objects involved in service publishing, migration control, and so on should be reflected in the service object model.
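A minimal Go sketch of weight-based grayscale routing; the service names, versions, and the 10% canary weight are illustrative assumptions, and in the design above the weight would arrive as a governance policy:

```go
package main

import (
	"fmt"
	"math/rand"
)

// route sketches grayscale release: send canaryPercent of traffic to
// the new version, the rest to the stable one.
func route(canaryPercent int) string {
	if rand.Intn(100) < canaryPercent {
		return "order-service:v2" // canary
	}
	return "order-service:v1" // stable
}

func main() {
	counts := map[string]int{}
	for i := 0; i < 1000; i++ {
		counts[route(10)]++
	}
	fmt.Println(counts) // roughly 10% of requests hit v2
}
```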

Iterate and deliver software quickly and reliably, improving delivery agility and failure recovery efficiency. In traditional organizations, deployment frequency is low and delivery times are long; DevOps organizations release software often and have far fewer production environment issues.

Grayscale release and service instance migration should be as autonomous as possible, reducing excessive coupling to other modules.

Reduce unnecessary communication and interaction around operational control and target state; apply the simplicity-first principle.

Reduce the impact surface and propagation surface of control flow, policy delivery, information transmission, event notification, and so on, and choose the best processing points.

As far as possible, distinguish operational interaction terms from service-related terms.

11. High resource utilization, low resource consumption

Where a dependent component/module overlaps with an integrated part of the system itself, the principle should be reuse first and maximization of resource utilization.

Simplify operational control, event/state triggering, data transmission, and interface calls within the platform and between modules.

Provide sufficient resources for key/core modules and subsystems of the distributed system to keep them from becoming global hot spots.

Low resource consumption should be designed in from the top-level architecture: eliminate useless modules and components, or minimize resource allocation and maximize utilization through consolidation.

The runtime resource consumption of the distributed system should be planned and modeled, covering control, data, resources, and so on.

Avoid subscription, data push, event broadcast, and other modes of full network/region interaction in a distributed cluster environment.

… …

Vi. Architectural design measurement and vision

Good software architecture design safeguards product quality, especially the non-functional requirements customers often raise, while bad architecture wastes resources (human, material, and more). Successful architecture design follows certain principles and patterns; failed architecture design is usually caused by unmanaged, uncertain factors.

If, in more than 70% of cases, adding a new function requires touching a large number of files and revising existing code, the architecture must be very bad. With a good architecture, most of the underlying components would already be developed and independent of each other by this point; adding most new functionality is basically a combination of existing components (involving no internal changes), plus separate components specific to the new functionality.

If lots of changes are committed every time a feature is added, then you can guess the quality of the architecture!

1. Measurement of architectural design

The architecture serves the business. There is no optimal architecture, only the most appropriate architecture. Architecture is always judged by efficiency, stability, security, etc.

Business requirements perspective

 Solves current business needs and problems;

Efficiently meets business requirements, solving current business problems in an elegant and reusable way.

Forward-looking design meets the business efficiently for some time to come, so that the architecture does not change dramatically every time the business evolves.

Non-business requirements perspective

High availability: improve software availability step by step through black-box and white-box testing, unit testing, automation, fault injection testing, and test coverage.

 Documentation: Document the entire lifecycle, including but not limited to bugs and requirements.

 Extensibility: Design software with low coupling in mind, and abstract where appropriate. Facilitate feature changes, additions, and iterations of application technology, and support architectural refactoring when necessary;

High reuse: to avoid repetitive work and reduce costs, we hope to reuse previous code and design. This depends heavily on the architectural environment;

Security: the data generated in an organization's operation has commercial value, and ensuring data security is urgent, to avoid scandals like the various "XX-gate" leaks. Encryption and HTTPS are common methods.

Software architecture should embrace change, be stable, and be easy to maintain.

Scalability: the service can scale, and the cost of scaling is reasonable. As load grows, the system can be expanded to meet demand without degrading service quality;

 High availability: Although some hardware and software may fail, the entire system must be available 24 hours a day through redundant software and hardware.

 Manageability: The entire system may be physically large, but it should be easy to manage. You need to develop management tools.

Cost effective: the architecture needs to be designed with ROI in mind; the system should be economical and affordable to implement. If an architecture is good but costs a fortune, it's not necessarily the right architecture.

Evaluation criteria for system architecture

1. System performance

2. Reliability (fault tolerance/robustness)

3. Availability

4. Security

5. Modifiability (maintainability, extensibility, restructurability, portability)

6. Functionality

7. Interoperability

2. Conway's Law in reverse

For large and complex applications, microservice architecture is often the best choice. However, in addition to having the right architecture, successful software development requires some work on the organizational, development, and delivery processes; architecture, process, and organization are closely interrelated.

To deliver software effectively with a microservices architecture, Conway's law must be considered: an organization that designs a system will produce a design that mirrors the organization's communication structure. There is thus an implicit mapping between organizational structure and system architecture. So apply Conway's law in reverse: design your organization so that its structure corresponds to the structure of the microservices. This ensures that development teams are as loosely coupled as the services.

Several small teams are more efficient than one large team. The microservices architecture gives teams a degree of autonomy: each team can develop, deploy, and operate the services it owns without coordinating with other teams. When a service fails or misses its SLA, responsibility is clear. The development organization also scales better: capacity is added by adding teams, and if a team grows too large, it is split along the services it owns. Because teams are loosely coupled, the communication overhead of large teams is avoided, so people can be added without hurting productivity.

3. Vision of architectural design

A system architecture can describe the software as a whole, covering every aspect of it, but each design detail still has to be considered in isolation, which can lead to inconsistencies among details and between details and the architecture. The probability and frequency of design conflicts between parts of the architecture are directly proportional to team size and inversely proportional to the frequency and effectiveness of communication. When a design conflict between modules causes the software to fail, we need to sit down and take a good look at what is going on.

Establish an architectural vision that provides a global view of the software, covering all its important parts, defining the responsibilities and relationships of those parts and the principles the design must satisfy. The vision's design comes from requirements, the requirements that address the fundamentals of the system: for example, whether the system is characterized as interactive or distributed will influence the design of the architectural vision. The vision should also satisfy characteristics such as simplicity, extensibility, and abstraction. Put simply, the architectural vision is a miniature of the architectural design. Since the vision is discussed in every iteration, it keeps changing as a whole, and because it represents the design of the architecture, the evolution of the vision represents the evolution of the architectural design.

An architectural vision is relative to a scope; it makes sense to talk about one within the scope of a particular piece of software, whether globally or for a submodule. Within that scope, once the vision is established, no design may conflict with it. This is very important and is the greatest use of the architectural vision: with such a guarantee, the consistency and effectiveness of the design are assured, and any added design can be incorporated into the original architecture, making the software more complete rather than more dangerous.

The global architectural vision evolves, is modified, and is refined as iterations proceed through the development cycle. An architectural vision at the sub-module or sub-problem level is essentially the same as the global vision and must not be violated.

In practice, the overall vision is formulated jointly by the design team, while sub-module-level visions can be delegated to design sub-teams, with reviews still involving the whole design team. This ensures there are no conflicts or blank spots between sub-modules, and lets each sub-team learn from the others' design experience. Generally, when the design team reaches agreement, the overall architectural vision is basically complete.

Coupling between sub-modules (and sub-problems) deserves attention. In general, coupling between sub-modules is relatively small, while coupling around shared sub-problems is relatively large: functions such as permissions and accounting are used by every module. Their interfaces are therefore contractual: they cannot be modified casually, and changes would have unexpected impact on other teams, so the formulation and modification of contract interfaces must be approved by the design team. In addition, some global sub-problems in the system are best addressed within the global vision.

Vii. Sense and understanding of the framework system

1. Common pitfalls of architectural design

Architecture is for architects; business people and developers don't have to care about it;

 Make critical decisions too early;

Trying to design the architecture in one step. There is no best architecture, only the most appropriate one; don't try to get it all right in a single pass;

Paying for the future: don't over-invest in expansion that may never come; but if the business model and application scenario boundaries are clear, future scalability should be properly considered;

 Omitted key constraints and nonfunctional requirements;

Lofty designs that are detached from reality;

Head-down immersion in the work, lacking foresight;

Technology for technology's sake: technology serves the business, nothing more. In technology selection and architecture design, chasing new technology regardless of reality can make the architecture harder and harder to sustain. Cost, time, and personnel should be weighed comprehensively;

Believing you don't need to design the architecture for scalability.

2. The overall direction of the architecture must be correct

Architecture design is one of the key links in software success or failure: it determines the overall quality of the software and hence customer satisfaction, as well as its scalability, maintainability, stability, and performance. The general direction of the architecture is the conceptual architecture; getting the conceptual architecture right means the architecture design is already half successful. Differences in conceptual architecture between a product and similar products determine the direction of the architecture's subsequent development. Conceptual architecture design does not concern itself with specific interface definitions and implementation details; it focuses on the overall architectural pattern, technology selection, and so on. It is the outline plan and overall guiding strategy of architecture design.

A wrong architectural direction leads to instability and complexity in subsequent design and development iterations, a lack of stability and extensibility, and fundamental refactoring that consumes resources excessively and unnecessarily, which no project team wants to face.

3. Reference principles for system refactoring

Refactoring corrects discovered and potential problems; it can be called a debt-paying behavior, using better methods to correct problems in previous code and design, minimizing the possibility of product problems, and lightening the system's operational burden. Examples include code optimization, eliminating duplication, fixing code violations, and resolving module coupling problems. The whole system can be divided into many sub-modules and refactored one by one, finally completing the refactoring of the whole system: divide and conquer.

Ensure that refactoring is applied only where it is needed: designs that require it, changed requirements, or improvements to the original design toward a good, concise implementation. A messy piece of code does not need refactoring if it never has to be modified; refactoring is only valuable when you need to understand how the code works, and if rewriting is easier than refactoring, no refactoring is necessary. At the architecture level, refactoring is roughly equivalent to improving the architecture without changing external interfaces. In practice, unless you are very experienced, keeping all software interfaces unchanged throughout development is very difficult.

Refactoring is an excellent way to improve code. Although pursuing non-duplicated code is hard, the process effectively improves a team's code quality; each iteration simplifies the system's implementation. Promoting, even semi-mandating, refactoring within teams helps spread good design ideas and improves the overall architecture of the software. Refactoring involves analysis, design patterns, and good practices, and it depends on other good practices such as code review and test-first development.

4. The objectives of system refactoring should be clear

The motivation and goal of refactoring should be clear: what problems are being solved, why they occurred, and how they will be solved. Is the solution thorough? Look at the problem from a higher vantage point; the height of the architecture and of one's ability determines the thinking, approach, and methods. If solving one problem creates more problems, is the design and development worth doing? Local optimization can cause global damage, and any optimization beyond the bottleneck is an illusion. Wasting resources and effort undermines the work of the architecture and platform and is irresponsible to both the company and the individuals involved.

5. The necessity of building a team review system

The theoretical basis of team design is group decision-making. Compared with individual decisions, its biggest advantage is more complete conclusions; its costs are extra communication, lower decision efficiency, and unclear responsibility. But well-organized group decision-making can play a big role in architecture design.

System designs need to pass a team review meeting, so that fixing one problem's design does not corrupt the system and cause further development, delivery, and stability problems. Review is an important means of avoiding design mistakes and can be built into the architecture design process. Reviews should focus on the classification of coarse-grained modules/components and their relationships; as later refactoring and stability patterns show, keeping coarse-grained modules/components stable facilitates refactoring and improves the architectural model.

6. System refactoring is a review of the original architecture

Refactoring is a comprehensive reconsideration of the architecture's reasonableness, foresight, and adaptability to business agility. Platform architecture, especially distributed system architecture, should be planned and refactored from an overall, global perspective. When problems appear in key basic modules and subsystems (scalability, stability, ease of release, resource consumption, etc.), examine whether the original architecture's design is reasonable, and be clear about the characteristics of the languages (C/C++/Go/Java/…) the architecture is hosted on and the problems to avoid. Architectural issues should be addressed at the architectural level, sooner rather than later.

7. Software architecture design principles should not be vague

The quality of every code module matters, as do the responsibility boundaries, overall operation, cooperation efficiency, stability, extensibility, reliability, and fault tolerance of each subsystem in the distributed system. Software design principles and patterns are best practices to be deeply understood, mastered, and observed, but not blindly followed: violating a principle always has a cost, and designers should be aware of it and compensate flexibly and in time, weighing the actual business, time, resources, and team. Although these design principles were proposed for object-oriented design and programming, they apply not only to the objects and structures inside a system but also, to a degree, to large-scale system architecture.

8. Subtractive thinking is preferred in system refactoring

In software system design, development, refactoring, and other engineering activities, subtract whenever you can instead of adding. When the system has problems or requirements change, first mine the potential of the existing system; refactoring does not mean adding modules. Adding is inertial thinking: any added code or module brings new problems to face. Keep the system's architecture simple, clear, scalable, stable, highly cohesive, and loosely coupled, with maximum resource utilization and minimum event disturbance, ….

Subtract code: minimize features, and remove or comment out code when in doubt. Many features may never be used; just leave an extension interface for them. Combined with system requirements, the effective and reasonable use of design ideas and patterns can make the program structure more reasonable and the code clearer, eliminate redundancy, and reduce code smells.

9. Reduce dynamics within the platform architecture

Dynamic things are hard to capture, and a running system is dynamic. But for usage modes, interface forms, basic data, configuration parameters, and interaction modes that are clear from the start, avoid further change. For example, if the system can declare a certain piece of information unique by convention at the outset, do not additionally generate a unique ID on top of it: because the ID is dynamically generated, later strong dependence on that ID casts doubt on the system's scalability and introduces strong coupling between modules.

10. Avoid over-design of system architecture

Architecture design often slides into over-design: building a complex design for changes that never happen. Over-design wastes resources and increases the effort and difficulty of development. The system needs to consider scalability, maintainability, and so on, but must not over-design; one must stand at the top level of the design to determine which designs are excessive and avoid them.

The stability of a system/platform generally goes through a period of turbulence, perhaps spanning several iterations, before flattening out. If later releases and commercial deployment still fail to reach the designed steady state and constant refactoring is needed to meet demand, the project's failure is only a matter of time. Large structural errors inevitably make subsequent design and development iterations complex, undermine stability and extensibility, and invite more unreasonable design.

11. Introduction of system associated middleware

When a project introduces middleware, whether external or in-house, its functionality must be clearly understood. Use it within reasonable limits: follow the minimum-dependency principle, introduce it for key basic features, and weigh and evaluate any use of advanced features comprehensively. If the introduced middleware becomes a key support of the system, use it in its optimal scenario and guard against misuse (for example, middleware A's strength is one core capability, yet the system leans on a weaker secondary capability instead). Where the middleware's own dependencies duplicate parts already integrated into the system, reuse takes priority.

12. Short-term delivery and evolution direction for refactoring

Short-term architectural refactoring that affects or corrupts the overall architecture, scalability, or stability should be detected and stopped early. Evolutionary iteration keeps the overall direction of the architecture unchanged; evolution should be designed and implemented on the principles of single responsibility, high cohesion, low coupling, and simplicity first.

13. Architecture simplicity is not implementation simplicity

It would be a mistake to assume that a simple architecture is easy to design. A simple architecture does not mean a simple implementation; simple architectures demand great effort and technical expertise.

14. The architect’s responsibility does not end with blueprint delivery

The architect hands blueprints to the builders, who follow them to create the building exactly as drawn. Trying to use this pattern in software development, however, is deadly. How can architects know where the "ground" is without going to the front line? How can the design land safely?

One of the most common mistakes architects make is separating design from code. Even if an architecture is considered perfect during the design phase, problems of one kind or another arise during coding that complicate its implementation; where there is a bad smell, refactoring techniques can help identify it. Involve the designer in writing core code or conducting code reviews to ensure the coders truly understand the intent of the architecture.

15….

The core of architecture design is methodology, and simple design does not mean less effort: it usually requires abstracting the real world, which looks simple but demands deep business and systems knowledge and strong design skills. Simplicity is therefore one of the programmer's goals.

Patterns are a kind of guidance that help produce good designs with half the effort. Patterns are a cornerstone of object-oriented design, but they are often a source of over-design. Early in design, pay less attention to applying patterns and more to meeting requirements; as the design iterates and evolves, introduce patterns through refactoring to extend or evolve the foundation of the design, improving flexibility and avoiding over- or under-design.

… …

Viii. The last

Software design is an art, the art of demarcating boundaries.

Software design does not only belong to programming; it is more like a kind of artistic creation in thought…

Good architectural design requires an architectural purpose and direction,

It requires the architect's overall planning and deep grasp of requirements, as well as abstract and evolutionary architectural design thinking.

It takes a lot of hard work and attention to detail, and a lot of foresight,

Data models need to be accurate, complete, standardized, and consistent,

You need clear boundaries, you need single responsibilities for modules, you need an information expert model,

The system needs high cohesion, low coupling, the most simplified communication link relationship,

We need to work on minimising energy consumption, maximising the entropy produced by consuming a unit of energy,

Need for systematic organization design, need team design, collaboration…

.

The quality of the software system depends more on the architecture.

It depends on the height and depth of design thinking: abstraction and divide-and-conquer, high cohesion and low coupling, simplicity first, pattern support, evolution and iteration…

Credible architecture, simple architecture, aesthetic art!
