On Programming Paradigms

Authors: Zhang Xiaolong, Yi Yan

Zhang Xiaolong works at ZTE, where he is one of the company's top ten Agile coaches, a senior software architect, and the author of Gomonkey. He has more than ten years of experience in software architecture and development. In recent years, he has focused on the design and development of PaaS and other large-scale platform software, with particular interest in DDD and microservices.

The term "programming paradigm" first appeared in Robert Floyd's Turing Award lecture in 1979. A programming paradigm is a programmer's view of how programs should be built and executed, and it is closely related to how software is modeled and to architectural style.

There are three major programming paradigms:

  • Structured Programming

  • Object-oriented Programming

  • Functional Programming

The relationship between these programming paradigms is as follows:

[Figure: the relationship between the three programming paradigms]

If you already understand the relationship between the programming paradigms in the figure above, there is no need to read further. Otherwise, it is worth reading this article through, skipping any chapters you are already familiar with.

It is well known that computers are built on the Turing machine model. Initially, programmers fed instructions and data into the computer on paper tape; the computer executed the instructions and completed the calculation. Later, programmers wrote programs (instructions plus data), loaded them into the computer, and the computer executed them. Today, software has become very complex and very large in scale, and people use it to solve problems in all kinds of domains: communications, embedded systems, banking, insurance, transportation, social networking, shopping, and so on.

There is a big gap (What, How, Why) between domain problems and the Turing machine model, and this gap is where programmers do their main work. Programming paradigms are the base of the programmer's thinking: they determine the design elements and the structure of the code. Programmers map domain problems onto a programming paradigm and then implement that mapping in a programming language. The transition from programming paradigm to Turing machine model is handled by the compiler; the higher the thinking base sits, the less work is left to the programmer.

You may be wondering: why are there multiple programming paradigms? In other words, why do programmers need multiple thinking bases instead of just one?

The thinking base depends on how the programmer looks at the world, which touches on both philosophy and psychology. When programmers develop software, they simulate the real world inside the computer: each programmer is, in that moment, a creator, rebuilding the world of a specific domain inside the machine, so how one looks at the world carries a taste of philosophy. What is the smallest structure in this virtual world? How do the structures relate to each other? How does the virtual world build up, layer by layer? As science and technology evolve, the way people look at the world changes (biology moved down to the cell, physical science down to the atom), and so does the base on which programmers think when they model the world.

The programmer's simulation of the world must ultimately run on a Turing machine, and it must do so economically: as cheaply as possible. Resources are always limited, performance is always constrained, and different programming paradigms have different strengths and weaknesses. Programmers therefore need multiple thinking bases, so they can make trade-offs, and even combine paradigms, when solving domain problems.

To better understand programming paradigms, let's review their brief history.

A brief history of programming paradigms

Machine language, which expresses instructions as binary sequences of zeros and ones, is very arcane. Assembly language, which expresses instructions as mnemonics, is an improvement over machine language, but writing programs in it is still painful. Assembly language is translated into machine language by an assembler, and machine language can be turned back into assembly language by a disassembler. Assembly and machine language correspond one to one; both are low-level, machine-oriented languages, closest to the Turing machine model.

From the perspective of structured programming, machine language and assembly language also have a programming paradigm: unstructured programming. Goto statements flew around, and programs were extremely difficult to maintain. It was later agreed that the goto statement was harmful, and it was removed from programming language design.

With the continuous development of computer technology, people began to seek machine-independent, user-oriented high-level languages. A program written in a high-level language can run on any machine model for which that language has a compiler. The first widely used high-level language was Fortran, which effectively lowered the barrier to programming and greatly improved programming efficiency. Then came C, which provides a more appropriate abstraction of the computer, shielding many hardware details; it is the typical representative of structured programming languages and is still widely used today.

With the advent of high-level languages and the growing scale of the programs people develop, how to organize programs became a new challenge. The language that rode C's wagon to bring object-oriented design into the mainstream was C++, which is fully compatible with C. For a long time, C++ was the industry's dominant programming language. Later, as hardware power greatly increased, the Java language came to the fore. Java assumes an open code space for programs and runs on the JVM; it supports object orientation on the one hand and garbage collection (GC) on the other.

It is not hard to see that the development of programming languages is a process of gradually moving away from computer hardware and toward the domain problems to be solved. The future of programming languages therefore lies in exploring how to better solve domain problems.

These languages trace only the mainstream path of programming language development, but a less mainstream path has been developing all along: functional programming languages, represented by Lisp. First, the main theoretical basis of functional programming is the Lambda calculus, which is Turing complete. Second, functional programming embodies abstract algebraic thinking, closer to modern natural science: it explains the world in a formalized way and deduces the world through formulas, which is extremely abstract (for example, F = ma). Many people on this path are more academic in style, focusing on the elegance of the solution and on building abstractions layer by layer. They also explored more possibilities; garbage collection, for instance, was pioneered here. But being far from the Turing machine model, functional programs had no direct hardware support and ran poorly, so for a very long time this path remained a small game within academic circles, and functional programming was considered an engineering-immature paradigm. When hardware performance was no longer a hindrance and solving domain problems became ever more important, functional programming finally joined the mainstream of programming language development. Other factors also contributed to its popularity, such as multicore CPUs and distributed computing.

Programming paradigms are abstract; programming languages are concrete. Programming paradigms are the ideas behind programming languages and are embodied through them: the worldview of a paradigm shows up in a language's core concepts, the methodology of a paradigm shows up in a language's expression mechanisms, and a language's syntax and style are closely related to the paradigms it supports. Although the relationship between languages and paradigms is many-to-many, each language has its own dominant paradigm: in C it is structured programming, while in Java it is object-oriented programming. Programmers can break through these "dimensional walls" and incorporate good elements from other paradigms, as the Linux kernel code does with object-style design in C. Whether it is object-oriented programming introduced into a structured language or functional programming introduced into an object-oriented language, applying multiple paradigms within one program has become an ever more obvious trend. And it is not just a matter of design: more and more languages are absorbing features from other paradigms. C++ has supported Lambda expressions since C++11, Java since Java 8, and newer languages such as Scala, Go, and Rust have supported multiple paradigms from the start.

From structured programming to object-oriented programming and then to functional programming, we move further and further away from the Turing machine model, becoming more and more abstract and closer and closer to domain problems.

Structured programming

Structured programming is also known as procedural programming or process-oriented programming.

The basic design

In the era of low-level languages, programmers thought directly in terms of instructions and wrote code according to their own logic, with data potentially shared between instructions. The most convenient style was to goto whatever piece of logic needed to execute next, then goto somewhere else. As code grew, it became difficult to maintain; this is unstructured programming. E. W. Dijkstra proposed structured programming in 1969: abandon the goto statement and center the design on modular decomposition, dividing the software system into a number of mutually independent modules so that completing each module becomes simple and clear. This laid a good foundation for designing larger software. In the structured view, any algorithm can be implemented through a combination of three basic control structures: sequence, selection, and loop.

Structured programming is mainly manifested in the following three aspects:

  • Stepwise refinement, from the top down. Treat programming as an evolutionary process, dividing the analysis of the problem into several levels, where each new level is a refinement of the previous one.

  • Modularity. The system is decomposed into modules, each implementing a specific function; the final system is assembled from these modules, which exchange information through interfaces.

  • Structured statements. Only sequence, selection, and loop statements are allowed within each module.

Structured programming handles problems using the computer's way of thinking, separating data structures from algorithms (program = data structure + algorithm). Data structures describe how the data to be processed is organized, while algorithms describe the specific steps of the computation. We implement the algorithms step by step as procedures (functions) and call them one by one, as in the sketch below.
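
To make this concrete, here is a minimal structured-programming sketch in Go (an illustrative example, not from the original article): the data structure and the algorithm are kept separate, and the algorithm is built only from sequence, selection, and loop.

```go
package main

import "fmt"

// Data structure: describes how the data to be processed is organized.
type Scores struct {
	Values []int
}

// Algorithm: a procedure that walks the data step by step.
func Average(s Scores) float64 {
	if len(s.Values) == 0 { // selection
		return 0
	}
	sum := 0
	for _, v := range s.Values { // loop
		sum += v
	}
	return float64(sum) / float64(len(s.Values)) // sequence
}

func main() {
	fmt.Println(Average(Scores{Values: []int{80, 90, 100}}))
}
```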

Of the three major programming paradigms, structured programming is closest to the Turing machine model, and most people start with it when they learn to program. Design in structured programming considers two dimensions: instructions and state (data). On the instruction side, the process is first decomposed into procedures, and the whole computation is built up from the relations among them, corresponding to algorithm (flowchart) design. On the state side, instance data is placed in the module's static data area as global variables, corresponding to data structure design.

Architectural style

Structured programming tends to be low-level and generally suits system software that pursues determinism and performance. Such software is statically planned, its requirements change infrequently, and it suits parallel development by many people. Once the software is divided into layers and modules and the APIs between modules are fixed, the teams can start developing simultaneously: each team designs its data structures and algorithm flows, and integration and delivery happen within the scheduled time. A layered, modular architecture supports large-scale parallel development and leans toward static planning of development and delivery. The dependency direction between layers is constrained (a layer may depend only on the layers below it), but the dependencies among modules within the same layer are not, so modules often become interdependent. The result is coarse-grained tailoring, weak reusability, and a weak response to change.

Advantages of structured programming:

  • Close to the Turing machine model, it can fully mobilize the hardware and offers strong control. Everything from the hardware up to the OS is layered on the Turing machine model; being closer to it allows structured programming to exploit the underlying capabilities and stay as controllable as possible.

  • The flow is clear. Starting from the main function, you can follow the code all the way to the end.

Disadvantages of structured programming:

  • Global access to data brings high coupling complexity, poor local reusability, a weak response to change, and poor module testability. It is difficult to reuse a procedure alone: you must reuse, together with it, the global data it touches, the other procedures related to that data (life-cycle association), and other data (pointer-variable association). Because these relationships are implicit, they can only be uncovered by chasing through the code bit by bit. Likewise, it is hard to modify a single procedure; all associated procedures usually have to change in step, the classic "shotgun surgery". Finally, data coupling between modules and the high cost of stubbing make modules difficult to test in isolation.

  • As software scale grows, structured programming becomes a rigid way to organize programs. Its closeness to the Turing machine model means its abstraction power is weak: it is far from domain problems, domain concepts have no direct mapping in the code, and large-scale software becomes difficult to organize and manage.

As just mentioned among the advantages, structured programming is close to the Turing machine model, can fully mobilize the hardware, and offers strong control. Why do we need this control? If you have done performance optimization for embedded systems, you know how important control is. Whether you are optimizing binary size, memory usage, or runtime efficiency, if you can think from the hardware's point of view about how it runs best, the gap to the Turing machine model is small and it is easy to find a good optimization and exert strong control. With many layers of abstraction in between, good optimizations are hard to find.

Besides performance, determinism also matters for system software. In 5G, the system may require an end-to-end delay of no more than 1 ms; we cannot ship 0.5 ms in 80% of cases and 2 ms in the other 20%. Nor can we sell a piece of hardware promised to support 2000 users that supports 3000 users 80% of the time but only 1000 users 20% of the time. Static planning is highly desirable in some system software. This determinism requires a good static decomposition onto the underlying Turing machine model, mapping our programs, from memory through to instructions and data. Because structured programming sits close to the Turing machine model, the mapping gap is small, and such determinism is easy to achieve through static planning.

Object-oriented programming

With the growing variety and scale of software, people hoped to reuse and tailor software at a finer granularity.

The basic design

Take the global data apart and place each piece of data, together with the methods tightly coupled to it, inside a logical boundary called an object. Users can only access an object's public methods, not the data inside it. Objects naturally encapsulate data and methods within a logical boundary and can be reused as a whole, without any tailoring or implicit association.

People began to map domain problems to entities and relationships (program = entities + relationships) rather than to data structures and algorithms (procedures). This is object-oriented programming, with encapsulation, inheritance, and polymorphism at its core.

Encapsulation is the foundation of object orientation: it brings closely related information together into a logical unit. We hide data, encapsulate based on behavior, minimize interfaces, and do not expose implementation details.

There are two kinds of inheritance: implementation inheritance and interface inheritance. Implementation inheritance looks at things from the subclass's perspective, whereas interface inheritance looks at things from the superclass's perspective. Many programmers use inheritance to reuse code, but it is not a good mechanism for that; composition is recommended instead.

For object orientation, polymorphism is very important, and interface inheritance is the common way to implement it. Polymorphism makes a design more flexible and better able to adapt to future change. Programming that uses only encapsulation and inheritance is merely object-based programming; only with polymorphism does it become object-oriented programming. It is fair to say that the core of object-oriented design is polymorphic design, as the sketch below shows.
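
As a hedged illustration, the Go sketch below shows encapsulation (data hidden behind constructors and methods) and polymorphism (callers depend on an interface rather than on concrete types). The names Shape, circle, and rect are hypothetical, not from the article.

```go
package main

import (
	"fmt"
	"math"
)

// Encapsulation: the radius is hidden behind a constructor and methods.
type circle struct{ r float64 }

func NewCircle(r float64) *circle { return &circle{r: r} }
func (c *circle) Area() float64   { return math.Pi * c.r * c.r }

type rect struct{ w, h float64 }

func NewRect(w, h float64) *rect { return &rect{w: w, h: h} }
func (r *rect) Area() float64    { return r.w * r.h }

// Polymorphism: callers depend on the interface, not on concrete types.
type Shape interface{ Area() float64 }

func TotalArea(shapes []Shape) float64 {
	total := 0.0
	for _, s := range shapes {
		total += s.Area()
	}
	return total
}

func main() {
	fmt.Println(TotalArea([]Shape{NewCircle(1), NewRect(2, 3)}))
}
```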

Object-oriented modeling

After object-oriented programming emerged, programmers mapped domain problems to entities and relationships, and the subsequent mapping to the Turing machine model was left to the compilers of object-oriented languages. The question became: how do you map domain problems to entities and relationships efficiently and succinctly? This is where UML (Unified Modeling Language) emerged, a standardized modeling language composed of a set of diagrams. Object orientation thus greatly advanced software modeling. For programmers new to UML, it is recommended to master at least two diagrams, class diagrams and sequence diagrams:

  • Class diagrams are static views, representing classes and their structure

  • Sequence diagrams are dynamic views, representing objects and their interactions

Software design generally starts from the dynamic view: relatively fixed patterns in the dynamic interactions sink down into the static view, forming classes and structure. When you later read the code, the classes and structure already tell you something about the objects and their interactions, and they constrain and validate the relationships between them. Object-oriented modeling generally proceeds in four steps:

  • Requirements analysis modeling

  • Object-oriented Analysis (OOA)

  • Object Oriented Design (OOD)

  • Object-oriented Coding (OOP)

In the OOA phase, analysts produce the analysis model; similarly, in the OOD phase, designers produce the design model.

[Figure: the analysis model produced in OOA and the design model produced in OOD]



The separation of the analysis model from the design model leads to a mismatch between the business model in the analyst's mind and the one in the designer's mind, with a translation mapping between them. As refactoring and bug fixes accumulate, the design model evolves and diverges ever further from the analysis model. Sometimes the analyst thinks a requirement is easy from the perspective of the analysis model while the designer finds it hard from the perspective of the design model, and each side struggles to understand the other's model. In the long run there is a fatal disconnect between the two models, and the knowledge gained in either activity does not flow to the other.

In 2004, Eric Evans published Domain-Driven Design: Tackling Complexity in the Heart of Software (DDD), which abandoned the separation of analysis model and design model in favor of a single model serving both purposes: the domain model. The real complexity of many systems lies not in the technology but in the domain itself, in the business users and the activities they perform. Without a deep understanding of the domain at design time, and without expressing complex domain logic clearly in the form of a model, no matter how advanced and popular our platforms and infrastructure are, the true success of the project cannot be guaranteed.

DDD is an evolution of object-oriented modeling that focuses on building the right domain model:

[Figure: DDD as an evolution of object-oriented modeling]



The essence of DDD is the division and control of boundaries. There are four boundaries:

  • The first boundary, in the problem space, is the separation of subdomains: the core domain, supporting domains, and generic domains

  • The second boundary, in the solution space, is the decomposition into bounded contexts (BCs); collaboration between BCs is expressed through context mapping

  • The third boundary is the separation of business complexity from technical complexity within a BC, forming a layered architecture of user interface layer, application layer, domain layer, and infrastructure layer

  • The fourth boundary is the introduction of the aggregate, the smallest design unit, at the domain layer, which effectively guards the completeness and consistency of the domain model. Around the aggregate sit design elements such as entities, value objects, domain services, factories, and repositories (a minimal sketch follows this list)
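
As a rough illustration of the fourth boundary, here is a minimal, hypothetical aggregate in Go: the aggregate root is the only entry point for modification and guards its own invariants. The names Order and OrderLine are illustrative, not from the article.

```go
package order

import "errors"

// Value object: immutable once created.
type OrderLine struct {
	SKU   string
	Qty   int
	Price int // in cents
}

// Aggregate root: the only entry point for modifying the aggregate.
type Order struct {
	id    string
	lines []OrderLine
}

func New(id string) *Order { return &Order{id: id} }

// AddLine enforces the aggregate's invariants in one place.
func (o *Order) AddLine(l OrderLine) error {
	if l.Qty <= 0 {
		return errors.New("quantity must be positive")
	}
	o.lines = append(o.lines, l)
	return nil
}

// Total is derived state; callers never touch o.lines directly.
func (o *Order) Total() int {
	sum := 0
	for _, l := range o.lines {
		sum += l.Qty * l.Price
	}
	return sum
}
```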

Design principles and patterns

There are many design principles; the ones programmers use most are the SOLID principles, a relatively systematic set. They not only guide us in designing modules (classes), they also serve as a ruler for measuring the quality of our designs.

SOLID is an acronym for five design principles:

  • Single Responsibility Principle (SRP): a class should have one, and only one, reason to change

  • Open-Closed Principle (OCP): software entities (classes, modules, functions) should be open for extension but closed for modification

  • Liskov Substitution Principle (LSP): subtypes must be substitutable for their base types

  • Interface Segregation Principle (ISP): clients should not be forced to depend on methods they do not use

  • Dependency Inversion Principle (DIP): high-level modules should not depend on low-level modules; both should depend on abstractions. Abstractions should not depend on details; details should depend on abstractions

As mentioned earlier, the core of object-oriented design is polymorphic design. Let's look at how the SOLID principles guide polymorphic design (a sketch follows the list):

  • Single Responsibility Principle: separate what changes from what does not, and isolate change behind interfaces

  • Open-Closed Principle: respond to change by extending the system through polymorphism, not by modifying it

  • Liskov Substitution Principle: interface design should achieve the ideal of completely hiding details

  • Interface Segregation Principle: provide separate interfaces for different clients

  • Dependency Inversion Principle: the design and specification of an interface should belong to the user of the interface
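
As a hedged sketch of how these principles drive polymorphic design in Go: the high-level Notifier depends on the Sender abstraction (dependency inversion), and a new channel is added by extension rather than modification (open-closed). All names here are hypothetical.

```go
package main

import "fmt"

// Abstraction owned by the high-level policy (dependency inversion).
type Sender interface {
	Send(to, msg string) error
}

type Notifier struct{ sender Sender }

func NewNotifier(s Sender) *Notifier { return &Notifier{sender: s} }

func (n *Notifier) Notify(to, msg string) error {
	return n.sender.Send(to, msg) // no knowledge of the concrete channel
}

// Low-level detail #1.
type EmailSender struct{}

func (EmailSender) Send(to, msg string) error {
	fmt.Printf("email to %s: %s\n", to, msg)
	return nil
}

// Low-level detail #2: added without touching Notifier (open-closed).
type SMSSender struct{}

func (SMSSender) Send(to, msg string) error {
	fmt.Printf("sms to %s: %s\n", to, msg)
	return nil
}

func main() {
	NewNotifier(EmailSender{}).Notify("alice", "hello")
	NewNotifier(SMSSender{}).Notify("bob", "hello")
}
```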

Besides design principles, we also need to master common design patterns. Design patterns are concrete solutions to recurring problems; they make object-oriented designs more flexible and elegant, and therefore more reusable. Learning design patterns is not just about learning how to write the code; it is about learning the scenarios in which each pattern applies. Behind every design pattern lie some "eternal truths", and those truths are design principles. Indeed, what could be more important than principles? Like a person's worldview and outlook on life, they govern all your actions. It is fair to say that design principles are the soul of design patterns.

Shu-Ha-Ri (keep, break, leave) is a gradual learning method from the martial arts:

  • Step 1, keep: follow the rules until you understand them well enough for them to become habitual

  • Step 2, break: reflect on the rules, look for their exceptions, and "break" them

  • Step 3, leave: having mastered the rules, move beyond them and grasp their essence and deeper spirit

Learning design patterns is also a process of keep, break, leave:

  • Step 1, keep: apply the existing design patterns in your designs, learning to think while imitating

  • Step 2, break: after mastering the basic patterns, create new design patterns

  • Step 3, leave: forget all the design patterns, yet use them subtly and naturally in your designs

Architectural style

After object-oriented design became popular, componentized and service-oriented architectural styles followed. Both refer back to object design: an object has a life cycle, forms a logical boundary, and provides APIs; a component or service likewise has a life cycle, forms a logical boundary, and provides APIs. In such architectures, applying the dependency inversion principle at both high and low levels flattens the traditional layering, as if the whole hierarchy had been bulldozed, leaving no fixed above-and-below relationship. Different clients interact with the system on an "equal footing". Need a new kind of client? No problem: just add a new adapter that converts the client's input into parameters the system API understands, and for each specific output, another newly created adapter handles the transformation. A minimal sketch follows.
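
Here is a minimal, hypothetical ports-and-adapters sketch in Go: the application core exposes a port (an interface), and each kind of client gets its own adapter that translates external input into calls on that port. All names are illustrative.

```go
package main

import "fmt"

// Port: the API the application core understands.
type GreetingService interface {
	Greet(name string) string
}

// Core implementation, unaware of any delivery mechanism.
type greeter struct{}

func (greeter) Greet(name string) string { return "hello, " + name }

// Adapter for one kind of client: a CLI. A new client (HTTP, MQ, ...)
// would get a new adapter; the core stays unchanged.
type CLIAdapter struct{ svc GreetingService }

func (a CLIAdapter) Run(args []string) {
	for _, name := range args {
		fmt.Println(a.svc.Greet(name)) // translate CLI input to the port
	}
}

func main() {
	CLIAdapter{svc: greeter{}}.Run([]string{"alice", "bob"})
}
```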

Advantages of object-oriented programming:

  • Objects encapsulate data and behavior, facilitating understanding and reuse.

  • Objects serve as "stable design material", suitable for broad use.

  • Polymorphism improves the ability to respond to change, allowing software to grow further in scale.

  • Understanding and evolving the design proceeds model-first: grasp the model and structure before adjusting the code. Don't start by reading the code; object-oriented code is easy to lose your way in (a virtual interface, for example, cannot simply be followed). The usual practice is to master the model and structure first, then open the code at a specific point within that structure to review and modify it. Remember: first the model, then the interface, then the implementation.

Disadvantages of object-oriented programming:

  • Business logic is fragmented and scattered among discrete objects. Because classes are designed to follow the single responsibility principle, completing one business process means jumping around among many classes.

  • The mismatch between behavior and data, known as the anemic model versus the rich model debate. It was later found that the DCI (Data, Context, Interaction) architecture can address this problem.

  • Object-oriented modeling relies on engineering experience and lacks rigorous theoretical support. It answers how to map from the domain to an object model, but generally only through typical cases or best practices of OOA and OOD, which belong to the realm of induction, without strict mathematical derivation or proof.

Functional programming

Unlike structured and object-oriented programming, functional programming is unfamiliar to many people. As you probably know, Lambda expressions were introduced into C++ and Java to support functional programming. A function in functional programming is not the "function" of structured programming, which is really a procedure, but a function in the mathematical sense.

The basic design

Functional programming originates from the mathematician Alonzo Church's invention of the Lambda calculus (also written λ-calculus). That is why the word Lambda appears so often in functional programming; you can think of a Lambda simply as an anonymous function.

Functional programming has many features:

  • Functions are first-class citizens. First-class means: (1) they can be created on demand; (2) they can be stored in data structures; (3) they can be passed as arguments to other functions; (4) they can be returned as values from other functions.

  • Pure functions. A pure function (1) returns the same output for the same input and (2) has no side effects.

  • Lazy evaluation. Lazy evaluation is an evaluation strategy that delays evaluation until the value is really needed.

  • Immutable data. The immutability of functional programming shows up mainly in values and pure functions. Values are similar to value objects in DDD: once created, they cannot be modified, only recreated. Values guarantee that data is not modified explicitly; pure functions guarantee that it is not modified implicitly. As you go deeper into functional programming, you will meet terms like side-effect free, stateless, and referential transparency, all of which are about immutability.

  • Recursion. Functional programming uses recursion, usually tail recursion, as a mechanism for flow control.

Functional programming also has two other important concepts: higher-order functions and closures. A higher-order function is a function that takes a function as input or returns a function as output. A closure is the combination of a function and its associated referencing environment, i.e., closure = function + reference environment. The sketch below illustrates both.
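
The Go sketch below illustrates both concepts under hypothetical names: makeCounter returns a closure that captures its environment, and apply takes a function as input.

```go
package main

import "fmt"

// Higher-order function: returns a function as output.
// The returned closure captures the local variable count.
func makeCounter() func() int {
	count := 0
	return func() int {
		count++
		return count
	}
}

// Higher-order function: takes a function as input.
func apply(xs []int, f func(int) int) []int {
	out := make([]int, 0, len(xs))
	for _, x := range xs {
		out = append(out, f(x))
	}
	return out
}

func main() {
	next := makeCounter()               // closure = function + environment
	fmt.Println(next(), next(), next()) // 1 2 3
	fmt.Println(apply([]int{1, 2, 3}, func(x int) int { return x * x }))
}
```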

Closures have an independent life cycle and can capture their context (environment). From an object-oriented perspective, a closure is an object with exactly one interface (method), the single responsibility principle taken to its extreme. Closures are therefore finer-grained design material: cheaper to create and easier to compose. In object-oriented programming, the design granularity is the object; it may need to be decomposed, but if you lack the awareness to decompose it, god-class large objects appear and creation becomes expensive. In functional programming, closures let you design finer, context-capturing atomic objects with a single interface and an independent life cycle, which are naturally easy to assemble, easy to reuse, and easy to adapt to change.

Closures are a poor man's objects, and objects are a poor man's closures. Some languages have no closures, so you have no choice but to simulate them with objects; other languages have no objects, and since a single interface cannot fully express a business concept, you have no choice but to combine closures into objects.

In functional programming, data is immutable, so Turing computation is generally achieved through pattern matching and recursion. When programmers choose functional programming as their thinking base, they need to work out how to map domain problems to data and functions (program = data + functions).

The core idea of functional design is higher-order functions and composition, behind which lies a logic of abstract algebra. A higher-order function can take functions as inputs, or produce functions as outputs:

A higher-order function that takes functions as input corresponds to the object-oriented strategy pattern; one that returns functions as output corresponds to the factory pattern. Each higher-order function has a single responsibility, so functional design composes everything atomically, much as object orientation does through strategy and factory patterns. In this process you decide which functions are passed as parameters, which are returned as values, where they are passed to, where they are returned from... You find yourself nesting formulas, layer upon layer, to describe an algorithm. The core task, and the hardest part of functional design, the abstract algebra part, is designing the higher-order functions and their rules of combination. The basic method of functional design, then, is to describe the mapping of data from source to result through a series of rules, with the help of the standardized single interface of closures and the composability of higher-order functions. The mapping is written as a formal combination of higher-order functions, like a mathematical formula: source data enters at one end, passes through layer after layer of functional formulas, and the desired result comes out the other. What flows through is not only the data itself, but also the creation, return, and passing of the rules. The sketch below shows a minimal such pipeline.
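
As a minimal sketch (assuming Go 1.18+ generics; Compose is an illustrative helper, not a standard API), the pipeline below is written like a formula that waits for its input:

```go
package main

import "fmt"

// Compose chains two functions into one: Compose(f, g)(x) == g(f(x)).
func Compose[A, B, C any](f func(A) B, g func(B) C) func(A) C {
	return func(a A) C { return g(f(a)) }
}

func main() {
	double := func(x int) int { return x * 2 }
	toLabel := func(x int) string { return fmt.Sprintf("value=%d", x) }

	// The whole computation is a formula waiting for its input.
	pipeline := Compose(double, toLabel)
	fmt.Println(pipeline(21)) // value=42
}
```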

Architectural style

As discussed earlier, functional programming has gained attention thanks to improved hardware performance, multicore CPUs, and distributed computing. Several of its features make concurrent programs easier to write, and some architectural styles, especially those of distributed systems, exploit functional features to make systems more scalable and resilient.

Functional programming is modeled on abstract algebra, and two families of architectural styles have been derived on top of it:

(1) Event Sourcing and Reactive Architecture; (2) Lambda Architecture, FaaS, and Serverless

By drawing on the concepts of functional programming in the architectural styles of distributed systems, we can achieve a higher level of abstraction at the architectural level and better elasticity and reliability at the concurrency level.

Advantages of functional programming:

  • High level of abstraction, easy to extend. Functional programming expresses computation as descriptions over data; it is very abstract and easily extensible within the scope of that description.

  • Declarative and easy to understand.

  • Amenable to formal verification; programs are easier to prove correct.

  • Immutable state, easy concurrency. Immutability is not strictly required for concurrency (not sharing mutable data is), but it makes concurrent programming much easier.

Disadvantages of functional programming:

  • Algebraic modeling of a problem domain has a high threshold and a limited range of application. Reality is complex and not self-consistent in every respect, so finding a complete set of mapping rules is very hard. In some narrow domains you can find one, but step outside that narrow domain and the abstract algebraic model you found no longer applies.

  • Poor performance on Turing machines. Functional programming adds many intermediate layers, and its rule descriptions and lazy evaluation make optimization difficult.

  • The immutability constraint creates sticky data coupling. Domain objects are stateful, and that state can only be threaded through functions, so many functions end up with the same parameters and return values.

  • Closure interfaces are so fine-grained that they often need to be recomposed to form business concepts.

Summary

As programmers, we should be aware of the scenarios each programming paradigm suits and choose the appropriate paradigm for the problem at hand. Design suggestions for multi-paradigm integration:

  • Every programming paradigm has advantages and disadvantages. Do not be a single-paradigm zealot; choose the right paradigm flexibly according to the scenario and solve the problem appropriately

  • From the DDD perspective, place designs in different paradigms into different subdomains, BCs, or layers according to model consistency

Finally, let's revisit the relationship between the programming paradigms we started with:

[Figure: the relationship between programming paradigms, from unstructured programming through structured and object-oriented to functional programming]



The figure can be read as follows.

In the early days of unstructured programming, instructions could jump anywhere and data could be referenced from anywhere. Later, with structured programming, people removed the goto statement, restricting the flow of instructions: procedures became one-directional, but data remained globally accessible. With object-oriented programming, people put data and its tightly coupled methods inside a logical boundary, restricting the scope of data, which is then reached through relationships. Finally, functional programming constrained the mutability of data: a series of functions arranges the mapping rules from source to target, and the process in between is stateless. So, moving from left to right, the constraints keep increasing.

The further left, the fewer the restrictions and the closer to the Turing machine model, allowing full use of the hardware, "direct" controllability, and wide applicability. Controllability, because being close to the Turing machine model lets you control things "directly" according to your own ideas. Wide applicability, because the more constraints a paradigm imposes, the higher its threshold: if you cannot get the right-hand side right, you can step back to the left, and once you find a reasonable object model or abstract algebraic model, you can step forward again.

The further right, the more restrictions: more rules are established through constraints, more of the system is described by rules, and "abstraction" brings localized extensibility. Within its locality, an object model or abstract algebraic model can be highly extensible and understandable, because such an "abstraction" necessarily targets a narrow slice; once the scope is exceeded, the model may break down, which is why DDD keeps emphasizing the separation of subdomains, the partitioning of BCs, and layered architecture.

References

  • C++ and System Software Technology Conference 2020, "Modern C++ Software Design with Multi-Paradigm Convergence", Wang Bo

  • Geek Time column, "The Beauty of Software Design", Zheng Ye