Background of software design patterns

In 1995, Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides published Design Patterns: Elements of Reusable Object-Oriented Software, a landmark event that brought software design patterns into the mainstream. The four authors are known in the software development world as the Gang of Four (GoF).

The concept and significance of software design patterns

There are many definitions of software design patterns: some start from the characteristics of patterns, others from their role. The definition given in this tutorial is the one generally accepted by most scholars, and it is explained below from two angles.

1. The concept of software design patterns

A software design pattern, often simply called a design pattern, is a catalogued, widely known summary of code design experience that has proven its worth through repeated use. It describes recurring problems in the software design process together with their solutions. In other words, a design pattern is a well-worn routine for solving a specific class of problem: a distillation of the design experience of earlier programmers, general enough to be used again and again. The goal is to improve the reusability, readability, and reliability of code.

2. Learn the meaning of design patterns

The essence of design patterns is the practical application of object-oriented design principles; they presuppose a full understanding of the encapsulation, inheritance, and polymorphism of classes, as well as the association and composition relationships between classes. Using design patterns correctly has the following advantages:

  • They sharpen a programmer’s thinking, programming, and design abilities.

  • They make program designs more standardized and code organization more systematic, greatly improving software development efficiency and shortening the development cycle.

  • They make the resulting code reusable, readable, reliable, flexible, and maintainable.

Of course, software design patterns are only a guide. In actual development, a pattern must be chosen appropriately according to the characteristics and requirements of the system being designed. For a simple program, writing a straightforward algorithm may be easier than introducing a design pattern; for a large project or framework design, however, organizing the code with design patterns is the better choice.

Basic elements of software design patterns

Software design patterns, which make it easier to reuse successful designs and architectures, are usually documented with the following elements: pattern name, aliases, motivation, problem, solution, consequences, structure, participants, collaborations, implementation, applicability, known uses, sample code, pattern extensions, and related patterns. Of these, the four below are the most critical.

1. Pattern name

Each pattern has its own name, usually one or two words, which may be derived from the pattern’s problem, features, solution, capabilities, or effects. Pattern names help us understand and remember patterns, and give us a vocabulary for discussing our designs.

2. Problem

The problem describes the context in which the pattern applies, that is, when to use it. It explains the design problem, its causes and consequences, and the preconditions that must be met before the pattern can be applied.

3. Solutions

The solution to the pattern’s problem includes the design’s components, their interrelationships, and their respective responsibilities and ways of working together. Because a pattern is like a template that can be applied in many different situations, the solution does not describe one specific, concrete design or implementation; rather, it provides an abstract description of the design problem and of how a general arrangement of elements (classes or objects) solves it.

4. Consequences

The consequences describe the effects of applying the pattern and the trade-offs involved in using it, that is, its advantages and disadvantages. They mainly concern costs in time and space, the pattern’s influence on the system’s flexibility, extensibility, and portability, and issues of implementation. Listing the consequences explicitly goes a long way toward understanding and evaluating patterns.

Seven principles of object-oriented design

In software development, in order to improve the maintainability and reusability of a software system and increase its scalability and flexibility, programmers should develop programs according to the following seven principles as far as possible, so as to improve development efficiency and save development and maintenance costs. Each of the seven principles is introduced in turn below.

(1) Open-closed principle

1. Definition of the open-closed principle

The Open-Closed Principle (OCP) was proposed by Bertrand Meyer in his 1988 book Object-Oriented Software Construction: software entities (including the modules, classes, interfaces, and methods that make up a project) should be open for extension, but closed for modification. This is the classic statement of the open-closed principle.

It means that when the requirements of the application change, the function of the module can be extended to meet the new requirements without modifying the source code or binary code of the software entity.

2. The role of the open-closed principle

The open-closed principle is the ultimate goal of object-oriented programming: it gives software entities a degree of adaptability and flexibility as well as stability and continuity. Specifically, its benefits are as follows.

  • Impact on software testing

If the software follows the open-closed principle, only the extended code needs to be tested during software testing, because the original tests still hold.

  • You can improve your code’s reusability

The smaller the granularity of a unit, the more likely it is to be reused. In object-oriented programming, programming against fine-grained abstractions improves code reusability.

  • Can improve software maintainability

Software that follows the open-closed principle is more stable and continuous, and is therefore easier to extend and maintain.

3. Implementation method of the open-closed principle

The open-closed principle can be implemented by “constraining through abstraction and encapsulating variation”: define a relatively stable abstraction layer for the software entity through interfaces or abstract classes, and encapsulate each variable element in its own concrete implementation class.

Because abstraction is flexible and widely adaptable, a reasonable abstraction keeps the software architecture largely stable. The variable details are supplied by concrete classes that implement the abstraction; when the software needs to change, it suffices to derive a new implementation class that extends it according to the new requirements.
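As a minimal sketch of this idea (the Shape, Rectangle, Circle, and total_area names are illustrative, not from the original text), client code is written against a stable abstraction, so new behavior is added by deriving a new class rather than by modifying existing code:

```python
from abc import ABC, abstractmethod

class Shape(ABC):
    """Stable abstraction layer: closed for modification."""
    @abstractmethod
    def area(self) -> float: ...

class Rectangle(Shape):
    def __init__(self, width: float, height: float):
        self.width, self.height = width, height
    def area(self) -> float:
        return self.width * self.height

# Adding Circle later extends the system without touching Shape,
# Rectangle, or total_area: open for extension, closed for modification.
class Circle(Shape):
    def __init__(self, radius: float):
        self.radius = radius
    def area(self) -> float:
        return 3.14159 * self.radius ** 2

def total_area(shapes) -> float:
    # Depends only on the abstraction, so it never needs to change.
    return sum(s.area() for s in shapes)
```

Here `total_area` never changes when new shapes are introduced; only new subclasses of the abstraction are added.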

(2) Liskov substitution principle

1. Definition of the Liskov substitution principle

The Liskov Substitution Principle (LSP) was proposed by Barbara Liskov of the MIT Computer Science Laboratory in the keynote “Data Abstraction and Hierarchy” at the 1987 OOPSLA conference. It states: inheritance should ensure that any property proved about supertype objects also holds for subtype objects.

The Liskov substitution principle mainly describes rules about inheritance: when inheritance should be used, when it should not, and the reasoning behind those rules. Liskov substitution is the basis of reuse through inheritance; it reflects the relationship between base classes and subclasses, complements the open-closed principle, and specifies concrete steps for achieving abstraction.

2. The role of the Liskov substitution principle

The main functions of the Liskov substitution principle are as follows.

  • The Liskov substitution principle is one of the important means of realizing the open-closed principle.

  • It overcomes the poor reusability caused by overriding parent-class methods when reusing through inheritance.

  • It guarantees correct behavior: extending a class does not introduce new errors into the existing system, which reduces the likelihood of code errors.

3. Implementation method of the Liskov substitution principle

In plain English, the Liskov substitution principle says that a subclass may extend the functionality of its parent class but must not change it. In other words, when a subclass inherits from a parent class, it should add new methods to provide new functionality and avoid overriding the parent’s methods.

If new functionality is achieved by overriding parent-class methods, the code is simple to write, but the reusability of the entire inheritance hierarchy suffers; in particular, when polymorphism is used frequently, the probability of runtime errors becomes very high.

If a program violates the Liskov substitution principle, an object of the subclass will cause runtime errors wherever an object of the base class is expected. The remedy is to remove the original inheritance relationship and redesign the relationship between the classes. The most famous illustration of Liskov substitution is “a square is not a rectangle.” There are many similar examples in everyday life: penguins, ostriches, and kiwis are biologically birds; but from the perspective of class inheritance, they cannot be defined as subclasses of a “bird” class that can fly, because they cannot inherit its flying capability.
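The bird example above can be sketched as follows (the class names are illustrative). Instead of giving the base class a flying capability that a penguin would have to break, that capability is moved into a subtype, so every subclass remains substitutable for its parent:

```python
class Bird:
    """The base class promises only what every bird can do."""
    def eat(self) -> str:
        return "eating"

class FlyingBird(Bird):
    """Flying is a capability of some birds, not of all birds."""
    def fly(self) -> str:
        return "flying"

class Sparrow(FlyingBird):
    pass

class Penguin(Bird):
    # A penguin substitutes safely wherever a Bird is expected,
    # because it never inherited (and broke) a fly() contract.
    pass

def feed(bird: Bird) -> str:
    # Works identically for every subtype of Bird.
    return bird.eat()
```

Had `Penguin` inherited from a `Bird` with `fly()`, it would have had to override that method with broken behavior, violating the principle.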

(3) Dependency inversion principle

1. Definition of the dependency inversion principle

The Dependency Inversion Principle (DIP) was proposed by Robert C. Martin, president of Object Mentor, in an article published in C++ Report in 1996. Its original definition: high-level modules should not depend on low-level modules; both should depend on abstractions. Abstractions should not depend on details; details should depend on abstractions. The core idea is: program to the interface, not to the implementation.

The dependency inversion principle is one of the important ways to realize the open-closed principle, and it reduces the coupling between client code and implementation modules. In software design, details are variable while abstraction layers are relatively stable, so an architecture based on abstraction is much more stable than one based on details. Here, abstraction refers to interfaces or abstract classes, while details refer to concrete implementation classes. The purpose of using interfaces or abstract classes is to establish specifications and contracts that do not involve any concrete operations, leaving the task of realizing the details to the implementation classes.

2. The role of the dependency inversion principle

The main functions of the dependency inversion principle are as follows:

  • The dependency inversion principle can reduce the coupling between classes.

  • The dependence inversion principle can improve the stability of the system.

  • The dependency inversion principle can reduce the risks associated with parallel development.

  • The dependency inversion principle improves code readability and maintainability.

3. Implementation method of the dependency inversion principle

The purpose of the dependency inversion principle is to reduce coupling between classes by programming to interfaces. We can satisfy this principle in real projects by following four rules.

  • Each class should, as far as possible, provide an interface, an abstract class, or both.

  • Variables should be declared as interfaces or abstract classes.

  • No class should derive from a concrete class.

  • Follow the Richter substitution principle when using inheritance.
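The rules above can be sketched as follows (MessageSender, EmailSender, SmsSender, and Notifier are hypothetical names): the high-level module holds a variable declared as the abstraction, and concrete implementations are injected into it rather than referenced directly:

```python
from abc import ABC, abstractmethod

class MessageSender(ABC):
    """Abstraction that both the high and low level depend on."""
    @abstractmethod
    def send(self, text: str) -> str: ...

class EmailSender(MessageSender):
    """Low-level detail: depends on the abstraction, not vice versa."""
    def send(self, text: str) -> str:
        return f"email: {text}"

class SmsSender(MessageSender):
    def send(self, text: str) -> str:
        return f"sms: {text}"

class Notifier:
    """High-level module: knows only the MessageSender interface."""
    def __init__(self, sender: MessageSender):
        self.sender = sender  # variable declared as the abstraction
    def notify(self, text: str) -> str:
        return self.sender.send(text)
```

Swapping `EmailSender` for `SmsSender` requires no change to `Notifier`, which is exactly the decoupling the principle aims for.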

(4) Single responsibility principle

1. Definition of the single responsibility principle

The Single Responsibility Principle (SRP), also known as the single function principle, was formulated by Robert C. Martin in Agile Software Development: Principles, Patterns, and Practices. It states: there should never be more than one reason for a class to change. In other words, an object should not take on too many responsibilities. If it does, there are at least two disadvantages:

  • Changes to one responsibility may impair or inhibit the class’s ability to implement other responsibilities;

  • When a client needs only one responsibility of the object, it is forced to include all the other responsibilities it does not need, resulting in redundant or wasted code.

2. The role of the single responsibility principle

The core of the single responsibility principle is to control the granularity of classes, decouple objects, and improve their cohesion. Following the single responsibility principle has the following advantages:

  • Reduce class complexity. The logic of a class having one responsibility is certainly much simpler than having multiple responsibilities.

  • Improve the readability of classes. As complexity decreases, readability increases.

  • Improve system maintainability. Improved readability makes it easier to maintain.

  • Risk reduction due to change. Change is inevitable, and if the single responsibility principle is followed well, when you modify one function, you can significantly reduce the impact on other functions.

3. Implementation method of the single responsibility principle

The single responsibility principle is the simplest principle to state and the hardest to apply. It requires the designer to discover a class’s different responsibilities, separate them, and encapsulate them in different classes or modules. Discovering a class’s multiple responsibilities demands strong analysis and design skills and relevant refactoring experience.
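As an illustrative sketch (Report and ReportSaver are hypothetical names), separating “formatting a report” from “persisting a report” gives each class exactly one reason to change:

```python
class Report:
    """Single responsibility: hold and format report content."""
    def __init__(self, title: str, body: str):
        self.title, self.body = title, body
    def render(self) -> str:
        return f"{self.title}\n{self.body}"

class ReportSaver:
    """Single responsibility: persistence. A change in storage
    strategy never forces Report itself to change."""
    def save(self, report: Report, storage: dict) -> None:
        # A dict stands in for any storage backend in this sketch.
        storage[report.title] = report.render()
```

If formatting rules change, only `Report` changes; if the storage backend changes, only `ReportSaver` does.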

(5) Interface segregation principle

1. Definition of the interface segregation principle

The Interface Segregation Principle (ISP) requires programmers to break bloated interfaces into smaller, more specific ones, so that each interface contains only the methods its clients are interested in. In 2002, Robert C. Martin defined the principle as: clients should not be forced to depend on methods they do not use. Another formulation is: the dependency of one class on another should rest on the smallest possible interface. Both definitions say the same thing: instead of one huge interface serving all dependent classes, create a dedicated interface for each class. The interface segregation principle and the single responsibility principle both aim to improve the cohesion of classes and reduce coupling between them, and both embody the idea of encapsulation, but they differ:

  • The single responsibility principle focuses on responsibility, while the interface isolation principle focuses on isolation of interface dependencies.

  • The single responsibility principle mainly constrains classes and targets the implementations and details in a program; the interface segregation principle mainly constrains interfaces and targets abstraction and the construction of the program’s overall framework.

2. The role of the interface segregation principle

The interface segregation principle constrains interfaces and reduces the dependency of classes on interfaces. It has the following five advantages.

  1. Splitting a bloated interface into multiple small-grained interfaces can prevent the proliferation of external changes and improve the flexibility and maintainability of the system.

  2. Interface isolation improves system cohesion, reduces external interactions, and reduces system coupling.

  3. If the interface granularity is properly defined, the system stability can be guaranteed. However, if the definition is too small, it will lead to too many interfaces and complicate the design. If the definition is too large, flexibility is reduced and customization services cannot be provided, bringing unpredictable risks to the overall project.

  4. The use of multiple specialized interfaces also provides a hierarchy of objects, since the overall interface can be defined through interface inheritance.

  5. Reduces code redundancy in the project. An overly large interface often contains many unnecessary methods, forcing implementing classes to contain redundant code.

3. Implementation method of the interface segregation principle

When applying the interface segregation principle, the following rules should be weighed:

  • Keep interfaces small, but limited. An interface serves only one submodule or business logic.

  • Customize services for classes that depend on interfaces. Provide only the methods needed by the caller and mask those that are not.

  • Understand the environment and do not follow blindly. Every project or product has its own environmental factors; the criteria for interface segregation vary with the environment, so study the business logic in depth.

  • Improve cohesion and reduce external interactions. Make the interface do the most with the fewest methods.
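A minimal sketch of these rules (Printer, Scanner, and the device classes are hypothetical names): rather than one bloated “machine” interface, two small interfaces let each client depend only on the methods it actually uses:

```python
from abc import ABC, abstractmethod

# Two small, specific interfaces instead of one bloated one.
class Printer(ABC):
    @abstractmethod
    def print_doc(self, doc: str) -> str: ...

class Scanner(ABC):
    @abstractmethod
    def scan(self) -> str: ...

class MultiFunctionDevice(Printer, Scanner):
    """Needs both capabilities, so it implements both interfaces."""
    def print_doc(self, doc: str) -> str:
        return f"printing {doc}"
    def scan(self) -> str:
        return "scanned"

class SimplePrinter(Printer):
    """Depends only on the methods it uses; it is never forced
    to provide a dummy scan() implementation."""
    def print_doc(self, doc: str) -> str:
        return f"printing {doc}"
```

Note that `SimplePrinter` would have needed a useless `scan()` stub had printing and scanning lived in a single interface.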

(6) Law of Demeter

1. Definition of the Law of Demeter

The Law of Demeter (LoD), also known as the Least Knowledge Principle (LKP), originated in 1987 in a Northeastern University research project called Demeter. It was proposed by Ian Holland, promoted by Booch, one of the founders of UML, and later popularized by the classic book The Pragmatic Programmer. The law is stated as: talk only to your immediate friends and not to strangers. That is, if two software entities do not need to communicate directly, they should not call each other directly; the call can be forwarded by a third party. The purpose is to reduce coupling between classes and improve the relative independence of modules.

The “friends” in the Law of Demeter are the current object itself, its member objects, objects created by the current object, objects passed as method parameters to the current object, and so on. These objects are associated, aggregated, or composed with the current object, and their methods may be called directly.

2. Advantages of the Law of Demeter

The Law of Demeter limits the width and depth of communication between software entities. Used correctly, it has the following two advantages.

  • The coupling degree between classes is reduced and the relative independence of modules is improved.

  • Because coupling is reduced, class reusability and system scalability are improved.

However, overusing the Law of Demeter produces a large number of intermediary classes, which increases system complexity and reduces the efficiency of communication between modules. Applying it therefore requires repeated trade-offs to achieve high cohesion and low coupling while keeping the system structure clear.

3. Implementation method of the Law of Demeter

By its definition and characteristics, the Law of Demeter emphasizes the following two points:

  • From the perspective of the depender, depend only on what must be depended on.

  • From the perspective of the dependee, expose only the methods that should be exposed.

Accordingly, keep the following six points in mind when applying the Law of Demeter:

  • In class partitioning, create weakly coupled classes. The weaker the coupling between classes, the better the goal of reuse is served.

  • In class structure design, reduce the access rights of class members as far as possible.

  • In class design, give priority to making the class immutable.

  • Keep references to other classes to a minimum.

  • Instead of exposing a class’s attribute members directly, provide the corresponding accessors (setter and getter methods).

  • Use Serializable with caution.
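The “talk only to friends” rule can be sketched with hypothetical Driver, Car, and Wheel classes: the caller talks only to its immediate friend, and requests to the friend’s internals are forwarded rather than reached through:

```python
class Wheel:
    def rotate(self) -> str:
        return "rotating"

class Car:
    def __init__(self):
        self._wheel = Wheel()  # member object: one of Car's "friends"
    def drive(self) -> str:
        # Car forwards the request to its member instead of
        # exposing the Wheel to outside callers.
        return self._wheel.rotate()

class Driver:
    def go(self, car: Car) -> str:
        # Driver talks only to its immediate friend, Car; it never
        # reaches through Car to the "stranger" car._wheel.
        return car.drive()
```

A violation would look like `car._wheel.rotate()` inside `Driver`, which couples `Driver` to `Wheel`, a class it has no direct relationship with.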

(7) Composite reuse principle

1. Definition of the composite reuse principle

The Composite Reuse Principle (CRP), also called the Composition/Aggregation Reuse Principle (CARP), requires that in software reuse we should first try association relationships such as composition or aggregation, and only then consider inheritance. If inheritance is used, the Liskov substitution principle must be strictly followed; the composite reuse principle and the Liskov substitution principle complement each other.

2. The importance of the composite reuse principle

Generally, class reuse is divided into inheritance reuse and composite reuse. Although inheritance reuse has the advantages of simplicity and easy implementation, it also has the following disadvantages:

  • Inheritance reuse breaks class encapsulation. Because inheritance exposes the implementation details of the parent class to the child class, the parent class is transparent to the child class, so this reuse is also known as “white box” reuse.

  • A subclass has a high degree of coupling with its parent. Any change in the implementation of the parent class results in a change in the implementation of the subclass, which is not conducive to the extension and maintenance of the class.

  • It limits the flexibility of reuse. An implementation inherited from a parent class is static and defined at compile time, so it cannot be changed at run time.

Composition or aggregation reuse incorporates existing objects into a new object as parts of it, and the new object invokes the existing objects’ functionality. This has the following advantages:

  • It maintains class encapsulation. Because the internal details of component objects are invisible to the new object, this reuse is also known as “black box” reuse.

  • Low coupling between the new and existing classes. This kind of reuse requires fewer dependencies; the only way the new object can access a component object is through the component’s interface.

  • High flexibility of reuse. This reuse can occur dynamically at run time, with new objects dynamically referencing objects of the same type as component objects.

3. Implementation method of the composite reuse principle

The composite reuse principle integrates existing objects into a new object as its member objects; the new object then calls the existing objects’ functionality, thereby achieving reuse.
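A minimal sketch of composition-based reuse (GasolineEngine, ElectricEngine, and Car are hypothetical names): the engine is a member object accessed only through its interface, and, unlike inheritance, the composed part can even be swapped at run time:

```python
class GasolineEngine:
    def move(self) -> str:
        return "burning gasoline"

class ElectricEngine:
    def move(self) -> str:
        return "using electricity"

class Car:
    """Reuses engine functionality by composition ("black box"):
    the engine's internals stay hidden behind its interface."""
    def __init__(self, engine):
        self.engine = engine  # existing object becomes a member object
    def drive(self) -> str:
        return self.engine.move()
    def swap_engine(self, engine) -> None:
        # Unlike an inherited implementation, which is fixed at
        # compile time, the composed part can change at run time.
        self.engine = engine
```

Had `Car` inherited from `GasolineEngine`, switching to an electric engine would require a new class hierarchy; with composition it is a single assignment.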

We have now covered the seven object-oriented design principles; the next article will cover the singleton and multiton patterns among the creational patterns.


Thanks for reading!



If you like this article, you are welcome to follow “ISevena”.