Author | YanHao

New Retail Products | Alibaba Tao Technology

DDD – Domain Primitive

The word "architecture" comes from civil engineering, where it means "building" and "structure"; by extension it also covers scaffolding, the fixed structures that can be set up quickly. In software, application architecture refers to the fixed code structure, design patterns, specifications, and communication patterns between components in a system. Architecture is the most important first step in application development, because a good architecture keeps a system safe and stable while allowing rapid iteration. By agreeing on a fixed architectural design within a team, developers of different skill levels share a unified development standard, which reduces communication costs and improves efficiency and code quality.

When designing an architecture, a good architecture should achieve the following goals:

1. Framework independence: The architecture should not depend on a particular external library or framework, nor be tied to a framework's structure.

2. UI independence: The front-end presentation may change at any time (web today, a console tomorrow, a standalone app the day after), but the underlying architecture should not change with it.

3. Data source independence: Whether you use MySQL, Oracle, MongoDB, CouchDB, or even a file system, the software architecture should not change dramatically with how the underlying data is stored.

4. Independence from external dependencies: No matter how external dependencies change or upgrade, the core logic of the business should not change significantly.

5. Testability: No matter what database, hardware, UI, or service the system relies on externally, the business logic should be verifiable quickly.

It is like a building: a good building stands firm no matter who is inside, what activities take place, or what storms rage outside. However, in day-to-day business development we tend to focus on macro architectures such as SOA and microservices while ignoring the architecture inside the application, which easily leads to chaotic code logic that is hard to maintain, prone to bugs, and hard to debug. In this article I hope to demonstrate a high-quality DDD architecture through case analysis and refactoring.

Case analysis

Let’s start with a simple case where the requirements are as follows:

Users can transfer money to another account through the bank’s web site, supporting cross-currency transfers.

At the same time, due to supervision and reconciliation requirements, this transfer activity needs to be recorded.

With this requirement in hand, a developer might, after some technology selection, break it down as follows:

1. Look up the source (transfer-out) and target (transfer-in) accounts in the MySQL database, using a MyBatis mapper as the DAO implementation;

2. Get the exchange rate for the transfer from the rate service provided by Yahoo (or another channel), whose underlying implementation is an open HTTP interface;

3. Calculate the amount to be transferred, ensure that the account has enough balance and does not exceed the daily transfer limit;

4. Realize roll-in and roll-out operations, deduct the commission fee, and save the database;

5. Send Kafka audit messages for auditing and account reconciliation;

And a simple code implementation is as follows:

public class TransferController {

    private TransferService transferService;

    public Result<Boolean> transfer(String targetAccountNumber, BigDecimal amount, HttpSession session) {
        Long userId = (Long) session.getAttribute("userId");
        return transferService.transfer(userId, targetAccountNumber, amount, "CNY");
    }
}

public class TransferServiceImpl implements TransferService {

    private static final String TOPIC_AUDIT_LOG = "TOPIC_AUDIT_LOG";
    private AccountMapper accountDAO;
    private KafkaTemplate<String, String> kafkaTemplate;
    private YahooForexService yahooForex;

    @Override
    public Result<Boolean> transfer(Long sourceUserId, String targetAccountNumber, BigDecimal targetAmount, String targetCurrency) {
        // 1. Read data from the database (validation such as account existence is omitted)
        AccountDO sourceAccountDO = accountDAO.selectByUserId(sourceUserId);
        AccountDO targetAccountDO = accountDAO.selectByAccountNumber(targetAccountNumber);

        // 2. Validate business parameters
        if (!targetAccountDO.getCurrency().equals(targetCurrency)) {
            throw new InvalidCurrencyException();
        }

        // 3. Get the exchange rate (1 unit of source currency = exchangeRate units of target currency)
        BigDecimal exchangeRate = BigDecimal.ONE;
        if (!sourceAccountDO.getCurrency().equals(targetCurrency)) {
            exchangeRate = yahooForex.getExchangeRate(sourceAccountDO.getCurrency(), targetCurrency);
        }
        BigDecimal sourceAmount = targetAmount.divide(exchangeRate, RoundingMode.DOWN);

        // 4. Validate business rules
        if (sourceAccountDO.getAvailable().compareTo(sourceAmount) < 0) {
            throw new InsufficientFundsException();
        }
        if (sourceAccountDO.getDailyLimit().compareTo(sourceAmount) < 0) {
            throw new DailyLimitExceededException();
        }

        // 5. Compute the new values and update the fields
        BigDecimal newSource = sourceAccountDO.getAvailable().subtract(sourceAmount);
        BigDecimal newTarget = targetAccountDO.getAvailable().add(targetAmount);
        sourceAccountDO.setAvailable(newSource);
        targetAccountDO.setAvailable(newTarget);

        // 6. Update the database
        accountDAO.update(sourceAccountDO);
        accountDAO.update(targetAccountDO);

        // 7. Send the audit message
        String message = sourceUserId + "," + targetAccountNumber + "," + targetAmount + "," + targetCurrency;
        kafkaTemplate.send(TOPIC_AUDIT_LOG, message);

        return Result.success(true);
    }
}

We can see that a piece of business code often mixes parameter validation, data reading and storage, business calculation, calls to external services, message sending, and other logic. In this case it is all written in one method; in real code it is often split into multiple sub-methods, but the effect is the same, and in our daily work most code is more or less close to this structure. In Martin Fowler's book Patterns of Enterprise Application Architecture (P of EAA), this common code style is called a Transaction Script. Although this script-like style has no functional problems, in the long run it suffers from three major issues: poor maintainability, poor scalability, and poor testability.

Problem 1- Poor maintainability

The biggest cost of an application is usually not the development phase, but the total maintenance cost of the entire application life cycle, so the maintainability of the code represents the ultimate cost.

Maintainability = How much code needs to change when dependencies change

Referring to the case code above, transaction-script code is hard to maintain for the following reasons:

1. Data structure instability: The AccountDO class is a pure data structure that maps a table in a database. The problem is that the table structure and design of the database are external dependencies of the application and may change over time: the database may need sharding, the table design may change, or field names may be renamed.

2. Dependence on the ORM implementation: AccountMapper relies on MyBatis; if MyBatis is upgraded in the future, its usage may change (see the migration cost of moving from iBatis to annotation-based MyBatis). Similarly, switching to a different ORM framework later would carry a huge migration cost.

3. Uncertainty of third-party services: Third-party services, such as Yahoo's exchange rate service, are likely to change in the future: the API signature may change, or the service may become unavailable and a replacement must be found. In these cases, the retrofit and migration costs are significant. At the same time, the fallback, rate-limiting, and circuit-breaking schemes around the external dependency would also need to change.

4. Third-party API changes: Does YahooForexService.getExchangeRate return a decimal or a percentage? Are the parameters (source, target) or (target, source)? Who can guarantee the interface will never change? If it does change, the core amount-calculation logic must change with it, otherwise funds could be lost.

5. Middleware replacement: Today we use Kafka to send messages; what if tomorrow we want to use RocketMQ on Alibaba Cloud? What if, the day after, the message serialization changes from String to binary? What if message sharding is needed?

Any change in an external dependency has a significant impact on the code in this case. With a large amount of code like this, every day becomes a battle with library upgrades, dependent-service upgrades, and conflicting middleware JARs, until the application turns into one that nobody dares to upgrade, deploy, or add features to: a ticking bomb that will one day give you a surprise.

Problem 2- Poor scalability

The second major drawback of transactional scripting code is that while it is very efficient and simple to write code for a single use case, it becomes progressively less scalable as more use cases are added.

Extensibility = how much code needs to be added/modified to make new requirements or logic changes

Referring to the code above, if you needed to add an inter-bank transfer capability today, you would find that it basically has to be developed from scratch, with almost no reusability:

1. The data source is fixed and the data formats are incompatible: the original AccountDO is fetched locally, while inter-bank transfer data may need to come from a third-party service, and the formats between services are unlikely to be compatible. As a result, everything from data validation and read/write logic to processing logic such as amount calculation has to be rewritten.

2. Business logic cannot be reused: Incompatible data formats mean the core business logic cannot be reused. Giving each use case its own special logic ends in a pile of if-else statements, and that branching makes the code very hard to analyze, so it is easy to miss boundary cases and introduce bugs.

3. Interdependencies between logic and datastore: As the business logic grows more complex, it is likely that the new logic will require changes to the database schema or message format. Changing the format of the data causes other logic to move along with it. In the most extreme scenarios, the addition of a new feature can lead to the refactoring of all existing features at great cost.

In a transaction-script architecture, the first requirement is usually delivered very fast, but the time needed for the Nth requirement is likely to grow exponentially, with most of that time spent refactoring and keeping old features compatible. Eventually the pace of innovation drops to zero, prompting the old application to be thrown away and rewritten.

Problem 3- Poor testability

Apart from some tool, framework, and middleware classes with high test coverage, good test coverage of business code is rare in daily work, and most pre-launch testing is manual "integration testing". Low coverage makes code quality hard to control, makes it easy to miss boundary conditions, and leaves exception cases to be discovered passively when they blow up in production. The main reason for low test coverage is the poor testability of the business code.

Testability = time spent running each test case * number of additional test cases per requirement

Refer to the code above, which has very low testability:

1. Difficulty setting up test infrastructure: When code depends heavily on external systems such as databases, third-party services, and middleware, running a single test case requires all of those dependencies to be up, which is extremely difficult early in a project; late in a project, tests fail because of the instability of those systems.

2. Long run time: Most external dependency calls are I/O intensive, such as cross-network calls and disk calls, and such I/O takes a long time in tests. Another common dependency is a heavyweight framework such as Spring, which often takes a long time to start. When a single test case takes more than 10 seconds to run, most developers will not run it often.

3. High coupling: If a script has three sub-steps A, B, and C, each with N possible states, then when the sub-steps are highly coupled, up to N * N * N test cases are needed to cover all combinations. As more coupled sub-steps are added, the number of required test cases grows exponentially.

In transactional scripting, when the complexity of test cases is much greater than the complexity of real code, and when running test cases takes more time than human testing, most people will choose not to write full test coverage, which is often why bugs are hard to find early.

Summary analysis

Let’s re-analyze why the above problems occur. Because the above code violates at least the following principles of software design:

1. Single Responsibility Principle: The Single Responsibility Principle requires that an object/class should have only one reason for change. But in this case, the code could change because of any change in external dependencies or computational logic.

2. Dependency Inversion Principle: The Dependency Inversion Principle requires that code depend on abstractions, not concrete implementations. In this case, the external dependencies are all concrete implementations. For example, although YahooForexService is an interface class, it corresponds to a specific service provided by Yahoo, so it is still an implementation dependency. The same goes for KafkaTemplate and the MyBatis DAO implementation: both are concrete implementations.

3. Open Closed Principle: The Open Closed Principle means being open for extension but closed for modification. The amount calculation in this case is code that is likely to be modified; the logic should instead be wrapped in an unmodifiable calculation class, with new functionality implemented by extending that class.

We need to refactor the code to solve these problems.

Refactoring plan

Before refactoring, let’s draw a flowchart that describes each step of the current code:

This is a traditional three-tier structure: UI layer, business layer, and infrastructure layer. Each upper layer depends directly on the layer below it, and the business layer in particular depends strongly on the infrastructure beneath it, resulting in high coupling. We need to abstract and tidy each node in this diagram to decouple the external dependencies.

Abstract data storage layer

The first common step is to abstract the Data Access layer to reduce the system’s direct dependence on the database. Specific methods are as follows:

1. Create an Account Entity object: An Entity is a domain object with an identity, carrying both data and behavior. An Entity is independent of the database storage format and is designed according to the domain's Ubiquitous Language.

2. Create an AccountRepository: A Repository stores and reads Entity objects, while its implementation handles the database access. By adding the Repository interface, the underlying database connection can be swapped out via different implementation classes.

The specific simple code implementation is as follows:

Account entity class:

@Data
public class Account {
    private AccountId id;
    private AccountNumber accountNumber;
    private UserId userId;
    private Money available;
    private Money dailyLimit;

    public void deposit(Money money) {
        // Deposit money into this account
    }

    public void withdraw(Money money) {
        // Withdraw money from this account
    }
}

And AccountRepository and MyBatis implementation classes:

public interface AccountRepository {
    Account find(AccountId id);
    Account find(AccountNumber accountNumber);
    Account find(UserId userId);
    Account save(Account account);
}

public class AccountRepositoryImpl implements AccountRepository {

    @Autowired
    private AccountMapper accountDAO;

    @Autowired
    private AccountBuilder accountBuilder;

    @Override
    public Account find(AccountId id) {
        AccountDO accountDO = accountDAO.selectById(id.getValue());
        return accountBuilder.toAccount(accountDO);
    }

    @Override
    public Account find(AccountNumber accountNumber) {
        AccountDO accountDO = accountDAO.selectByAccountNumber(accountNumber.getValue());
        return accountBuilder.toAccount(accountDO);
    }

    @Override
    public Account find(UserId userId) {
        AccountDO accountDO = accountDAO.selectByUserId(userId.getId());
        return accountBuilder.toAccount(accountDO);
    }

    @Override
    public Account save(Account account) {
        AccountDO accountDO = accountBuilder.fromAccount(account);
        if (accountDO.getId() == null) {
            accountDAO.insert(accountDO);
        } else {
            accountDAO.update(accountDO);
        }
        return accountBuilder.toAccount(accountDO);
    }
}

The Account entity class is compared to the AccountDO data class as follows:

1. AccountDO is a simple mapping of a database table: each field corresponds to a column of the table. This kind of object is called a Data Object; a DO has data but no behavior. AccountDO's purpose is to map the database quickly without writing SQL directly in code. Whether you use an ORM like MyBatis or Hibernate, the database should first map to a DO, but the rest of the code should avoid depending on DOs directly.

2. Account is an Entity class based on domain logic, and its fields do not have to match the database storage format. An Entity contains data, but it should also contain behavior. In Account, fields are not just basic types such as String; wherever possible they should be replaced with Domain Primitives, which avoids large amounts of validation code.
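As an illustration, a Money Domain Primitive might look like the following. This is a minimal sketch under assumptions: the article never shows Money's implementation, and it uses its own Currency class where this sketch reuses java.util.Currency.

```java
import java.math.BigDecimal;
import java.util.Currency;
import java.util.Objects;

// A minimal Money Domain Primitive: immutable, self-validating,
// and the only place that knows amount + currency belong together.
final class Money {
    private final BigDecimal amount;
    private final Currency currency;

    public Money(BigDecimal amount, Currency currency) {
        // Validation lives inside the DP, so callers can never
        // hold an invalid Money value.
        this.amount = Objects.requireNonNull(amount, "amount");
        this.currency = Objects.requireNonNull(currency, "currency");
    }

    public Money add(Money other) {
        checkSameCurrency(other);
        return new Money(this.amount.add(other.amount), this.currency);
    }

    public Money subtract(Money other) {
        checkSameCurrency(other);
        return new Money(this.amount.subtract(other.amount), this.currency);
    }

    public int compareTo(Money other) {
        checkSameCurrency(other);
        return this.amount.compareTo(other.amount);
    }

    private void checkSameCurrency(Money other) {
        if (!this.currency.equals(other.currency)) {
            throw new IllegalArgumentException("Currency mismatch");
        }
    }

    public BigDecimal getAmount() { return amount; }
    public Currency getCurrency() { return currency; }
}
```

Because the currency check sits inside Money itself, every caller that adds or compares amounts gets the validation for free instead of repeating it.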

The DAO and Repository classes are compared as follows:

1. A DAO corresponds to operations on a specific type of database and is essentially an encapsulation of SQL. All of its operations work on DO classes, and all of its interfaces can change with the database implementation; for example, insert and update are database-specific operations.

2. A Repository corresponds to the abstract reading and storage of Entity objects. The interface is unified and pays no attention to the underlying implementation; for example, when saving an Entity, whether that becomes an Insert or an Update does not matter. The Repository implementation class calls DAOs to perform the operations and converts between AccountDO and Account through a Builder/Factory object.

The advantages of introducing Repository and Entity:

1. The Account object keeps other business logic from coupling directly to the database, avoiding the situation where a database field change forces changes across a large amount of business logic.

2. Repository changes the way business code thinks: business logic is no longer programmed against a database, but against a domain model.

3. An Account is a complete in-memory object, so full test coverage, including of its behavior, is relatively easy to achieve.

4. Repository is an interface class, so it is easy to Mock or Stub and therefore easy to test.

5. The AccountRepositoryImpl implementation class has a single responsibility, focusing only on the Account-to-AccountDO mapping and the Repository-method-to-DAO mapping, and is relatively easy to test.
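To sketch that testability (hypothetical code with simplified types, not from the original article: the real Account and AccountRepository use AccountId, Money, and so on), the Repository contract can be stubbed with a plain in-memory map, so tests never touch MyBatis or a database:

```java
import java.util.HashMap;
import java.util.Map;

// Trimmed stand-ins for the article's domain types, just enough
// to show the idea.
class TestAccount {
    final long userId;
    long available;

    TestAccount(long userId, long available) {
        this.userId = userId;
        this.available = available;
    }
}

interface TestAccountRepository {
    TestAccount find(long userId);
    void save(TestAccount account);
}

// An in-memory stub: no MyBatis, no database, no Spring context.
// Tests that use it start instantly and run deterministically.
class InMemoryAccountRepository implements TestAccountRepository {
    private final Map<Long, TestAccount> store = new HashMap<>();

    @Override
    public TestAccount find(long userId) {
        return store.get(userId);
    }

    @Override
    public void save(TestAccount account) {
        store.put(account.userId, account);
    }
}
```

A test then exercises business logic against this stub and asserts on the resulting balances, with no I/O at all.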

Abstract third-party services

Similar to the database abstraction, all third-party services need to be abstracted to solve the problems of uncontrollable third-party services and strong coupling of input and output parameters. In this example we abstract an ExchangeRateService interface and an ExchangeRate Domain Primitive class:

public interface ExchangeRateService {
    ExchangeRate getExchangeRate(Currency source, Currency target);
}

public class ExchangeRateServiceImpl implements ExchangeRateService {

    @Autowired
    private YahooForexService yahooForexService;

    @Override
    public ExchangeRate getExchangeRate(Currency source, Currency target) {
        if (source.equals(target)) {
            return new ExchangeRate(BigDecimal.ONE, source, target);
        }
        BigDecimal forex = yahooForexService.getExchangeRate(source.getValue(), target.getValue());
        return new ExchangeRate(forex, source, target);
    }
}

Anti-Corruption Layer (ACL)

This common design pattern is called the Anti-Corruption Layer (ACL). Most systems depend on other systems, and a dependency may have unreasonable data structures, APIs, protocols, or technical implementations; depending on it directly lets the external system "corrode" ours. By adding an anti-corruption layer between the systems, external dependencies are effectively separated from internal logic, so that no matter how the outside changes, the internal code can stay as stable as possible.

ACLs are more than just a pass-through call layer. In practice, an ACL can provide more powerful capabilities:

1. Adapter: In many cases, external data, interfaces and protocols do not conform to internal specifications. Through the adapter mode, the data conversion logic can be encapsulated inside the ACL to reduce the intrusion into business code. In this case, we convert each other’s incoming and outgoing parameters by encapsulating ExchangeRate and Currency objects to make the incoming and outgoing parameters more appropriate to our standards.

2. Cache: For external dependencies that are frequently invoked and data changes are infrequent, caching logic can be embedded in the ACL to effectively reduce the request pressure for external dependencies. At the same time, caching logic is often written in business code, so embedding caching logic in ACLs can reduce the complexity of business code.

3. Fallback: If an external dependency has poor stability, an effective strategy for improving our own stability is to let the ACL act as a fallback, for example returning the latest successfully cached value or default business data when the dependency fails. Fallback logic is usually complex and hard to maintain when scattered through core business code; concentrating it in the ACL makes it easier to test and modify.

4. Easy to test: Like the previous Repository, the ACL interface class can be easily Mock or Stub for unit testing.

5. Function switch: Sometimes we want to enable or disable the function of an interface in certain scenarios, or make an interface return a specific value. We can configure function switch in ACL to achieve this without affecting the real business code. Also, using functional switches makes it easy to implement Monkey tests without actually physically turning off external dependencies.
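Points 2 and 3 can be sketched together (hypothetical code, not from the article): an ACL implementation that wraps the external call with a cache which doubles as a fallback when the dependency fails.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical external dependency: may be slow or may throw.
interface ForexClient {
    double fetchRate(String source, String target);
}

// ACL that adapts the external call, caches results, and falls
// back to the last known good value when the service fails.
class CachingExchangeRateAcl {
    private final ForexClient client;
    private final Map<String, Double> lastGood = new HashMap<>();

    CachingExchangeRateAcl(ForexClient client) {
        this.client = client;
    }

    double getRate(String source, String target) {
        String key = source + "->" + target;
        try {
            double rate = client.fetchRate(source, target);
            lastGood.put(key, rate);      // refresh the cache on success
            return rate;
        } catch (RuntimeException e) {
            Double cached = lastGood.get(key);
            if (cached != null) {
                return cached;            // fallback: last successful value
            }
            throw e;                      // no fallback available
        }
    }
}
```

A production version would add expiry and metrics, but the point stands: none of this logic leaks into the business code, which only ever sees `getRate`.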

Abstract middleware

Similar to the third-party service abstraction in 2.2, the purpose of abstracting the various middleware is to keep business code independent of middleware implementation logic. Because middleware usually needs to be generic, its interfaces are typically String or byte[], so serialization/deserialization logic often gets mixed into business logic, producing glue code. Abstracting middleware behind an ACL reduces this repetitive glue code.

In this case, we isolate the underlying Kafka implementation by encapsulating an abstract AuditMessageProducer and AuditMessage DP objects:

@Value
@AllArgsConstructor
public class AuditMessage {

    private UserId userId;
    private AccountNumber source;
    private AccountNumber target;
    private Money money;
    private Date date;

    public String serialize() {
        return userId + "," + source + "," + target + "," + money + "," + date;   
    }

    public static AuditMessage deserialize(String value) {
        // todo
        return null;
    }
}

public interface AuditMessageProducer {
    SendResult send(AuditMessage message);
}

public class AuditMessageProducerImpl implements AuditMessageProducer {

    private static final String TOPIC_AUDIT_LOG = "TOPIC_AUDIT_LOG";

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    @Override
    public SendResult send(AuditMessage message) {
        String messageBody = message.serialize();
        kafkaTemplate.send(TOPIC_AUDIT_LOG, messageBody);
        return SendResult.success();
    }
}

The specific analysis is similar to 2.2 and is skipped here.

Encapsulating business logic

In this case, a lot of business logic is mixed in with external dependencies, including amount calculation, account balance verification, transfer limits, and balance increases and decreases. This entanglement means the core computation logic can be neither effectively tested nor reused. Our solution is to encapsulate all business logic in Entities, Domain Primitives, and Domain Services:

Encapsulate stateless computing logic independent of entities with Domain Primitive

In this case, use ExchangeRate to encapsulate the ExchangeRate calculation logic:

BigDecimal exchangeRate = BigDecimal.ONE;
if (!sourceAccountDO.getCurrency().equals(targetCurrency)) {
    exchangeRate = yahooForex.getExchangeRate(sourceAccountDO.getCurrency(), targetCurrency);
}
BigDecimal sourceAmount = targetAmount.divide(exchangeRate, RoundingMode.DOWN);

becomes:

ExchangeRate exchangeRate = exchangeRateService.getExchangeRate(sourceAccount.getCurrency(), targetMoney.getCurrency());
Money sourceMoney = exchangeRate.exchangeTo(targetMoney);
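The ExchangeRate Domain Primitive itself is not shown in the article; a minimal sketch might look like the following (an assumption, simplified to work on BigDecimal amounts where the real class would work on Money values):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

// ExchangeRate encapsulates the rate plus the direction of the
// conversion, so the rounding and divide/multiply decision is
// made in exactly one place.
class ExchangeRate {
    private final BigDecimal rate;        // 1 source unit = rate target units
    private final String sourceCurrency;
    private final String targetCurrency;

    ExchangeRate(BigDecimal rate, String sourceCurrency, String targetCurrency) {
        this.rate = rate;
        this.sourceCurrency = sourceCurrency;
        this.targetCurrency = targetCurrency;
    }

    // Given a target amount, compute the source amount to deduct,
    // rounding down as the original transaction script did.
    BigDecimal exchangeTo(BigDecimal targetAmount) {
        return targetAmount.divide(rate, 2, RoundingMode.DOWN);
    }
}
```

With this in place, the "decimal or percentage? (source, target) or (target, source)?" questions from Problem 1 are answered once, inside the DP, instead of in every caller.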

Entity encapsulates the stateful behavior of a single object, including business validation

The Account entity class encapsulates all Account behaviors, including service verification, as follows:

@Data
public class Account {

    private AccountId id;
    private AccountNumber accountNumber;
    private UserId userId;
    private Money available;
    private Money dailyLimit;

    public Currency getCurrency() {
        return this.available.getCurrency();
    }

    // Deposit into the account
    public void deposit(Money money) {
        if (!this.getCurrency().equals(money.getCurrency())) {
            throw new InvalidCurrencyException();
        }
        this.available = this.available.add(money);
    }

    // Withdraw from the account
    public void withdraw(Money money) {
        if (this.available.compareTo(money) < 0) {
            throw new InsufficientFundsException();
        }
        if (this.dailyLimit.compareTo(money) < 0) {
            throw new DailyLimitExceededException();
        }
        this.available = this.available.subtract(money);
    }
}

The original business code can be simplified as:

sourceAccount.withdraw(sourceMoney);
targetAccount.deposit(targetMoney);

Encapsulate multi-object logic with Domain Services

In this case, we notice that the transfer operation involves two accounts at once, which means the behavior cannot be encapsulated in a single object. It also matters that this logic may change in the future, for example by adding a fee-deduction step. The logic does not belong in the original TransferService, nor in any Entity or Domain Primitive. We need a new class to hold behavior that spans domain objects; such an object is called a Domain Service.

Create an AccountTransferService class:

public interface AccountTransferService {
    void transfer(Account sourceAccount, Account targetAccount, Money targetMoney, ExchangeRate exchangeRate);
}

public class AccountTransferServiceImpl implements AccountTransferService {
    private ExchangeRateService exchangeRateService;

    @Override
    public void transfer(Account sourceAccount, Account targetAccount, Money targetMoney, ExchangeRate exchangeRate) {
        Money sourceMoney = exchangeRate.exchangeTo(targetMoney);
        sourceAccount.withdraw(sourceMoney);
        targetAccount.deposit(targetMoney);
    }
}

The original code is reduced to one line:

accountTransferService.transfer(sourceAccount, targetAccount, targetMoney, exchangeRate);

Analysis of the refactored result

The refactored code for this example looks like this:

public class TransferServiceImplNew implements TransferService {

    private AccountRepository accountRepository;
    private AuditMessageProducer auditMessageProducer;
    private ExchangeRateService exchangeRateService;
    private AccountTransferService accountTransferService;

    @Override
    public Result<Boolean> transfer(Long sourceUserId, String targetAccountNumber, BigDecimal targetAmount, String targetCurrency) {
        // Parameter validation
        Money targetMoney = new Money(targetAmount, new Currency(targetCurrency));

        // Read data
        Account sourceAccount = accountRepository.find(new UserId(sourceUserId));
        Account targetAccount = accountRepository.find(new AccountNumber(targetAccountNumber));
        ExchangeRate exchangeRate = exchangeRateService.getExchangeRate(sourceAccount.getCurrency(), targetMoney.getCurrency());

        // Business logic
        accountTransferService.transfer(sourceAccount, targetAccount, targetMoney, exchangeRate);

        // Save data
        accountRepository.save(sourceAccount);
        accountRepository.save(targetAccount);

        // Send the audit message
        AuditMessage message = new AuditMessage(sourceAccount.getUserId(), sourceAccount.getAccountNumber(), targetAccount.getAccountNumber(), targetMoney, new Date());
        auditMessageProducer.send(message);

        return Result.success(true);
    }
}

As you can see, the refactored code has the following characteristics:

1. The service logic is clear and the data storage is separated from the service logic.

2. Entity, Domain Primitive, and Domain Service are independent objects with no external dependencies, yet they contain all the core business logic and can be tested individually and completely.

3. The original TransferService no longer contains any computation logic; it only orchestrates components, delegating all logic to them. Services that contain only orchestration are called Application Services.
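As an example of testing the domain layer in isolation (a hypothetical sketch with simplified types: plain long amounts stand in for Money), the withdraw rules of the Account entity can be verified with in-memory assertions, no database or Spring context required:

```java
// Simplified account carrying the same withdraw rules as the
// article's Account entity: insufficient funds and daily limit.
class SimpleAccount {
    private long available;
    private final long dailyLimit;

    SimpleAccount(long available, long dailyLimit) {
        this.available = available;
        this.dailyLimit = dailyLimit;
    }

    void withdraw(long amount) {
        if (available < amount) {
            throw new IllegalStateException("insufficient funds");
        }
        if (dailyLimit < amount) {
            throw new IllegalStateException("daily limit exceeded");
        }
        available -= amount;
    }

    long getAvailable() {
        return available;
    }
}
```

Each rule and boundary condition becomes one fast, deterministic assertion, which is exactly the testability the transaction script could not offer.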

We can redraw a picture with the new structure:

Then, after rearrangement, the graph becomes:

We can find that by abstracting the external dependencies and encapsulating the internal logic, the application’s overall dependencies change:

1. The bottom layer is no longer the database but Entity, Domain Primitive and Domain Service. These objects do not depend on any external services or frameworks, but are pure in-memory data and operations. These objects are packaged as Domain Layer. The domain layer does not have any external dependencies.

2. Application Services are responsible for component orchestration, but they depend only on the abstract ACL and Repository classes, whose implementation classes arrive through dependency injection. Application Service, Repository, ACL, etc. are collectively called the Application Layer. The application layer depends on the domain layer, but not on any implementation.

3. Finally, the concrete implementation of ACL, Repository, etc. These implementations usually rely on external concrete technical implementation and framework, so they are collectively called Infrastructure Layer. Objects in Web frameworks such as Controllers are also typically in the infrastructure layer.

If we were to rewrite this code today, given the final dependency structure we would probably write the business logic in the Domain layer first, then the component orchestration in the Application layer, and only then the concrete implementation of each external dependency. This architectural approach and code organization is called Domain-Driven Design (DDD). DDD is therefore not one specific architecture, but the natural destination that Transaction Script code reaches when it is properly and continuously refactored.

DDD’s hexagonal architecture

In traditional code we tend to focus on the implementation details and conventions of each external dependency, but here we need to set that mindset aside and rethink the code structure. In the refactoring above, if we discard all the implementation details of Repository, ACL, Producer, etc., we find that each external abstract class is really just an input or an output, similar to the I/O nodes of a computer system. The same is true in the CQRS architecture, where all interfaces are divided into Command and Query. Everything other than I/O is the core logic of the application. On this basis, Alistair Cockburn proposed the Hexagonal Architecture in 2005, also known as the Ports and Adapters Architecture.
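
The port/adapter split can be made concrete with a tiny sketch. All names here (`ExchangeRateService`, the fixed-rate adapter, `CurrencyConverter`) are illustrative assumptions, not part of the original example:

```java
import java.math.BigDecimal;

// Port: an abstract I/O boundary, owned by the core application side.
interface ExchangeRateService {
    BigDecimal rate(String fromCurrency, String toCurrency);
}

// Adapter: one concrete way to satisfy the port. Swapping this for an
// HTTP client or a middleware binding does not touch the core logic.
class FixedTableExchangeRateAdapter implements ExchangeRateService {
    public BigDecimal rate(String from, String to) {
        if (from.equals(to)) return BigDecimal.ONE;
        if (from.equals("USD") && to.equals("CNY")) return new BigDecimal("7.0");
        throw new IllegalArgumentException("unknown pair " + from + "/" + to);
    }
}

// Core logic sees only the port, never the adapter.
class CurrencyConverter {
    private final ExchangeRateService rates;
    CurrencyConverter(ExchangeRateService rates) { this.rates = rates; }
    BigDecimal convert(BigDecimal amount, String from, String to) {
        return amount.multiply(rates.rate(from, to));
    }
}
```

From the core's point of view, the adapter is just one of the gray I/O areas on the edge of the hexagon.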

In this picture:

1. I/O sits at the outermost layer of the model

2. Each I/O adapter is in a gray area

3. Each Hex edge is a port

4. At the center of Hex is the core domain model of the application

With Hex, the organization of the architecture becomes a two-dimensional inside-outside relationship for the first time, rather than the traditional one-dimensional top-down one. In the Hex architecture, the UI layer, DB layer, and various middleware layers turn out to be virtually indistinguishable from one another: they are all just data inputs and outputs, not the top and bottom layers of a traditional architecture.

In addition to Hex (2005), Jeffrey Palermo's Onion Architecture (2008) and Robert Martin's Clean Architecture (2017) express very similar ideas. Apart from different names and points of emphasis, all of them organize the architecture as a two-dimensional inside-outside relationship. This also shows that DDD-based architectures end up looking quite similar. Herberto Graca has a very comprehensive diagram that covers most of the port classes found in the real world.

Code organization

To organize the code effectively and prevent lower-level code from depending on upper-level implementations, in Java we can use POM modules and their dependency relationships, while a Spring/SpringBoot container solves the problem of injecting concrete implementations dynamically at runtime. A simple dependency diagram looks like this:

Types module

The Types module is where Domain Primitives live. Because Domain Primitives are stateless logic that can be exposed externally, they often appear in external API signatures and therefore need to be a separate module. The Types module does not rely on any library and consists of pure POJOs.
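
A Domain Primitive is an immutable value object that validates itself on construction, so an invalid value can never exist downstream. The sketch below is illustrative; the `PhoneNumber` rules (digits only, 7-15 characters, 3-digit area code) are assumptions, not a real numbering plan:

```java
// A Domain Primitive: immutable, self-validating, with its own stateless logic.
class PhoneNumber {
    private final String number;
    PhoneNumber(String number) {
        if (number == null || !number.matches("\\d{7,15}")) {
            throw new IllegalArgumentException("invalid phone number: " + number);
        }
        this.number = number;
    }
    String value() { return number; }
    // Stateless domain logic can live on the primitive itself.
    String areaCode() { return number.substring(0, 3); }
    @Override public boolean equals(Object o) {
        return o instanceof PhoneNumber && number.equals(((PhoneNumber) o).number);
    }
    @Override public int hashCode() { return number.hashCode(); }
}
```

Any method that accepts a `PhoneNumber` instead of a raw `String` gets validation and equality semantics for free, which is exactly why these types belong in a module that external APIs can share.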

Domain module

The Domain module is the central home of the core business logic, including stateful Entities, Domain Services, and the interfaces for various external dependencies (such as Repository, ACL, middleware, etc.). The Domain module relies only on the Types module and is also pure POJO.
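
The mix of contents in the Domain module can be sketched in one file: a stateful Entity, a Domain Service that spans multiple Entities, and a dependency declared only as an interface. The fee rule and all names are illustrative assumptions:

```java
import java.math.BigDecimal;

// Dependency interface: declared here, implemented in Infrastructure.
interface TransferFeeRule { BigDecimal feeFor(BigDecimal amount); }

// Stateful Entity: guards its own invariants.
class BankAccount {
    private BigDecimal balance;
    BankAccount(BigDecimal balance) { this.balance = balance; }
    BigDecimal balance() { return balance; }
    void debit(BigDecimal amount) {
        if (balance.compareTo(amount) < 0) throw new IllegalStateException("insufficient funds");
        balance = balance.subtract(amount);
    }
    void credit(BigDecimal amount) { balance = balance.add(amount); }
}

// Domain Service: business logic that involves more than one Entity.
class TransferDomainService {
    private final TransferFeeRule feeRule;
    TransferDomainService(TransferFeeRule feeRule) { this.feeRule = feeRule; }
    void transfer(BankAccount from, BankAccount to, BigDecimal amount) {
        BigDecimal fee = feeRule.feeFor(amount);
        from.debit(amount.add(fee));  // fee policy can change without touching the Entities
        to.credit(amount);
    }
}
```

Everything above is plain Java with no framework imports, which is what makes the Domain module a pure-POJO layer.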

Application module

The Application module mainly contains the Application Services and related classes. It depends on the Domain module. Again, no framework: pure POJO.

Infrastructure module

The Infrastructure module contains the Persistence, Messaging, External, and other submodules. For example, the Persistence submodule contains the database DAO implementations, including Data Objects, ORM Mappers, Entity-to-DO converters, and so on. The Persistence submodule relies on a concrete ORM library such as MyBatis; if you use the annotation scheme provided by Spring-Mybatis, it also needs to rely on Spring.
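
The Persistence pattern can be sketched without a real ORM: the Repository implementation translates between the domain Entity and a storage-shaped Data Object via a DAO. Here an in-memory map stands in for the MyBatis mapper, and all names (`User`, `UserDO`, `UserDAO`, etc.) are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

class User {                       // domain Entity (lives in the Domain module)
    String id; String name;
    User(String id, String name) { this.id = id; this.name = name; }
}
class UserDO { String id; String userName; }   // Data Object mirrors the table schema

interface UserRepository { void save(User user); User find(String id); }   // interface: Domain module
interface UserDAO { void insertOrUpdate(UserDO row); UserDO selectById(String id); } // mapper shape

// Implementation class: Infrastructure / Persistence submodule.
class UserRepositoryImpl implements UserRepository {
    private final UserDAO dao;
    UserRepositoryImpl(UserDAO dao) { this.dao = dao; }
    public void save(User user) {             // Entity -> DO conversion
        UserDO row = new UserDO();
        row.id = user.id;
        row.userName = user.name;
        dao.insertOrUpdate(row);
    }
    public User find(String id) {             // DO -> Entity conversion
        UserDO row = dao.selectById(id);
        return row == null ? null : new User(row.id, row.userName);
    }
}

class InMemoryUserDAO implements UserDAO {    // stand-in for a MyBatis mapper
    private final Map<String, UserDO> table = new HashMap<>();
    public void insertOrUpdate(UserDO row) { table.put(row.id, row); }
    public UserDO selectById(String id) { return table.get(id); }
}
```

Because the Entity never leaks storage concerns (column names, nullability, ORM annotations), switching the DAO from MyBatis to another ORM only touches this submodule.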

Web module

The Web module contains Controller and other related code. If you use SpringMVC, you need to rely on Spring.

Start module

The Start module contains the SpringBoot bootstrap class.

Testing

1. The Types and Domain modules are pure POJOs without external dependencies and can essentially reach 100% unit-test coverage.

2. The Application module's code relies on abstracted external classes; all external dependencies need to be mocked through a testing framework, but it can still reach 100% unit-test coverage.
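
Mocking the abstracted dependencies can be as simple as handing the service a stub and a spy. In a real project you would typically use a mocking framework such as Mockito; the hand-rolled version below keeps the sketch self-contained, and every name in it is illustrative:

```java
import java.util.ArrayList;
import java.util.List;

// Abstract dependencies of the Application Service under test.
interface GreetingRepository { String templateFor(String locale); }
interface NotificationProducer { void send(String message); }

class GreetingAppService {
    private final GreetingRepository repo;
    private final NotificationProducer producer;
    GreetingAppService(GreetingRepository repo, NotificationProducer producer) {
        this.repo = repo; this.producer = producer;
    }
    String greet(String locale, String name) {
        String message = repo.templateFor(locale).replace("{}", name);
        producer.send(message);
        return message;
    }
}

class GreetingAppServiceTest {
    static void testGreetFormatsAndPublishes() {
        List<String> sent = new ArrayList<>();
        GreetingRepository stubRepo = locale -> "Hello, {}!";  // stub: canned answer
        NotificationProducer spyProducer = sent::add;          // spy: records calls
        GreetingAppService service = new GreetingAppService(stubRepo, spyProducer);
        String result = service.greet("en", "Bob");
        if (!result.equals("Hello, Bob!")) throw new AssertionError(result);
        if (sent.size() != 1) throw new AssertionError("expected exactly one message sent");
    }
}
```

No database, broker, or Spring context is started anywhere, which is why these tests stay fast enough to run on every build.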

3. The code in each Infrastructure submodule is relatively independent and has few interfaces, so unit tests are fairly easy to write. Because of the dependence on external I/O they are not very fast, but fortunately these modules change infrequently, so the effort is largely one-off.

4. There are two ways to test the Web module: through Spring's MockMVC, or by calling the interfaces with an HttpClient. Either way, it is best to mock out all the service classes the Controller depends on. Generally speaking, once all the logic is pushed down into Application Services, the Controller becomes extremely simple and it is easy to reach 100% coverage.

5. Start module: an application's integration tests are normally written in Start. When the unit tests of the other modules reach 100% coverage, the integration tests are used to verify the correctness of the whole end-to-end flow.

Rate of code evolution/change

In a traditional architecture, code changes at roughly the same rate from top to bottom: a change in requirements sweeps through from the interface to the business logic to the database, and a third-party change can force a rewrite of the entire codebase. In DDD, however, different modules evolve at different rates:

1. The Domain layer holds the core business logic and is modified often. For example, a transfer that originally incurred no fee may now require one. Entity handles logic changes within a single object, and Domain Service handles business-logic changes spanning multiple objects.

2. The Application layer corresponds to Use Cases. Business use cases describe the general direction of the requirements, so this layer's interface is relatively stable; in particular, external interfaces rarely change. A new use case can be supported by adding an Application Service or extending an interface.

3. The Infrastructure layer changes least frequently. Modules in this layer are typically upgraded only when external dependencies change, which happens far less often than changes to business logic.

Therefore, in a DDD architecture, the outer code is clearly more stable, while the inner code evolves faster, which truly embodies the "driven" in domain-driven design.

Conclusion

DDD is not a special architecture, but the ultimate destination of any traditional code that has been properly refactored. The DDD architecture can effectively solve the problems of traditional architecture:

1. High maintainability: when an external dependency changes, only the module that connects to it changes; the rest of the business logic stays the same.

2. High scalability: when building new features, most of the code can be reused; only the new core business logic needs to be added.

3. High testability: each module follows the single-responsibility principle and most do not depend on a framework, so unit tests run quickly and can reach 100% coverage.

4. Clear code structure: POM modules resolve the dependencies between modules, and every external-facing module can be reused independently as a Jar package. Once the team settles on the conventions, anyone can quickly locate the relevant code.