Sorry, I have deleted my previous DDD articles. This article summarizes them and corrects some of the mistaken views they contained.

Domain-Driven Design (DDD) is a methodology for modeling business domains and designing business architecture. In the strategic design stage, domain boundaries are drawn from the perspective of the business domain, and domain models are established by abstracting the business. In the tactical design stage, the architecture is designed and implemented according to those clear domain boundaries and domain models.

DDD addresses the design of complex core business, simplifies the implementation of business systems, keeps business logic highly cohesive, and decouples it from infrastructure and frameworks. Clear domain boundaries make it easier to split microservices, and aggregates stay loosely coupled with each other by referencing other aggregates only through aggregate-root IDs and communicating through domain events. This loose coupling keeps project code clean as business requirements evolve over time.

In this article, I will share my current understanding of DDD, along with my reflections on the problems I ran into while practicing it on a recent project. This is only personal experience, and my focus is mainly on tactical design.

Understand DDD core concepts from project practice

A domain usually refers to a scope of business, and each company has its own clearly defined business scope. A company usually has many internal systems; for example, an e-commerce company may have a logistics system, an e-commerce system, a live-streaming system, and so on, and each system belongs to a more finely subdivided domain.

The Jasmine Red Stock Exchange (Red Inspector) project is the first project I worked on after joining Jasmine Digital Group, and it is a brand-new project. Since there was no historical burden and I built the whole project from zero, I chose it to try out DDD.

The business of this project is OTO (online-to-offline) store exploration, and OTO store exploration is the domain of this project.

Store exploration (tadian) is a content-marketing model: merchants pay to place orders online, the platform matches merchants with experts, and the experts visit the stores offline to produce promotional content, for example tasting food or visiting scenic spots without a ticket. The experts ultimately promote the merchants through short videos, live streams, graphic content, and other channels. So whether it is exploring restaurants, amusement parks, or shops, store exploration is the core of this domain.

In the store exploration domain, the core business terms include: merchants, experts (masters), stores, orders, and tasks, while the core events include: stores onboarding, merchants issuing orders, experts accepting orders, and so on.

Attached is the architectural design diagram for the corresponding version of this project:

Tip: in the middle section, each bounded context does not necessarily map to its own microservice; multiple bounded contexts can be combined in one microservice. You can refer to my other article, "Why is the project structured this way?".

A bounded context is the boundary of business concepts and the smallest-granularity division of a business problem. In the OTO store exploration business there are multiple bounded contexts. We decouple the system by identifying these bounded contexts, requiring each one to be tightly organized internally, with clear responsibilities and high cohesion.

The bounded contexts we partitioned are shown below.

Tip: why split tasks and orders into different bounded contexts (the task is not an entity under the order aggregate, but the aggregate root of a separate aggregate)? Because an order issued by a merchant can be accepted by multiple different experts, and an expert can also accept orders from different merchants, this is not a simple one-to-many relationship. It is more like the relationship between products and orders than between orders and order items.

After dividing the bounded contexts, the problem subdomains need to be identified from them. A problem subdomain is a division of a business problem at a larger granularity than a bounded context.

  • Core (sub-)domain: the core competitiveness and profit source of the product;
  • Generic (sub-)domain: common across different domains, with off-the-shelf solutions that can often be bought;
  • Supporting (sub-)domain: neither core nor generic, with personalized requirements, used to support the operation of the core domain.

Based on the bounded contexts, we divide the subdomains as shown in the figure below.

  • OTO store exploration core domain: merchants issuing orders, platform reviewing orders, experts accepting orders, platform reviewing tasks, experts backfilling content links, etc.
  • Store supporting subdomain: store onboarding, platform auditing stores, merchants binding stores, store transfer, etc.

After subdomains are demarcated, we need to model the domain.

Domain modeling abstracts the business into aggregates, entities, aggregate roots, and value objects, which encapsulate and host all business logic, keeping the business highly cohesive and loosely coupled.

Aggregate: encapsulates business logic, condenses decision commands and domain events, and contains the aggregate root, entities, and value objects.

  • Aggregate root: itself an entity, the root node of the aggregate, such as an order;
  • Entity: the backbone of an aggregate, with a unique identity and lifecycle, such as an order item;
  • Value object: an additional business concept attached to an entity, describing business information the entity contains, such as an order pickup address.

Seventy percent of the time, there is only one entity within an aggregate: the aggregate root.
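To make these building blocks concrete, here is a minimal, self-contained sketch in Java. The names (`Order`, `OrderItem`, `PickupAddress`) and the rule inside `addItem` are illustrative only, not code from the project:

```java
import java.util.ArrayList;
import java.util.List;

// Value object: immutable, no identity, described only by its attributes
final class PickupAddress {
    final String city;
    final String street;
    PickupAddress(String city, String street) {
        this.city = city;
        this.street = street;
    }
}

// Entity: has its own identity and lifecycle inside the aggregate
class OrderItem {
    final long itemId;
    final long productId;
    final int quantity;
    OrderItem(long itemId, long productId, int quantity) {
        this.itemId = itemId;
        this.productId = productId;
        this.quantity = quantity;
    }
}

// Aggregate root: the only entry point for modifying the aggregate
class Order {
    final long orderId;
    private final PickupAddress address;
    private final List<OrderItem> items = new ArrayList<>();

    Order(long orderId, PickupAddress address) {
        this.orderId = orderId;
        this.address = address;
    }

    // The business rule lives inside the aggregate, not in the caller
    void addItem(long itemId, long productId, int quantity) {
        if (quantity <= 0) {
            throw new IllegalArgumentException("quantity must be positive");
        }
        items.add(new OrderItem(itemId, productId, quantity));
    }

    int itemCount() { return items.size(); }
}
```

Note that an `OrderItem` can only be created through the aggregate root's `addItem` method, which is exactly where the invariant is enforced.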

Technically, an aggregate is a package that houses domain services, factories, repositories, the aggregate root, entities, and value objects.

The domain-layer packages are organized as follows:

- bounded context
  - domain
    - aggregate A
      (aggregate root, entities, value objects, domain services, repository, domain events)
    - aggregate B
      (aggregate root, entities, value objects, domain services, repository, domain events)

Note in particular that a bounded context may contain multiple aggregates, but an aggregate can exist in only one bounded context.

The package division above covers only the domain layer; the aggregate root, entities, value objects, domain services, repository, and domain event classes must be stored under the aggregate's package, whether you use the classic DDD four-layer architecture or a hexagonal architecture.

Taking the store context as an example, we adopted a hexagonal architecture; the layers and packages of the whole module are divided as follows:

com.mgm.hjs.storecontext (bounded context)
- adapters (adapter layer)
  - api
    - webmvc (front-end interface; requests come in from the gateway)
    - dubbo (internal microservice calls)
  - persistence
    - dao (MyBatis Mapper classes and XML files)
    - po (classes generated from the database tables)
    - StoreAssembler.class (converter between domain entities and POs)
    - StoreRepositoryImpl.class (implementation of the domain-layer repository interface)
  - cache (cache adapter)
- application (application layer)
  - gateway (abstract interfaces for communicating with other bounded contexts; implemented by the adapter layer)
  - job (optional; scheduled tasks)
  - representation (write only)
  - usecase (application services; split by use case to avoid single-class bloat)
  - assembler (aggregate root to DTO converters)
  - dto (DTO classes)
  - cqe (CQE pattern)
    - command (write request parameters)
    - query (query parameters)
    - event (event parameters; note: not domain events)
- domain (domain layer)
  - store (store aggregate)
    - model (aggregate root, entities, value objects)
    - event (domain events)
    - StoreDomainService.class (domain service)
    - StoreRepository.class (abstract repository interface)

In the layered architecture pattern, we follow a strict rule: upper layers may depend on lower layers, but lower layers must not depend on upper layers.

For example, in an order creation operation, CreateOrderCommand, which receives the front-end request parameters, is an application-layer class. We can use CreateOrderCommand directly in the Controller (interface layer): this is an upper layer depending on a lower layer, it does not involve the domain layer, and it does not expose the internal structure of the aggregate root, so it is allowed.

Conversely, if the CreateOrderCommand object were passed directly to the aggregate root, a lower layer would depend on an upper layer, which is not allowed. Instead, the application layer must disassemble CreateOrderCommand into the value objects or entities needed to create the order, then call the domain service method, which calls the order factory to create the order before passing it to the repository to persist the order aggregate root.
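A minimal sketch of that disassembly step, with hypothetical class names (`CreateOrderCommand`, `OrderLine`, `CreateOrderUseCase`) standing in for the project's real types:

```java
// Application-layer request object; the fields are hypothetical
class CreateOrderCommand {
    final long productId;
    final int quantity;
    CreateOrderCommand(long productId, int quantity) {
        this.productId = productId;
        this.quantity = quantity;
    }
}

// Domain-layer value object; it knows nothing about the command class
final class OrderLine {
    final long productId;
    final int quantity;
    OrderLine(long productId, int quantity) {
        this.productId = productId;
        this.quantity = quantity;
    }
}

// Application service: unpacks the command and passes only domain objects
// downward, so the domain layer never depends on the application layer
class CreateOrderUseCase {
    OrderLine toDomainObject(CreateOrderCommand command) {
        return new OrderLine(command.productId, command.quantity);
    }
}
```

The command object stops at the application layer; only the value object crosses into the domain layer.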

Within the domain layer (more precisely, in the aggregate package) we store the aggregate, entities, value objects, repository, domain services, and aggregate-root factory classes. Because the repository implementation depends on an ORM or other persistence framework to persist the aggregate root, and event publishing depends on MQ, the repository is defined in the domain layer as an interface implemented by the upper layers, and event publishing is likewise defined as an interface implemented by the upper layers.

Since domain services rely on repositories for retrieving and storing aggregate roots, and on event publishers for publishing events, we can use the Spring framework's dependency injection for both, but constructor injection should be used instead of field annotations. This avoids null repositories and event publishers, and keeps Spring annotations out of the domain layer (as decoupled as possible).

Responsibilities of application services, domain services, aggregate roots, and repositories

When implementing DDD, we need to adhere to strict code conventions to keep the code clean. Otherwise, as requirements iterate, a project can easily lose its DDD shape and become neither DDD nor MVC.

In DDD, a Repository is a container for the aggregate root and plays a role similar to a DAO, but it only provides operations to persist the aggregate root (add or update) and to query the aggregate root by ID. Of all domain objects, only the aggregate root has a Repository, because unlike a DAO, a Repository's sole job is to supply aggregate roots to the domain model.

The job of a Repository is to provide or persist aggregate roots and, as much as possible, do nothing else; otherwise the Repository degenerates into a DAO.

public interface Repository<DO, KEY> {
    void save(DO obj);
    DO findById(KEY id);
    void deleteById(KEY id);
}

The aggregate root and the domain service (DomainService) encapsulate the business logic, while the application service (ApplicationService) does not process business logic itself; it simply orchestrates calls to domain service and aggregate root methods.

Normally, processing a business request will go through:

Application service -> domain service -> get aggregate root from the repository -> persist aggregate root through the repository -> publish domain events

But it is also allowed:

Application service -> get aggregate root from the repository -> persist aggregate root through the repository -> publish domain events

For business operations that cannot be completed by the aggregate root alone, domain services are required.

But the principles must be observed:

  • An aggregate root cannot operate on other aggregate roots directly; it may only reference them by aggregate-root ID;
  • Domain services of different aggregates within the same bounded context can be invoked directly;
  • Interaction between two bounded contexts must go through an abstract interface in the application service layer, implemented by the adapter layer.
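The first principle can be sketched as follows; the `Task` class and its fields are hypothetical, but mirror the task/order relationship described earlier:

```java
// Hypothetical Task aggregate root: it references the order it was accepted
// from and the expert (master) who accepted it only by aggregate-root IDs,
// never by holding the Order or Master objects themselves.
class Task {
    final long taskId;
    final long orderId;   // ID reference to the Order aggregate
    final long masterId;  // ID reference to the expert (master) aggregate
    Task(long taskId, long orderId, long masterId) {
        this.taskId = taskId;
        this.orderId = orderId;
        this.masterId = masterId;
    }
}
```

Anyone who needs the full Order must load it through the order repository using `orderId`; the Task aggregate never reaches into another aggregate's internals.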

An aggregate-root factory is responsible for creating the aggregate root, but a separate factory class is not required: you can simply move the creation logic onto the aggregate root itself as a static method.
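A sketch of this static-factory style, with a hypothetical `Store` aggregate root and a made-up validation rule:

```java
class Store {
    final long storeId;
    final String name;

    private Store(long storeId, String name) {
        this.storeId = storeId;
        this.name = name;
    }

    // Creation logic lives on the aggregate root itself as a static factory,
    // replacing a separate factory class
    static Store register(long storeId, String name) {
        if (name == null || name.isEmpty()) {
            throw new IllegalArgumentException("store name is required");
        }
        return new Store(storeId, name);
    }
}
```

The private constructor forces all creation through `register`, so the creation invariants cannot be bypassed.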

Taking modifying user information as an example: in the application service, obtain the user aggregate root from the repository, invoke the aggregate root's method for modifying user information, and persist the aggregate root through the repository.

public class UserModifyInfoUseCase {

    private final AccountRepository repository;
    private final AccountCache accountCache;

    public UserModifyInfoUseCase(AccountRepository repository, AccountCache accountCache) {
        this.repository = repository;
        this.accountCache = accountCache;
    }

    /**
     * Update user information
     *
     * @param command     the modification parameters
     * @param loginUserId the id of the logged-in user
     */
    @Transactional(rollbackFor = Throwable.class, isolation = Isolation.READ_COMMITTED)
    public void updateUserInfo(ModifyUserInfoCommand command, Long loginUserId) {
        // Get the aggregate root
        Account account = findByAccountId(loginUserId);
        // Call the business method
        account.modifyAccountInfo(AccountInfoValobj.builder()
                .nickname(command.getNickname())
                .avatarUrl(command.getAvatarUrl())
                .country(command.getCountry())
                .province(command.getProvince())
                .city(command.getCity())
                .gender(Sex.valueBy(command.getGender()))
                .build());
        // Persist through the repository
        repository.save(account);
        // Update the cache
        accountCache.cache(loginUserId, getUserById(account.getId()));
    }
}

In this case, the user aggregate root can see its own information, and the user can modify it directly through the aggregate root, so no domain service is needed in this scenario.

Complex scenarios, such as binding a mobile phone number to a user, cannot be completed by the aggregate root alone.

The procedure for binding a mobile phone number is: obtain the SMS verification code, verify it, and check whether the phone number is already bound to another account.

Obtaining and checking the SMS verification code should be completed in the application service, while checking whether the phone number is already bound to another account must be completed by a domain service: the aggregate root cannot make this judgment because it cannot see other accounts and must not hold a repository, and the application service must not contain business logic.

Aggregate root

public class Account extends BaseAggregate<AccountEvent> {
    // ...
    private String phone;

    public void bindMobilePhone(String phoneNumber) {
        if (!StringUtils.isEmpty(this.phone)) {
            throw new AccountParamException("The mobile phone number is already bound. To update it, go through the change-phone-number process.");
        }
        this.phone = phoneNumber;
    }
}

Domain service

@Service
public class AccountDomainService {

    private final AccountRepository repository;

    public AccountDomainService(AccountRepository repository) {
        this.repository = repository;
    }

    public void bindMobilePhone(Long userId, String phone) {
        Account account = repository.findById(userId);
        if (account == null) {
            throw new AccountNotFoundException(userId);
        }
        // Check whether the number is already bound to another account
        boolean exist = repository.findByPhone(phone) != null;
        if (exist) {
            throw new AccountBindPhoneException(phone);
        }
        account.bindMobilePhone(phone);
        repository.save(account);
    }
}

Application service

@Service
public class UserBindPhoneUseCase {

    // verifyCodeCache, messageClientGateway, accountDomainService, accountCache
    // and timeout are injected via the constructor (omitted here)

    /**
     * Bind mobile phone number - send the verification code
     *
     * @param command     the phone number to send the code to
     * @param loginUserId the id of the logged-in user
     */
    public void bindMobilePhoneSendVerifyCode(VerifyCodeSendCommand command, Long loginUserId) {
        // Generate a verification code
        String verifyCode = ValidCodeUtils.generateNumberValidCode(4);
        // Cache the verification code
        verifyCodeCache.save(command.getPhone(), verifyCode, timeout);
        // Invoke the message service to send the verification code
        messageClientGateway.sendSmsVerifyCode(command.getPhone(), verifyCode);
    }

    /**
     * Bind mobile phone number - submit the binding
     *
     * @param command the phone number and verification code
     * @param userId  the id of the logged-in user
     */
    public void bindMobilePhone(BindPhoneCommand command, Long userId) {
        // Check the verification code
        String verifyCode = verifyCodeCache.get(command.getPhone());
        if (!command.getVerifyCode().equalsIgnoreCase(verifyCode)) {
            throw new VerifyPhoneCodeApplicationException();
        }
        // Bind the phone number through the domain service
        accountDomainService.bindMobilePhone(userId, command.getPhone());
        // Update the account cache
        accountCache.cache(userId, getUserById(userId));
    }
}

The interface layer

@RestController
@RequestMapping("account/bindMobilePhone")
public class UserBindPhoneController {

    @Resource
    private UserBindPhoneUseCase useCase;

    @ApiOperation("Bind phone number - get verification code")
    @GetMapping("/verifyCode")
    public Response<Void> bindMobilePhone(@RequestParam("phone") String phone) {
        Long userId = WebUtils.getLoginUserId();
        useCase.bindMobilePhoneSendVerifyCode(new VerifyCodeSendCommand(phone), userId);
        return Response.success();
    }

    @ApiOperation("Bind phone number - submit binding")
    @PostMapping("/submit")
    public Response<Void> bindMobilePhone(@RequestBody @Validated BindPhoneCommand command) {
        Long userId = WebUtils.getLoginUserId();
        useCase.bindMobilePhone(command, userId);
        return Response.success();
    }
}

CQE mode

CQE stands for Command, Query, and Event. Command objects receive write requests from the front end (such as creating an order), Query objects receive front-end paging and query requests, and Event objects receive the parameters of consumed events (not domain events).

With the exception of Event, all write requests should receive their arguments via a Command, and all queries via a Query; the Query may be omitted only when a query takes a single ID parameter.

With read/write separation, the Query is passed straight through to the DAO (interface layer -> application layer -> DAO). Encapsulating query conditions in a Query object improves method reuse, and adding a query condition means adding a field rather than another method parameter.
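A minimal sketch of Command and Query classes under these conventions; the fields are hypothetical:

```java
// Command: parameters of a write request
class BindPhoneCommand {
    final String phone;
    final String verifyCode;
    BindPhoneCommand(String phone, String verifyCode) {
        this.phone = phone;
        this.verifyCode = verifyCode;
    }
}

// Query: parameters of a read request; adding a new query condition
// means adding a field here, not another method parameter downstream
class OrderQuery {
    final int page;
    final int pageSize;
    final String status;
    OrderQuery(int page, int pageSize, String status) {
        this.page = page;
        this.pageSize = pageSize;
        this.status = status;
    }
}
```

Because every layer takes the whole `OrderQuery`, extending the query does not change any method signatures along the interface-application-DAO path.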

CQRS mode

CQRS (Command Query Responsibility Segregation) separates commands from queries. A software model has a read model and a write model, and as experience writing business code shows, a request is either a "command" that performs an operation or a "query" that returns data to the caller; it cannot be both. CQRS represents commands and queries with separate object models.

CQRS reads are placed in the application layer.

Shared Storage – Shared Model -CQRS

A shared store stores data in the same table structure, and a shared model reads data from a database using an aggregate root.

For example, to query the details of an order, the aggregate root is Order.

// Order aggregate root
public class Order extends BaseAggregate {
}

At the application layer, OrderDetailsUseCase queries the order aggregate root through OrderRepository and calls the assembler to convert the aggregate root into the read model.

public class OrderDetailsUseCase {
    public OrderDto byId(String id) {
        Order order = orderRepository.byId(id);
        return orderDaoAssembler.toDto(order);
    }
}
  • The OrderDaoAssembler converts the Order into the read model, that is, a DO into a DTO.

Note: do not put read and write operations in the same application service; to avoid coupling, application services should be split into multiple classes by use case so that no single class becomes bloated.

Shared Storage – Read/write separation model -CQRS

The shared storage, read/write separation model means reads and writes still operate on the same tables, but only writes go through the aggregate root; reads bypass the aggregate root and the Repository and query the database directly. The read model here is used to load data from the database, needs no further conversion before being returned to the caller, and is in fact the DTO.

For queries within a single aggregate root, the shared storage, read/write separation model can handle complex query scenarios and improve performance.

For queries that span multiple aggregate roots, the shared-storage shared model cannot meet the requirement: querying each aggregate root separately and merging the results turns a simple task into a complicated one and hurts performance, so the shared storage, read/write separation model is all the more necessary.

For example, when querying order details you may want to include product information. If products and orders live in the same service and the same database, you can use a multi-table join query.


The interface layer

@RequestMapping("/order")
@RestController
public class OrderQueryController {

    @GetMapping("/query")
    public Response<PageInfo<OrderQueryDto>> queryOrder(OrderQuery query) {
        return Response.success(orderQueryUseCase.queryOrder(query, WebUtils.getLoginUserId()));
    }
}

The application layer

@Service
public class OrderQueryUseCase implements Cqrs {

    public PageInfo<OrderQueryDto> queryOrder(OrderQuery query, Long loginUserId) {
        Long merchantId = merchantGateway.getMerchantId(loginUserId);
        IPage<OrderQueryDto> orderPage = new Page<>(query.getPage(), query.getPageSize());
        List<OrderQueryDto> orders = orderMapper.selectOrderBy(merchantId, query, orderPage);
        PageInfo<OrderQueryDto> pageInfo = new PageInfo<>(query.getPage(), query.getPageSize());
        pageInfo.setTotalCount((int) orderPage.getTotal());
        pageInfo.setList(orders);
        return pageInfo;
    }
}

Read/write split storage – Read/write split model

That is, reads and writes operate on different databases. Take querying order details with product information as an example: if products are one microservice and orders another, and the two microservices use different databases, then to improve performance you need an additional data synchronization service that merges orders and products into a new table (split storage) or stores the merged result in a NoSQL database. Data synchronization can be implemented by consuming the underlying database's binlog via Binlog+Kafka, or by consuming domain events, which affects application performance.

Note: for complex report statistics, it is recommended to use Binlog+Kafka to synchronize data into a large table; to avoid coupling with business services, this should be an independent data service.

Publication of domain events

There is a principle in DDD that one business use case corresponds to one transaction, and one transaction corresponds to one aggregate root, meaning that only one aggregate root may be operated on at a time.

However, in practice a business use case often needs to modify multiple aggregate roots, and the different aggregate roots may live in different bounded contexts. Introducing domain events lets us modify only one aggregate root per transaction without breaking DDD, and also decouples bounded contexts from each other.

In DDD, the domain layer is the concrete implementation of the business logic: all the business code that solves a problem subdomain is highly cohesive within its bounded context, and within an aggregate it is cohesive across the aggregate root, entities, and domain services.

For domain event publishing, our implementation temporarily stores events in the aggregate root and publishes them at the end in the domain service or application service. The domain layer abstracts an event-publishing interface, which is implemented by the adapter layer and injected into the domain service or application service.

One reason is that the domain layer should not depend on the APIs of other frameworks; the other is that domain events are created by the aggregate root or domain service.

So why are domain events created by the aggregate root or domain service, and not at the application layer?

Domain events are naturally raised at the domain layer, either by the aggregate root or by a domain service. Business logic is highly cohesive within the aggregate, and only inside the aggregate is it clearest when an event should be raised and what it should contain. Application services merely orchestrate the steps of the business implementation.

Since the core business logic lives inside the aggregate root, and the aggregate root is an entity, we cannot reasonably pass an event publisher in every time an aggregate root is constructed; who would pass in the publisher when the aggregate root is fetched from the repository?

Therefore, the recommended practice is to temporarily store events raised by the aggregate root inside the aggregate root, and publish the domain events in the domain service after the repository has been invoked to persist the aggregate root. When there is no domain service, the application layer publishes the domain events after invoking the repository.
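The buffering-and-publishing mechanism described above can be sketched as follows. `BaseAggregate`, `EventPublisher`, and the event class here are simplified, hypothetical versions of the project's own types:

```java
import java.util.ArrayList;
import java.util.List;

// Domain-layer abstraction; implemented by the adapter layer (e.g. over MQ)
interface EventPublisher {
    void publish(Object event);
}

// Base aggregate temporarily stores events raised by business methods
abstract class BaseAggregate {
    private final List<Object> pendingEvents = new ArrayList<>();
    protected void raise(Object event) { pendingEvents.add(event); }
    List<Object> pullEvents() {
        List<Object> events = new ArrayList<>(pendingEvents);
        pendingEvents.clear();
        return events;
    }
}

class AccountBoundPhoneEvent {
    final long accountId;
    AccountBoundPhoneEvent(long accountId) { this.accountId = accountId; }
}

class Account extends BaseAggregate {
    final long id;
    private String phone;
    Account(long id) { this.id = id; }
    void bindMobilePhone(String phone) {
        this.phone = phone;
        raise(new AccountBoundPhoneEvent(id)); // event created where the decision is made
    }
}

// The domain service publishes the buffered events only after persistence succeeds
class AccountDomainService {
    private final EventPublisher publisher;
    AccountDomainService(EventPublisher publisher) { this.publisher = publisher; }
    void afterSave(Account account) {
        account.pullEvents().forEach(publisher::publish);
    }
}
```

The aggregate root never touches the publisher; it only records what happened, and the domain service drains the buffer after the repository call.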

An application-service business use case should be only a thin wrapper around the corresponding domain service method; that is, one application service method should not invoke two domain service methods. This is something to watch for: if it happens, the domain service method is not well enough encapsulated. So publishing domain events from domain services is allowed, but the event publisher must be abstracted as an interface and passed in through the domain service's constructor, the same way as the repository.

On why we first publish events through the Spring framework and then, in subscribers, publish them on to MQ:

An event may also need to be consumed within the current bounded context; that is, multiple bounded contexts may need to consume it, so one event corresponds to multiple consumers.

For example, the order-created event produced by the order aggregate in the order bounded context needs to be consumed to construct a message notification, which is then written to the notification queue; meanwhile, other bounded contexts also need to consume the order-created event. A single domain event may therefore need to be published to multiple destinations.

Publishing events through the Spring framework first and then forwarding them to MQ in subscribers is essentially using Spring to implement a chain of responsibility. This is certainly not required, nor is it a norm.

However, rather than calling MQ's API directly in the domain service or application service, publishing events to MQ should be done in the adapter layer. Another advantage of encapsulating the event publisher is that when you need to guarantee at-least-once delivery, that logic does not have to be written into the application service.

The most annoying code

The most common code we wrote in this DDD project was converters: DO (aggregate root) to DTO (read model), DO to PO (mapped to database tables), and PO back to DO. Eighty percent of our bugs came from these tidy property-copying snippets, where it is easy to miss a field.

So why do we need so many layers of conversion? Why not persist the aggregate root directly, straight from the request?

First, in DDD we must first get the aggregate root, then use the aggregate root to complete the business logic, and finally persist the aggregate root through the repository.

Why is it necessary to convert DO to PO? Is this really necessary?

If we persist the aggregate root in a relational database, we may need to split it across multiple tables, and we may need to convert enumeration types into numeric types for storage. For these scenarios, the aggregate root is converted into POs, and the DAOs of the corresponding tables are then called to store them in the database.

Why DO to DTO?

Besides the rule that we must never expose the internal structure of the aggregate root to the outside world, the data the front end needs is also different: for example, an enumeration field may need to be split into value and name fields, and some fields may need to be masked.

To avoid exposing the aggregate root's internal structure, the aggregate root should expose only getters for reading field values, while providing a builder (Builder pattern) for the repository to convert a PO back into the aggregate root (via an assembler). Entities can be handled the same way; value objects provide only an all-arguments constructor and getters. Of course, there is nothing wrong with either approach.

For these routine conversions, we give each aggregate root an assembler (converter) that converts DO (aggregate root) to DTO, and another that converts between PO and DO.
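A minimal sketch of such an assembler, showing the enum-to-value/name split mentioned above; all class names and the enum values are illustrative:

```java
// Hypothetical order status enum: stored as a code, displayed as a label
enum OrderStatus {
    PAID(1, "Paid"),
    SHIPPED(2, "Shipped");
    final int code;
    final String label;
    OrderStatus(int code, String label) {
        this.code = code;
        this.label = label;
    }
}

// Aggregate root exposes only getters, never its internals
class Order {
    private final OrderStatus status;
    Order(OrderStatus status) { this.status = status; }
    OrderStatus getStatus() { return status; }
}

// Read model: the enum is split into value + name fields for the front end
class OrderDto {
    final int statusValue;
    final String statusName;
    OrderDto(int statusValue, String statusName) {
        this.statusValue = statusValue;
        this.statusName = statusName;
    }
}

// Assembler: the DO-to-DTO conversion described above
class OrderAssembler {
    OrderDto toDto(Order order) {
        return new OrderDto(order.getStatus().code, order.getStatus().label);
    }
}
```

Every field copied by hand here is a place a bug can hide, which is exactly why these classes are so tedious.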

Based on this experience, I also looked for ways to eliminate this tedium and improve efficiency. We tried the MapStruct framework, but MapStruct only suits simple aggregate roots; mapping aggregate roots with complex internal structure still requires writing many annotations, so the workload is not reduced while troubleshooting gets harder. Spring's property-copying utility classes are no different and do not solve the problem.

Optimize the persistence performance of the aggregate root

In a scenario where the aggregate root is persisted in a relational database, under the constraint that it can only be persisted through the Repository's save method, save carries a significant performance cost, because updating an aggregate root requires updating the entities under it. To reduce the impact, we can compare against an in-memory snapshot taken before the update and update only the entities that actually changed. I wrote a separate article on how to achieve this: "DDD Repository Performance Optimization".
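A simplified sketch of the snapshot comparison idea (not the implementation from the referenced article): the repository records each entity's state at load time and later updates only the entities that differ. `ItemPo` and `SnapshotTracker` are hypothetical names:

```java
import java.util.*;

// Persistence object for one entity row under the aggregate root
class ItemPo {
    final long id;
    String name;
    ItemPo(long id, String name) { this.id = id; this.name = name; }
    ItemPo copy() { return new ItemPo(id, name); }
    boolean sameAs(ItemPo other) { return name.equals(other.name); }
}

// Keeps a snapshot of each item's state at load time
class SnapshotTracker {
    private final Map<Long, ItemPo> snapshot = new HashMap<>();

    void attach(List<ItemPo> items) {
        for (ItemPo item : items) snapshot.put(item.id, item.copy());
    }

    // Returns only the items whose state differs from the loaded snapshot,
    // so save() can issue UPDATEs for these rows alone
    List<ItemPo> changed(List<ItemPo> current) {
        List<ItemPo> dirty = new ArrayList<>();
        for (ItemPo item : current) {
            ItemPo before = snapshot.get(item.id);
            if (before == null || !before.sameAs(item)) dirty.add(item);
        }
        return dirty;
    }
}
```

A real implementation would also have to detect removed entities and new ones without IDs, but the diff-against-snapshot core is the same.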

Now that distributed databases are mature, it is no longer recommended to introduce ORM frameworks and distributed transaction frameworks into new projects, let alone hand-rolled sub-database and sub-table splitting; this should be left to the underlying database or to an added proxy layer.

Conclusion

Because DDD lacks authoritative practice guidance and code constraints, we can only accumulate experience through practice, and my personal understanding is not entirely correct. As for dividing bounded contexts, we divide them only by experience, our experience is limited, and the new business is itself still being explored. Today's bounded-context division and modeling may well be torn down and redone in the future.

References:

  • Domain driven practical thinking (3) : DDD piecewise collaborative design
  • The practice of Domain-driven Design (DDD) in Meituan Dianping business System
  • Practice of domain-driven design in iQiyi reward business
  • Domain-driven Design (Thoughtworks Insights)