Author | YanHao

New Retail Product | Alibaba Tao Technology

Foreword: A long time has passed since the previous article in this series, "Alibaba Technical Experts Explain DDD, Part 2: Application Architecture." Partly I was busy with work; partly, concepts such as Entity, Aggregate Root, and Bounded Context really ought to be discussed before Repository. In the actual writing, however, I found that discussing Entity purely in the abstract is hard to ground in practice. So this article reverses the order and starts from Repository: first pin down the objects that can be implemented and standardized, then attempt to land Entity. This is also a path we can follow in everyday refactoring toward DDD.

A heads-up: the next article will cover the Anti-Corruption Layer, and you will find its logic very close to the idea of the Repository pattern. Entity should feel less abstract once everything around it has been covered. The macro concepts of DDD are not hard to understand, but DDD, like REST, is only a design idea and lacks a complete set of specifications, which makes it hard for newcomers to get started. My earlier architecture article looked mainly at top-level design; from this article on, I hope to fill in some DDD code-level conventions to help you apply DDD ideas in daily work. I also hope that, with a shared set of conventions, people from different businesses can quickly read and understand each other's code. Rules are dead, though, and people are alive: you should adapt these conventions to the realities of your own business. No DDD convention can cover every scenario, but I hope the explanations convey some of the thinking and trade-offs behind DDD.

Why Repository

Rich model vs. anemic model

Perhaps the first use of the term Entity in computing came from Peter Chen's "The Entity-Relationship Model: Toward a Unified View of Data" (1976). The ER model describes the relationships between entities, and it gradually evolved into a data model representing how data is stored in relational databases.

However, the 2006 JPA standard, through annotations such as @Entity and ORM frameworks such as Hibernate, left many Java developers' understanding of Entity stuck at the level of data mapping, ignoring the behavior of entities. As a result, many models today contain only the data and attributes of an Entity, while all the business logic is scattered across services, Controllers, and Utils utility classes. This is what Martin Fowler calls the Anemic Domain Model.

How do you know if your model is anemic? Take a look at your code to see if it has any of the following characteristics:

1. A large number of XxxDO objects: DO here sometimes stands for Domain Object, but these are really just mappings of database table structures, with no (or very little) business logic;

2. A lot of business logic in Service and Controller classes, such as validation logic, calculation logic, format conversion logic, object-relationship logic, and data-storage logic, plus lots of Utils utility classes. (A sketch of such code follows this list.)
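For illustration, here is a minimal sketch of what such anemic code typically looks like; all class and field names are hypothetical:

// A typical anemic DO: a pure field mapping of a database table, no behavior
public class OrderDO {
    private Long id;
    private Long buyerId;
    private Integer status;   // what values are legal? the object cannot say
    private Long totalCents;

    public Integer getStatus() { return status; }
    public void setStatus(Integer status) { this.status = status; }
    // ...remaining getters and setters omitted
}

// Business rules leak into the service layer
public class OrderService {
    public void pay(OrderDO order) {
        if (order.getStatus() != 1) {   // validation logic lives here...
            throw new IllegalStateException("order not payable");
        }
        order.setStatus(2);             // ...and the state transition here
    }
}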

The flaws of the anemic model are obvious:

1. The integrity and consistency of the model cannot be protected: because all attributes of the object are public, only the caller can maintain the model's consistency, and nothing guarantees that they will. We have seen cases where callers failed to keep model data consistent, producing dirty-data bugs that are particularly insidious and hard to detect.

2. The discoverability of object operations is very poor: from the attributes of an object alone, it is hard to see what business logic exists, when it may be called, and what the boundaries of a value are. For example, can a Long-typed value be 0 or negative?

3. Code duplication: validation logic and calculation logic, for example, easily end up in multiple services and code blocks, increasing maintenance cost and the probability of bugs. A typical bug: when the anemic model changes, some copies of the validation logic are missed precisely because the logic appears in several places.

4. Poor code robustness: a change to a data model may force changes to all code from top to bottom.

5. Strong dependence on low-level implementations: business code depends directly on databases, network/middleware protocols, third-party services, and so on, which makes the core logic rigid and expensive to maintain.

Although the anemic model has these flaws, 99% of the code I see day to day is based on it. Why? I would summarize it as follows:

1. Database thinking: since databases took center stage, developers have gradually shifted from "writing business logic" to "writing database logic", often referred to as writing CRUD code.

2. The "simplicity" of the anemic model: the advantage of the anemic model is that it is "simple" — just a field mapping of database tables, so the same shape can be passed through uniformly from front to back. I put "simple" in quotes because it only looks simple: once the model changes later, you will find that it is anything but, and every change becomes very complicated.

3. Script thinking: a lot of common code is "script" or "glue code", i.e. procedural flow code. Script code is easy to understand at first, but less robust and increasingly expensive to maintain over time.

But perhaps the core reason is that we actually confuse two concepts in daily development:

1. Data Model: refers to how business data is persisted and the relationships between the data — the traditional ER model;

2. Business Model/Domain Model: refers to how related data is linked and operated on within the business logic.

Therefore, the fundamental solution is to strictly distinguish the Data Model from the Domain Model in the code; the concrete conventions are described in detail later. In the actual code structure, Data Model and Domain Model live in different layers: the Data Model exists only in the Data layer, the Domain Model only in the Domain layer, and the key object linking the two layers is the Repository.

The value of the Repository

In traditional database-driven development, we encapsulate database operations in what is commonly called a Data Access Object (DAO). The core value of the DAO is that it encapsulates the tedious low-level logic of assembling SQL and maintaining database connections and transactions, letting developers focus on business code. But at heart, DAO operations are still database operations: a DAO method still works directly with the database and the data model, just with less code. In Uncle Bob's Clean Architecture, the author gives a very vivid description:

1. Hardware: something that cannot (or can hardly) be changed after it is created. For development, the database is "hardware": once chosen, it basically will not change. For example, moving from MySQL to MongoDB is too costly to contemplate.

2. Software: something that can be modified at any time after it is created. For development, business code should aspire to be "software", because business processes and rules change constantly, and our code should be able to change with them.

3. Firmware: software that depends heavily on hardware. Router firmware and Android firmware are familiar examples. Firmware abstracts the hardware, but it can only serve one kind of hardware; there is no universal Android firmware today — every phone model needs its own.

From this description, databases are essentially "hardware", DAOs are essentially "firmware", and our own code should strive to be "software". But firmware has a very nasty property: it is contagious. When software depends strongly on firmware, the firmware's constraints make the software hard to change as well, until the software becomes as rigid as the firmware. Here is an example of how easily software gets "solidified":

private OrderDAO orderDAO;

public Long addOrder(RequestDTO request) {
    // Omitted: convert the request into a DO
    OrderDO orderDO = new OrderDO();
    orderDAO.insertOrder(orderDO);
    return orderDO.getId();
}

public void updateOrder(OrderDO orderDO, RequestDTO updateRequest) {
    orderDO.setXXX(XXX); // Omitted: many setters
    orderDAO.updateOrder(orderDO);
}

public void doSomeBusiness(Long id) {
    OrderDO orderDO = orderDAO.getOrderById(id);
    // Omitted: a lot of business logic here
}

In this simple code, the business object depends on the DAO, which in turn depends on the DB. It may seem fine at first glance, but suppose caching logic has to be added later; the code then needs to become:

private OrderDAO orderDAO;
private Cache cache;

public Long addOrder(RequestDTO request) {
    // Omitted: convert the request into a DO
    OrderDO orderDO = new OrderDO();
    orderDAO.insertOrder(orderDO);
    cache.put(orderDO.getId(), orderDO);
    return orderDO.getId();
}

public void updateOrder(OrderDO orderDO, RequestDTO updateRequest) {
    orderDO.setXXX(XXX); // Omitted: many setters
    orderDAO.updateOrder(orderDO);
    cache.put(orderDO.getId(), orderDO);
}

public void doSomeBusiness(Long id) {
    OrderDO orderDO = cache.get(id);
    if (orderDO == null) {
        orderDO = orderDAO.getOrderById(id);
    }
    // Omitted: a lot of business logic here
}

At this point you will notice that everywhere data is read, one line of code has to become at least three, because the write path has changed. As the codebase grows, if somewhere you forget to check the cache, or forget to update it, you may hit the database unnecessarily or, worse, end up with cache and database inconsistent, causing bugs. The more code you have, with more direct DAO calls and more cached call sites, the harder and more bug-prone every low-level change becomes. This is what software being "solidified" looks like. So we need a pattern that separates our software (business logic) from the firmware/hardware (DAO, DB) to make the software more robust — and that is the core value of Repository.

Model object code specification

Object type

Entity, Data Object (DO), and Data Transfer Object (DTO) are the three different models in the Repository specification:

1. Data Object (DO): actually the most common data model in our daily work. In the DDD specification, however, a DO should serve only as a mapping of a physical database table and should not participate in business logic. For simplicity, a DO's field types and names should correspond to those of the physical table, so that we never need to open the database to look up a field's type or name. (They need not be strictly identical, as long as the fields are mapped at the Mapper level.)

2. Entity: the entity object is the business model we should use in normal business logic; its fields and methods should match the business language and be independent of persistence. That is, Entity and DO may have completely different field names and types, and may even nest differently. An Entity's life cycle should exist only in memory; it does not need to be serializable or persistable.

3. DTO (Data Transfer Object): mainly used as the input and output parameters of the Application layer. For example, Command, Query, Event, Request, and Response in CQRS all fall under DTOs. The value of DTOs is adapting inputs and outputs to different business scenarios, so that the business object does not swell into a universal, all-purpose object. (A combined sketch of all three follows this list.)
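To make the distinction concrete, here is a minimal sketch of the same order concept as a DO, an Entity, and a DTO; all names and fields are hypothetical:

// DO: mirrors the physical table, nothing more
public class OrderDO {
    private Long id;
    private Long buyerId;
    private Long amountCents;  // stored as cents in the table
    private Integer status;    // stored as an int code
    // getters and setters omitted
}

// Entity: speaks the business language, independent of persistence
public class Order {
    private OrderId id;
    private UserId buyerId;
    private Money amount;         // a value object, not a raw Long
    private OrderState status;    // an enum, not a magic int

    public void pay() {           // behavior lives on the entity
        if (status != OrderState.ENABLED) {
            throw new IllegalStateException("order cannot be paid");
        }
        status = OrderState.PAID;
    }
}

// DTO: shaped for one caller's scenario
public class OrderDTO {
    private String orderId;
    private String displayAmount; // already formatted for display
    private String statusText;
    // getters and setters omitted
}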

Relationships between model objects

DO, Entity, and DTO do not necessarily have a 1:1:1 relationship in real development. Some common non-1:1 relationships are as follows:

1. One complex Entity split across multiple database tables: usually because there are too many fields, which hurts query performance, so non-searchable and large fields are moved into a separate table to keep the base-information table fast to query. A common case is the product model, where large fields such as the detailed description are stored separately to improve query performance.

2. Multiple associated Entities merged into one database table: this usually happens when the Aggregate Root-Entity relationship is complex and the data must be sharded across databases and tables. To avoid the inconsistencies of multiple queries under sharding, the simplicity of one-table-per-entity is sacrificed for query and insert performance. A common example is the master/sub-order model.

3. Parts of a complex Entity extracted into multiple DTOs: common when the Entity is complex but a caller needs only part of the core information, so a small DTO reduces transmission cost. Using the product model again, a basic DTO may appear in the product list, which does not need the complex details.

4. Multiple Entities merged into one DTO: to reduce network cost and the number of server round trips, multiple Entities and DP objects can be serialized into one DTO, and DTOs are allowed to nest other DTOs. A common case is showing product information on the order detail page.

Which layer each model lives in, and the converters between them

Since one object now becomes three or more, the objects need to be converted into one another through Converters/Mappers. The three objects also sit in different places in the code, summarized as follows:

DTO Assembler: in the Application layer, the converter from Entity to DTO has a standard name, DTO Assembler; see Martin Fowler's description of DTO and Assembler in P of EAA: Data Transfer Object. The core role of a DTO Assembler is to convert one or more associated Entities into one or more DTOs.

Data Converter: in the Infrastructure layer, the converter from Entity to DO has no standard name, but to distinguish it from Data Mapper we call it a Data Converter. Note that Data Mapper usually refers to the DAO, such as a MyBatis Mapper; that term also comes from P of EAA: Data Mapper.

When writing an Assembler by hand, we typically implement two kinds of methods, as follows; the logic of a Data Converter is similar, so I will skip it here.

public class DtoAssembler {
    // Assemble a DTO from multiple entities
    public OrderDTO toDTO(Order order, Item item) {
        OrderDTO dto = new OrderDTO();
        dto.setId(order.getId());
        dto.setItemTitle(item.getTitle()); // Values come from multiple objects, and the field names differ
        dto.setDetailAddress(order.getAddress().getDetail()); // Can read deeply nested fields
        // Omit N lines
        return dto;
    }

    public Item toEntity(ItemDTO itemDTO) {
        Item entity = new Item();
        entity.setId(itemDTO.getId());
        // Omit N lines
        return entity;
    }
}

We can see that by abstracting an Assembler/Converter object, we converge the complex conversion logic into a single object that can be well unit-tested, and the conversion logic scattered through common code converges nicely too. It is then very convenient for callers to use (exception handling omitted):

public class Application {
    private DtoAssembler assembler;
    private OrderRepository orderRepository;
    private ItemRepository itemRepository;

    public OrderDTO getOrderDetail(Long orderId) {
        Order order = orderRepository.find(orderId);
        Item item = itemRepository.find(order.getItemId());
        return assembler.toDTO(order, item); // All the complex conversion logic converges into this one line
    }
}

While Assembler/Converter objects are great to use, writing them by hand is time-consuming and bug-prone when the business is complex, so various Bean mapping solutions exist; they are essentially either dynamic or static.

Dynamic mapping solutions include the relatively primitive BeanUtils.copyProperties, the XML-configurable Dozer, and so on; their core is reflection-based assignment at run time. The drawbacks of dynamic solutions are the large number of reflective calls, poor performance, and high memory footprint, which make them unsuitable for high-concurrency scenarios.

So for those of you using Java, I recommend a library called MapStruct. MapStruct generates the mapping code statically at compile time through annotations; the compiled code performs exactly like handwritten code, and it has powerful annotation capabilities. If your IDE supports it, you can even inspect the generated mapping code after compilation. I won't go into MapStruct's details here; see the official site for more.

Using MapStruct saves a lot of code, which then looks like this:

@org.mapstruct.Mapper
public interface DtoAssembler { // Note this is now an interface; MapStruct generates the implementation class
    DtoAssembler INSTANCE = Mappers.getMapper(DtoAssembler.class);

    @Mapping(target = "itemTitle", source = "item.title")
    @Mapping(target = "detailAddress", source = "order.address.detail")
    OrderDTO toDTO(Order order, Item item);

    Item toEntity(ItemDTO itemDTO); // When the field names match, no @Mapping annotation is needed
}

With MapStruct you only need to annotate the fields that do not match, and Convention over Configuration takes care of the rest for you. There are many advanced usages I won't cover here.

Summary of model specifications

From a usage-complexity perspective, distinguishing DO, Entity, and DTO causes the number of classes to explode (from 1 to 3+2+N). But in genuinely complex business scenarios, the value of separating models by function is that each model is single-purpose and testable, behavior becomes predictable, and the overall logical complexity ultimately goes down.

Repository code specification

The interface specification

As mentioned above, a traditional Data Mapper (DAO) is "firmware", strongly bound to its underlying implementation (DB, cache, file system, etc.); used directly, it "solidifies" the code. So, for a Repository design to retain its "software" nature, three points matter:

1. Interface names should not use the vocabulary of the underlying implementation: insert, select, update, and delete are SQL vocabulary, and using these words binds you to a DB implementation. Instead, think of a Repository as a neutral, collection-like interface, with vocabulary such as find, save, and remove. Note that distinguishing insert/add from update is itself a strong binding to the underlying logic; some stores, such as caches, make no distinction between insert and update at all. In such cases, use a neutral save interface and decide inside the implementation whether to call the DAO's insert or update.

2. Inputs and outputs should not use the underlying data format: remember that a Repository operates on Entity objects (strictly, on Aggregate Roots) and should never operate directly on the underlying DO. Moreover, the Repository interface itself lives in the Domain layer, where DO does not exist at all. This strongly guards against the underlying implementation logic leaking into business code.

3. Avoid the "generic" Repository pattern: many ORM frameworks offer a generic Repository interface that the framework implements automatically through annotations; typical examples are Spring Data and Entity Framework. The advantage of such frameworks is easy configuration in simple scenarios; the downside is that there is essentially no room for extension (such as adding custom cache logic), which may force an override and rewrite later. Of course, avoiding genericity does not forbid base interfaces and common helper classes, as shown below. Let us define a base Repository interface and some marker interfaces:

// The base Repository interface
public interface Repository<T extends Aggregate<ID>, ID extends Identifier> {

    /** Attach an Aggregate so it becomes traceable. */
    void attach(@NotNull T aggregate);

    /** Detach an Aggregate so it is no longer tracked. */
    void detach(@NotNull T aggregate);

    /** Find an Aggregate by ID. */
    T find(@NotNull ID id);

    /** Remove an Aggregate. */
    void remove(@NotNull T aggregate);

    /** Save an Aggregate. */
    void save(@NotNull T aggregate);
}

// Marker interface of the Aggregate Root
public interface Aggregate<ID extends Identifier> extends Entity<ID> {
}

// Marker interface of the Entity class
public interface Entity<ID extends Identifier> extends Identifiable<ID> {
}

public interface Identifiable<ID extends Identifier> {
    ID getId();
}

public interface Identifier extends Serializable {
}

A concrete business interface then only needs to extend the base interface; for orders, for example:

public interface OrderRepository extends Repository<Order, OrderId> {
    // OrderQuery here is a custom DTO
    Long count(OrderQuery query);

    // A custom paged query
    Page<Order> query(OrderQuery query);

    // A custom single query
    Order findInStore(OrderId id, StoreId storeId);
}

Each business needs to define various query logic based on its own business scenario.

To repeat: the Repository interface lives in the Domain layer, but its implementation classes live in the Infrastructure layer.
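For orientation, one possible package layout under this convention; the directory names are illustrative assumptions, not a prescribed structure:

domain/
    order/
        Order.java               // Aggregate Root (Entity)
        OrderId.java             // Identifier
        OrderRepository.java     // Repository interface (no DO in sight)
infrastructure/
    persistence/
        OrderRepositoryImpl.java // Repository implementation
        OrderDAO.java            // Data Mapper (e.g. a MyBatis mapper)
        OrderDO.java             // Data Object mapping the physical table
        OrderDataConverter.java  // Entity <-> DO converter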

Repository base implementation

Let’s start with the simplest implementation of Repository. Note that OrderRepositoryImpl is in the Infrastructure layer:

public class OrderRepositoryImpl implements OrderRepository {
    private final OrderDAO dao;
    private final OrderDataConverter converter;

    public OrderRepositoryImpl(OrderDAO dao) {
        this.dao = dao;
        this.converter = OrderDataConverter.INSTANCE;
    }

    @Override
    public Order find(OrderId orderId) {
        OrderDO orderDO = dao.findById(orderId.getValue());
        return converter.fromData(orderDO);
    }

    @Override
    public void remove(Order aggregate) {
        OrderDO orderDO = converter.toData(aggregate);
        dao.delete(orderDO);
    }

    @Override
    public void save(Order aggregate) {
        if (aggregate.getId() != null && aggregate.getId().getValue() > 0) {
            // update
            OrderDO orderDO = converter.toData(aggregate);
            dao.update(orderDO);
        } else {
            // insert
            OrderDO orderDO = converter.toData(aggregate);
            dao.insert(orderDO);
            aggregate.setId(converter.fromData(orderDO).getId());
        }
    }

    @Override
    public Page<Order> query(OrderQuery query) {
        List<OrderDO> orderDOS = dao.queryPaged(query);
        long count = dao.count(query);
        List<Order> result = orderDOS.stream().map(converter::fromData).collect(Collectors.toList());
        return Page.with(result, query, count);
    }

    @Override
    public Order findInStore(OrderId id, StoreId storeId) {
        OrderDO orderDO = dao.findInStore(id.getValue(), storeId.getValue());
        return converter.fromData(orderDO);
    }
}

From the above implementation you can see the pattern: every Entity/Aggregate is converted to a DO, the appropriate DAO method is called for the business scenario, and where necessary the DO is converted back to an Entity. The code is basically very simple; the only thing to watch is the save method, which must decide whether to update or insert according to whether the Aggregate's ID exists and is greater than 0.
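For completeness, here is a minimal sketch of what the OrderDataConverter used above might look like with MapStruct. This is an assumption for illustration; in practice, value-object fields such as Money or OrderState would need similar helper methods:

@org.mapstruct.Mapper
public interface OrderDataConverter {
    OrderDataConverter INSTANCE = Mappers.getMapper(OrderDataConverter.class);

    OrderDO toData(Order order);      // Entity -> DO

    Order fromData(OrderDO orderDO);  // DO -> Entity

    // Hypothetical helpers so MapStruct can unwrap/wrap the OrderId value object
    default Long map(OrderId id) { return id == null ? null : id.getValue(); }
    default OrderId map(Long id) { return id == null ? null : new OrderId(id); }
}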

Repository complex implementation

The implementation of a Repository for a single Entity is generally simple. It gets troublesome with an Aggregate Root containing multiple Entities, because a single operation does not necessarily change every Entity in the Aggregate, and a naive implementation then produces a large number of useless DB operations.

Take a common example: in a master/sub-order scenario, one master Order contains multiple sub-order LineItems. Suppose an operation that changes the price of one sub-order also changes the master order's price but has no effect on the other sub-orders.

Using a very naive implementation would result in two more useless update operations, as follows:

public class OrderRepositoryImpl implements OrderRepository {
    private OrderDAO orderDAO;
    private LineItemDAO lineItemDAO;
    private OrderDataConverter orderConverter;
    private LineItemDataConverter lineItemConverter;

    @Override
    public void save(Order aggregate) {
        if (aggregate.getId() != null && aggregate.getId().getValue() > 0) {
            // Update the master order, then save every line item
            OrderDO orderDO = orderConverter.toData(aggregate);
            orderDAO.update(orderDO);
            for (LineItem lineItem : aggregate.getLineItems()) {
                save(lineItem);
            }
        } else {
            // Omitted: insert logic
        }
    }

    private void save(LineItem lineItem) {
        if (lineItem.getId() != null && lineItem.getId().getValue() > 0) {
            LineItemDO lineItemDO = lineItemConverter.toData(lineItem);
            lineItemDAO.update(lineItemDO);
        } else {
            LineItemDO lineItemDO = lineItemConverter.toData(lineItem);
            lineItemDAO.insert(lineItemDO);
            lineItem.setId(lineItemConverter.fromData(lineItemDO).getId());
        }
    }
}

In this case, four updates are issued, though only two are actually needed. In most cases this cost is not high and is acceptable, but in extreme cases (when there are many non-Aggregate-Root Entities) it produces a large number of useless writes.

Change Tracking

The core problem above is that the Repository interface specification restricts callers to operating on the Aggregate Root, never on a non-root Entity directly. This is very different from calling DAOs directly.

The way out is the ability to identify which Entities have changed and operate only on those — in other words, change tracking. Logic that used to be written by hand can then be handled automatically, so users truly only care about operations on the Aggregate. In the case above, with change tracking the system can determine that only LineItem2 and the Order changed, so only two updates need to be generated.

There are two major change tracking solutions in the industry:

1. Snapshot-based: after data is read from the DB, a snapshot is kept in memory, and on write the data is compared with the snapshot. A well-known implementation is Hibernate.

2. Proxy-based: after data is read from the DB, an aspect is woven into all setters to detect whether a setter was called and a value actually changed; if so, the object is marked Dirty, and on save the Dirty flag decides whether to update. A well-known implementation is Entity Framework.

The Snapshot approach has the advantage of simplicity; its costs are a full diff on every save (usually via reflection) and the memory consumed by keeping snapshots. The Proxy approach has high performance with almost no added cost, but it is hard to implement, and with nested relationships it struggles to detect changes in nested objects (such as additions and removals in a child list), which can lead to bugs.

Because of the Proxy approach's complexity, the industry mostly uses Snapshot (including EF Core). Another benefit is that a diff reveals exactly which fields changed, so the UPDATE can touch only those fields, further reducing its cost.

Below I briefly post our own Snapshot implementation. The code is not complicated, and each team can easily implement it themselves; some of the code is for reference only:

DbRepositorySupport

// This is a generic support class to reduce developer duplication
public abstract class DbRepositorySupport<T extends Aggregate<ID>, ID extends Identifier> implements Repository<T, ID> {

    @Getter
    private final Class<T> targetClass;

    // The AggregateManager maintains the snapshots
    @Getter(AccessLevel.PROTECTED)
    private AggregateManager<T, ID> aggregateManager;

    protected DbRepositorySupport(Class<T> targetClass) {
        this.targetClass = targetClass;
        this.aggregateManager = AggregateManager.newInstance(targetClass);
    }

    /**
     * These methods are to be implemented by the inheriting subclasses
     */
    protected abstract void onInsert(T aggregate);
    protected abstract T onSelect(ID id);
    protected abstract void onUpdate(T aggregate, EntityDiff diff);
    protected abstract void onDelete(T aggregate);

    @Override
    public void attach(@NotNull T aggregate) {
        this.aggregateManager.attach(aggregate);
    }

    @Override
    public void detach(@NotNull T aggregate) {
        this.aggregateManager.detach(aggregate);
    }

    @Override
    public T find(@NotNull ID id) {
        T aggregate = this.onSelect(id);
        if (aggregate != null) {
            // Attach the aggregate found so it becomes traceable.
            // If you implement a custom query interface, remember to call attach yourself.
            this.attach(aggregate);
        }
        return aggregate;
    }

    @Override
    public void remove(@NotNull T aggregate) {
        this.onDelete(aggregate);
        // Stop tracking after the delete
        this.detach(aggregate);
    }

    @Override
    public void save(@NotNull T aggregate) {
        // If there is no ID, insert directly
        if (aggregate.getId() == null) {
            this.onInsert(aggregate);
            this.attach(aggregate);
            return;
        }

        // Do a diff against the snapshot
        EntityDiff diff = aggregateManager.detectChanges(aggregate);
        if (diff.isEmpty()) {
            return;
        }

        // Call update
        this.onUpdate(aggregate, diff);

        // Refresh the snapshot kept by the AggregateManager
        aggregateManager.merge(aggregate);
    }
}

The user only needs to inherit DbRepositorySupport:

public class OrderRepositoryImpl extends DbRepositorySupport<Order, OrderId> implements OrderRepository {
    private OrderDAO orderDAO;
    private LineItemDAO lineItemDAO;
    private OrderDataConverter orderConverter;
    private LineItemDataConverter lineItemConverter;

    @Override
    protected void onUpdate(Order aggregate, EntityDiff diff) {
        if (diff.isSelfModified()) {
            OrderDO orderDO = orderConverter.toData(aggregate);
            orderDAO.update(orderDO);
        }

        Diff lineItemDiffs = diff.getDiff("lineItems");
        if (lineItemDiffs instanceof ListDiff) {
            ListDiff diffList = (ListDiff) lineItemDiffs;
            for (Diff itemDiff : diffList) {
                if (itemDiff.getType() == DiffType.Removed) {
                    LineItem line = (LineItem) itemDiff.getOldValue();
                    LineItemDO lineDO = lineItemConverter.toData(line);
                    lineItemDAO.delete(lineDO);
                }
                if (itemDiff.getType() == DiffType.Added) {
                    LineItem line = (LineItem) itemDiff.getNewValue();
                    LineItemDO lineDO = lineItemConverter.toData(line);
                    lineItemDAO.insert(lineDO);
                }
                if (itemDiff.getType() == DiffType.Modified) {
                    LineItem line = (LineItem) itemDiff.getNewValue();
                    LineItemDO lineDO = lineItemConverter.toData(line);
                    lineItemDAO.update(lineDO);
                }
            }
        }
    }
}

The AggregateManager implementation mainly uses ThreadLocal to prevent multiple threads from sharing the same Entity:

class ThreadLocalAggregateManager<T extends Aggregate<ID>, ID extends Identifier> implements AggregateManager<T, ID> {

    private ThreadLocal<DbContext<T, ID>> context;
    private Class<? extends T> targetClass;

    public ThreadLocalAggregateManager(Class<? extends T> targetClass) {
        this.targetClass = targetClass;
        this.context = ThreadLocal.withInitial(() -> new DbContext<>(targetClass));
    }

    public void attach(T aggregate) {
        context.get().attach(aggregate);
    }

    @Override
    public void attach(T aggregate, ID id) {
        context.get().setId(aggregate, id);
        context.get().attach(aggregate);
    }

    @Override
    public void detach(T aggregate) {
        context.get().detach(aggregate);
    }

    @Override
    public T find(ID id) {
        return context.get().find(id);
    }

    @Override
    public EntityDiff detectChanges(T aggregate) {
        return context.get().detectChanges(aggregate);
    }

    public void merge(T aggregate) {
        context.get().merge(aggregate);
    }
}

class DbContext<T extends Aggregate<ID>, ID extends Identifier> {

    @Getter
    private Class<? extends T> aggregateClass;

    private Map<ID, T> aggregateMap = new HashMap<>();

    public DbContext(Class<? extends T> aggregateClass) {
        this.aggregateClass = aggregateClass;
    }

    public void attach(T aggregate) {
        if (aggregate.getId() != null) {
            if (!aggregateMap.containsKey(aggregate.getId())) {
                this.merge(aggregate);
            }
        }
    }

    public void detach(T aggregate) {
        if (aggregate.getId() != null) {
            aggregateMap.remove(aggregate.getId());
        }
    }

    public EntityDiff detectChanges(T aggregate) {
        if (aggregate.getId() == null) {
            return EntityDiff.EMPTY;
        }
        T snapshot = aggregateMap.get(aggregate.getId());
        if (snapshot == null) {
            attach(aggregate);
        }
        return DiffUtils.diff(snapshot, aggregate);
    }

    public T find(ID id) {
        return aggregateMap.get(id);
    }

    public void merge(T aggregate) {
        if (aggregate.getId() != null) {
            T snapshot = SnapshotUtils.snapshot(aggregate);
            aggregateMap.put(aggregate.getId(), snapshot);
        }
    }

    public void setId(T aggregate, ID id) {
        ReflectionUtils.writeField("id", aggregate, id);
    }
}

Run a unit test (note that in this case I merged Order and LineItem into a single table):

@Test
public void multiInsert() {
    OrderDAO dao = new MockOrderDAO();
    OrderRepository repo = new OrderRepositoryImpl(dao);

    Order order = new Order();
    order.setUserId(new UserId(11L));
    order.setStatus(OrderState.ENABLED);
    order.addLineItem(new ItemId(13L), new Quantity(5), new Money(4));
    order.addLineItem(new ItemId(14L), new Quantity(2), new Money(3));

    System.out.println("Before the first save");
    System.out.println(order);

    repo.save(order);
    System.out.println("After the first save");
    System.out.println(order);

    order.getLineItems().get(0).setQuantity(new Quantity(3));
    order.pay();
    repo.save(order);

    System.out.println("After the second save");
    System.out.println(order);
}

Unit test output:

Order(id=null, userId=11, lineItems=[LineItem(id=null, itemId=13, quantity=5, price=4), LineItem(id=null, itemId=14, quantity=2, price=3)], status=ENABLED)
INSERT OrderDO: OrderDO(id=null, parentId=null, itemId=0, userId=11, quantity=0, price=0, status=2)
UPDATE OrderDO: OrderDO(id=1001, parentId=1001, itemId=0, userId=11, quantity=0, price=0, status=2)
INSERT OrderDO: OrderDO(id=null, parentId=1001, itemId=13, userId=11, quantity=5, price=4, status=2)
INSERT OrderDO: OrderDO(id=null, parentId=1001, itemId=14, userId=11, quantity=2, price=3, status=2)
Order(id=1001, userId=11, lineItems=[LineItem(id=1002, itemId=13, quantity=5, price=4), LineItem(id=1003, itemId=14, quantity=2, price=3)], status=ENABLED)
UPDATE OrderDO: OrderDO(id=1001, parentId=1001, itemId=0, userId=11, quantity=0, price=0, status=3)
UPDATE OrderDO: OrderDO(id=1002, parentId=1001, itemId=13, userId=11, quantity=3, price=4, status=3)
Order(id=1001, userId=11, lineItems=[LineItem(id=1002, itemId=13, quantity=3, price=4), LineItem(id=1003, itemId=14, quantity=2, price=3)], status=PAID)

Other matters needing attention

Concurrency and optimistic locking

Under high concurrency, with the change-tracking approach, the Snapshot in local memory may diverge from the data in the DB, producing concurrent-update conflicts, so updates should carry an optimistic lock. (Of course, best practice says ordinary database writes should use optimistic locking anyway.) One thing to remember in this setup: after an optimistic-lock conflict, refresh the local Snapshot values.
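A minimal sketch of what this could look like in the onUpdate hook; updateWithVersion and OptimisticLockException are hypothetical names, assuming the table carries a version column:

@Override
protected void onUpdate(Order aggregate, EntityDiff diff) {
    OrderDO orderDO = orderConverter.toData(aggregate);
    // Hypothetical DAO method issuing:
    //   UPDATE orders SET ..., version = version + 1
    //   WHERE id = #{id} AND version = #{version}
    int rows = orderDAO.updateWithVersion(orderDO);
    if (rows == 0) {
        // Lost the race: re-read the aggregate so the local snapshot is refreshed,
        // otherwise the next diff would be computed against stale data
        this.detach(aggregate);
        this.find(aggregate.getId());
        throw new OptimisticLockException("concurrent modification of " + aggregate.getId());
    }
}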

A possible BUG

This is not actually a bug, but note a side effect of Snapshot: if the Entity has not been modified and save is called, the DB is not actually updated. This matches Hibernate's behavior and is inherent to the Snapshot approach. If you need to force a write to the DB, manually touch a field such as gmtModified and then call save.
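For example (assuming the Entity carries a gmtModified audit field):

// Force a write even though no business field changed:
order.setGmtModified(new Date());   // makes the diff non-empty
orderRepository.save(order);        // now an UPDATE is actually issued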

Repository Migration Path

Using the Repository pattern in everyday code is a simple yet highly rewarding thing to do. The biggest benefit is complete decoupling from the bottom layer, so the business logic on top can evolve rapidly on its own.

Assume the existing traditional code contains the following classes (using the order example again):

1. OrderDO
2. OrderDAO

The Repository pattern can be implemented incrementally through the following steps:

1. Create the Order entity class, whose fields can initially mirror OrderDO
2. Create OrderDataConverter (about 2 lines of code with MapStruct)
3. Write unit tests to ensure the conversion between Order and OrderDO is 100% correct
4. Create the OrderRepository interface and its implementation, and ensure its correctness with unit tests
5. Change the places in the original code that use OrderDO to use Order
6. Change the places in the original code that use OrderDAO to use OrderRepository
7. Verify via unit tests that the business logic stays consistent

(A sketch of steps 5 and 6 follows.)
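As a hypothetical before/after for steps 5 and 6 (method names are assumptions based on the earlier examples):

// Before: business code speaks the data model
OrderDO orderDO = orderDAO.getOrderById(id);
orderDO.setStatus(3);                // magic number, no invariants enforced
orderDAO.updateOrder(orderDO);

// After: business code speaks the domain language
Order order = orderRepository.find(new OrderId(id));
order.pay();                         // invariants enforced inside the entity
orderRepository.save(order);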

Congratulations! From now on, the Order entity class and its business logic can be changed at will; the only thing you need to touch on each change is the Converter, and you are completely decoupled from the underlying implementation.

Afterword

Thank you for having the patience to read this far — you must truly love DDD. A question for you: do you make extensive use of the DDD architecture in your daily work to advance your business? Do you have an environment where you can apply what you've learned in the real world?