Public account: MarkerHub (follow it for more project resources)

Eblog codebase: github.com/markerhub/e…

Eblog project video: www.bilibili.com/video/BV1ri…


Development Document Catalog:

(eblog) 1. Set up the project architecture and initialize the home page

(eblog) 2. Integrate Redis, elegant exception handling, and result encapsulation

(eblog) 3. Use a Redis zset (sorted set) to implement the "hot this week" feature

(eblog) 4. Customize a FreeMarker tag to fill the blog home page with data

(eblog) 5. Blog category filling, login and registration logic

(eblog) 6. Build blog publishing, favorites, and the user center

(eblog) 7. Asynchronous message notification and detail adjustments

(eblog) 8. Blog search engine development and back-end selection

(eblog) 9. Instant group chat development, chat history, etc.


For the front-end/back-end separated project vueblog, click here: Super detailed! Build a SpringBoot + Vue separated blog project in 4 hours!!


1. Detail adjustments

This time we'll fix bugs and polish some details; the remaining blog features are few and the business logic is simple. Search, group chat, and the like are the big modules still ahead of us.

Article favorites

The favorites js has actually already been written, but certain conditions haven't triggered it. What are those conditions? Let's find the favorites js first:

  • static/res/mods/jie.js

You can see that loading the favorites code is triggered under two conditions:

  • Whether there is an element with the ID LAY_jieAdmin

  • Whether layui.cache.user.uid is -1

LAY_jieAdmin ensures that only the article detail page loads this js; other pages do not. Where is layui.cache.user.uid set? Remember when we first modularized the HTML, we had a js block in the layout.ftl macro, and the initial uid value was -1, so we need to fill in the real value after login.

  • templates/inc/layout.ftl
<script>
    layui.cache.page = 'jie';
    layui.cache.user = {
        username: '${profile.username!"Guest"}'
        ,uid: ${profile.id!'-1'}
        ,avatar: '${profile.avatar!"/res/images/avatar/00.jpg"}'
        ,experience: 0
        ,sex: '${profile.sex!"Unknown"}'
    };
    layui.config({
        version: "3.0.0"
        ,base: '/res/mods/'
    }).extend({
        fly: 'index'
    }).use('fly').use('jie').use('user');
</script>


What does ${profile.id!'-1'} mean? The value after the "!" is the default used when the expression is null. OK, refresh after the modification and you'll find a pop-up prompt: "request is abnormal, please try again". We'll leave that aside for the moment and finish the favorites feature first; let's check whether the favorites controller is the cause.

As you can see from the image above, I've changed the link that checks the favorite status:

  • /collection/find/

The function's code is actually very simple: query the UserCollection table for a matching record. If one exists, the article is favorited and the js renders an "unfavorite" button; if not, it renders a "favorite" button.

  • com.example.controller.PostController
@ResponseBody
@PostMapping("/collection/find/")
public Result collectionFind(Long cid) {
    int count = userCollectionService.count(new QueryWrapper<UserCollection>()
            .eq("post_id", cid)
            .eq("user_id", getProfileId()));
    return Result.succ(MapUtil.of("collection", count > 0));
}


Based on the js, I simply return a "collection" parameter set to true. It renders as follows:

Then, clicking the button hits two links (I changed the prefix):

  • /collection/add/

  • /collection/remove/

They represent favoriting and unfavoriting, so we'll write these two controllers; note that they are ajax requests. The favoriting logic is also fairly simple: first check whether the article is already favorited; if so, return an "already favorited" hint, otherwise insert a record.

  • com.example.controller.PostController
@ResponseBody
@PostMapping("/collection/add/")
public Result collectionAdd(Long cid) {
    Post post = postService.getById(cid);
    Assert.isTrue(post != null, "The post has since been removed.");
    int count = userCollectionService.count(new QueryWrapper<UserCollection>()
            .eq("post_id", cid)
            .eq("user_id", getProfileId()));
    if (count > 0) {
        return Result.fail("You have already favorited this post.");
    }
    UserCollection collection = new UserCollection();
    collection.setUserId(getProfileId());
    collection.setCreated(new Date());
    collection.setModified(new Date());
    collection.setPostId(post.getId());
    collection.setPostUserId(post.getUserId());
    userCollectionService.save(collection);
    return Result.succ(MapUtil.of("collection", true));
}

  • com.example.controller.PostController

Unfavoriting logic: delete the record.

@ResponseBody
@PostMapping("/collection/remove/")
public Result collectionRemove(Long cid) {
    Post post = postService.getById(cid);
    Assert.isTrue(post != null, "The post has since been removed.");
    boolean hasRemove = userCollectionService.remove(new QueryWrapper<UserCollection>()
            .eq("post_id", cid)
            .eq("user_id", getProfileId()));
    return Result.succ(hasRemove);
}


OK, the 3 favorites-related methods are done. Clicking favorite and unfavorite on the article detail page now executes normally. No bugs!

Unread messages

When you refresh the page, you still get the pop-up prompt. What's the problem? Open F12 in the browser and switch to the Network tab; since we suspect that some asynchronous request is failing and triggering the pop-up, we need to find that request. Under Network, click XHR, which filters for asynchronous requests. There we see that a /message/nums/ request returns 404; the full request is http://localhost:8080/message/nums/.

We then search globally for /message/nums to find the js where the asynchronous request originates:

So we're fairly sure this is what caused the pop-up. Let's write this method. It's the new-message notification: we already had a "my messages" page in the user center, but messages had no status (read/unread), so I need to add a status field to UserMessage to distinguish read from unread. Remember to add the column to the database as well.

  • com.example.entity.UserMessage
/**
 * Status: 0 unread, 1 read
 */
private Integer status;

Then the number of the current user's messages whose status is 0 is the number of new message notifications.

  • com.example.controller.IndexController
@ResponseBody
@PostMapping("/message/nums/")
public Object messageNums() throws IOException {
    int count = userMessageService.count(new QueryWrapper<UserMessage>()
            .eq("to_user_id", getProfileId())
            .eq("status", 0));
    return MapUtil.builder().put("status", 0).put("count", count).build();
}


The return value is shaped by what the js expects: whatever result the js needs, that's what I return. After re-running the code, the pop-up is gone and the page looks like this:

Message alerts

Now that new-message notification works, let's make it a bit more sophisticated. When we browse sites such as Weibo, Jianshu, or Toutiao, receiving a message notification generally doesn't require refreshing the page: a new-message icon simply appears to remind us. How do we do this with what we've learned so far? There are several schemes to implement this feature:

  • Ajax polling (timed refresh)

  • Websocket duplex communication

  • Long connections

We had a lesson on WebSocket before, and we will use that technology to implement this feature.

You can review the WebSocket material here:

  • Gitee.com/lv-success/…

Above is a demo of Spring Boot integrating WS; next, following that example, we'll integrate WS into our existing project.

Step 1: Import the JAR package

  • pom.xml
<!-- ws -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-websocket</artifactId>
</dependency>

Step 2: Write the WS configuration

  • com.example.config.WebSocketConfig
// @EnableWebSocketMessageBroker enables broker-backed messaging over the STOMP protocol
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

    /**
     * Register STOMP endpoints.
     * addEndpoint: adds a STOMP endpoint that WebSocket or SockJS clients connect to
     * withSockJS: enables the SockJS fallback
     */
    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry.addEndpoint("/websocket").withSockJS();
    }

    /**
     * Configure the message broker.
     */
    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {
        // destination prefixes handled by the simple broker
        registry.enableSimpleBroker("/user/", "/topic");
        // prefix for messages the front end pushes to the application
        registry.setApplicationDestinationPrefixes("/app");
    }
}

Let's parse what this means. First, @EnableWebSocketMessageBroker: remember Spring Boot manual configuration? This annotation enables broker-backed WS messaging. We then implement WebSocketMessageBrokerConfigurer and override the registerStompEndpoints and configureMessageBroker methods. registerStompEndpoints registers endpoints: addEndpoint("/websocket") registers an endpoint named /websocket, and the front end connects to the server through this link for duplex communication. withSockJS() means the SockJS protocol is used. To recap:

  • SockJS is a fallback solution for browsers that do not support WS

  • STOMP is a simple text-oriented protocol that gives the transferred messages a format

configureMessageBroker configures the message broker: it enables a simple broker for the /user and /topic destination prefixes, and messages the front end sends with the /app prefix are routed through the application to the message broker.

With these two steps in place, we are ready to use WS. Let’s write the front end first:

Since our message notification sits next to the user name in the header, which every page has, we put the js in layout.ftl.

$(function () {
    var elemUser = $('.fly-nav-user');
    if (layui.cache.user.uid !== -1 && elemUser[0]) {
        var socket = new SockJS("/websocket");
        stompClient = Stomp.over(socket);
        stompClient.connect({}, function (frame) {
            // subscribe to this user's message-count queue
            stompClient.subscribe('/user/' + ${profile.id} + '/messCount', function (res) {
                showTips(res.body);
            });
        });
    }
});

var socket = new SockJS("/websocket") opens a SockJS connection to the endpoint we registered; stompClient = Stomp.over(socket) means the STOMP text protocol is used on top of it to transmit content. stompClient.connect establishes the connection, and its callback runs stompClient.subscribe, which subscribes to a message queue: when the back end sends a message to /user/{userId}/messCount, the current user receives it. res.body is the returned content, which is passed to the showTips method that renders the new-message notification. Let's copy the corresponding js from the earlier new-message notification:

function showTips(count) {
    var msg = $('<a class="fly-nav-msg">' + count + '</a>');
    var elemUser = $('.fly-nav-user');
    elemUser.append(msg);
    msg.on('click', function () {
        location.href = '/center/message/';
    });
    layer.tips('You have ' + count + ' unread messages', msg, {
        tips: 3
        ,tipsMore: true
        ,fixed: true
    });
    msg.on('mouseenter', function () {
        layer.closeAll('tips');
    });
}

OK, now we can connect to WS for duplex communication and listen on the queue /user/{userId}/messCount, so whenever the back end sends a message there, the front end receives it and runs showTips. When should the back end send messages?

  • Someone comments on the author's article or replies to the author's comment

  • System messages, etc.

OK, let's start by writing a WsService that sends the message count to the front end.

  • com.example.service.WsService
void sendMessCountToUser(Long userId, Integer count);

Is the implementation class complex? Not really. Look at the parameters: userId limits whom the message is sent to, and count is the message count. If count is not null we send that number directly; when count is null, we look up the number of all unread messages for that userId and send it.

  • com.example.service.impl.WsServiceImpl
@Slf4j
@Service
public class WsServiceImpl implements WsService {

    @Autowired
    private SimpMessagingTemplate messagingTemplate;

    @Autowired
    UserMessageService userMessageService;

    /**
     * @param userId the user to notify
     * @param count  the unread-message count; when null, it is looked up
     */
    @Async
    @Override
    public void sendMessCountToUser(Long userId, Integer count) {
        if (count == null) {
            count = userMessageService.count(new QueryWrapper<UserMessage>()
                    .eq("status", 0)
                    .eq("to_user_id", userId));
        }
        this.messagingTemplate.convertAndSendToUser(userId.toString(), "/messCount", count);
        log.info("ws message sent ------------> user: {}, count: {}", userId, count);
    }
}

We use SimpMessagingTemplate to send the WS message. convertAndSendToUser automatically prefixes the destination with /user, then the userId, followed by the /messCount suffix, so the full link is /user/{userId}/messCount; we call this method wherever we need to send a message. The other important thing is that I'm using @Async, which means a new thread executes this method so it doesn't affect the caller's transaction, execution time, and so on.
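To make the destination construction concrete, here is a minimal plain-Java sketch of how the final subscription path is assembled, assuming the default "/user" prefix; the helper name userDestination is made up for illustration:

```java
public class DestinationDemo {
    // Mimics how convertAndSendToUser("12", "/messCount", payload) resolves
    // its destination under the default configuration: user prefix + id + suffix.
    static String userDestination(String userId, String suffix) {
        return "/user/" + userId + suffix;
    }

    public static void main(String[] args) {
        // the front end subscribed to '/user/' + uid + '/messCount'
        System.out.println(userDestination("12", "/messCount")); // /user/12/messCount
    }
}
```

This is exactly the path the front end subscribes to, which is why the two sides meet on /user/{userId}/messCount.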

Now let's talk about @Async.

Async with @Async

Actually, I wanted to do this with a queue, which is also asynchronous. But to expose you to more tools, we'll use the @Async annotation here; we'll use MQ later, so don't worry.

To use this annotation we need to enable asynchronous configuration with the @EnableAsync annotation.

  • com.example.config.AsyncConfig
@EnableAsync
@Configuration
public class AsyncConfig {
    @Bean
    AsyncTaskExecutor asyncTaskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(100);
        executor.setQueueCapacity(25);
        executor.setMaxPoolSize(500);
        return executor;
    }
}

So with @EnableAsync in place we can use the @Async annotation to implement asynchronous execution. asyncTaskExecutor() overrides the default AsyncTaskExecutor, defining the pool sizes and so on. @Async supports a lot more configuration, such as handling errors in asynchronous threads (retries, etc.); you can look it up after class. I actually use this annotation quite often at work. OK, above we've turned the WS-sending method into an asynchronous one, which runs on a separate thread. The place we now need to call it is in the comment flow.
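As a rough plain-Java analogy for what @Async does behind the scenes (the class, method names, and the fixed return value here are made up for illustration): the caller hands work to a thread pool and returns immediately, while the pool thread does the slow part.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class AsyncDemo {
    // a small pool, like the ThreadPoolTaskExecutor bean above
    static final ExecutorService EXECUTOR = Executors.newFixedThreadPool(4);

    // Analogous to the @Async sendMessCountToUser: submit() returns at once,
    // and the lookup/push work runs on a pool thread.
    static Future<Integer> sendMessCountAsync(long userId) {
        return EXECUTOR.submit(() -> {
            Thread.sleep(50); // stand-in for the unread-count query + ws push
            return 3;         // stand-in for the unread count
        });
    }

    public static void main(String[] args) throws Exception {
        Future<Integer> f = sendMessCountAsync(42L); // does not block the caller
        System.out.println("submitted");
        System.out.println("count=" + f.get());      // get() waits for the result
        EXECUTOR.shutdown();
    }
}
```

The difference with @Async is that Spring wires the pool and the submission for you: the annotated method body is what runs inside submit().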

  • com.example.controller.PostController#reply(Long, Long, String)

So when someone comments on an article or replies to a comment, the author receives a message in real time. Let's demonstrate the effect:

At this point, we have implemented the legendary real-time notification feature! Pretty cool, right?

Article view counts

The next task is to improve article view counts. Previously, viewing an article didn't increase its read count; now let's fix this bug. How? Do we hit the database on every view? No: here we use caching. On each view we simply increment the cached read count by one, and then sync it to the database periodically. When the article is viewed, we put the cached read count into the vo. Specifically, find the com.example.controller.PostController#view method we wrote earlier; I added this code:

The trick is to replace the vo's viewCount value with the cached count.

  • com.example.service.impl.PostServiceImpl#setViewCount
@Override
public void setViewCount(Post post) {
    // get the view count from the cache
    Integer viewCount = (Integer) redisUtil.hget("rank_post_" + post.getId(), "post:viewCount");
    if (viewCount != null) {
        post.setViewCount(viewCount + 1);
    } else {
        post.setViewCount(post.getViewCount() + 1);
    }
    // write the new view count back to the cache
    redisUtil.hset("rank_post_" + post.getId(), "post:viewCount", post.getViewCount());
}


As you can see from the code, we first fetch viewCount from the cache, then call post.setViewCount, and finally write the incremented value back to Redis. OK, this step is relatively simple. Next we need a scheduled task that periodically syncs the cached read counts to the database.
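The read-then-increment logic can be sketched in plain Java, with a HashMap standing in for the Redis hash (the class and method names here are made up for illustration):

```java
import java.util.HashMap;
import java.util.Map;

public class ViewCountCache {
    // stands in for the Redis hash entries rank_post_{id} -> post:viewCount
    private final Map<Long, Integer> cache = new HashMap<>();

    /**
     * Mirrors setViewCount: prefer the cached count; on a cache miss,
     * fall back to the count stored on the entity (the database value).
     * Either way, write the incremented count back to the cache.
     */
    public int increment(long postId, int dbViewCount) {
        Integer cached = cache.get(postId);
        int updated = (cached != null ? cached : dbViewCount) + 1;
        cache.put(postId, updated);
        return updated;
    }
}
```

On the first view a database value of 10 becomes 11; subsequent views keep incrementing the cached value without touching the database until the scheduled sync runs.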

  • com.example.schedules.ScheduledTasks
@Slf4j
@Component
public class ScheduledTasks {

    @Autowired
    RedisUtil redisUtil;

    @Autowired
    private RedisTemplate redisTemplate;

    @Autowired
    PostService postService;

    /**
     * Sync cached view counts to the database.
     * In production run it at 2 a.m. daily: @Scheduled(cron = "0 0 2 * * ?")
     */
    @Scheduled(cron = "0 0/1 * * * ?") // every minute (for testing)
    public void postViewCountSync() {
        Set<String> keys = redisTemplate.keys("rank_post_*");
        List<String> ids = new ArrayList<>();
        for (String key : keys) {
            String postId = key.substring("rank_post_".length());
            if (redisUtil.hHasKey("rank_post_" + postId, "post:viewCount")) {
                ids.add(postId);
            }
        }
        if (ids.isEmpty()) return;
        List<Post> posts = postService.list(new QueryWrapper<Post>().in("id", ids));
        List<String> syncKeys = new ArrayList<>();
        for (Post post : posts) {
            Object count = redisUtil.hget("rank_post_" + post.getId(), "post:viewCount");
            if (count != null) {
                post.setViewCount(Integer.valueOf(count.toString()));
                syncKeys.add("rank_post_" + post.getId());
            }
            // else: nothing to sync for this post
        }
        if (posts.isEmpty()) return;
        boolean isSuccess = postService.updateBatchById(posts);
        if (isSuccess) {
            // delete the cached counts to prevent re-syncing
            for (Post post : posts) {
                redisUtil.hdel("rank_post_" + post.getId(), "post:viewCount");
            }
        }
        log.info("view-count sync succeeded ------> {}", syncKeys);
    }
}

Why do we use the keys command to fetch the list of posts needing sync? Actually, as the Redis dataset grows we can no longer use keys, because it scans every key in one go, which is time-consuming, and Redis executes commands on a single thread, so other commands are blocked while it runs. In theory we should use the scan command instead. Since the blog is a simple business and our Redis is small, we use keys directly here; you can optimize it later. Once you have the list of ids, you load the entities and batch-update the read counts.
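To illustrate the scan idea without a Redis server, here is a plain-Java sketch of cursor-based batched iteration. It mimics the shape of Redis SCAN (a cursor of 0 means the iteration is complete), not any real client API; all names are made up:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ScanSketch {
    /**
     * SCAN-style iteration: examine at most 'count' keys starting at 'cursor',
     * collect the ones matching 'prefix' into 'out', and return the next
     * cursor (0 means the iteration is complete, as with Redis SCAN).
     * Unlike KEYS, no single call walks the whole keyspace.
     */
    static int scan(List<String> allKeys, int cursor, int count, String prefix, List<String> out) {
        int i = cursor;
        for (; i < allKeys.size() && i < cursor + count; i++) {
            if (allKeys.get(i).startsWith(prefix)) {
                out.add(allKeys.get(i));
            }
        }
        return i >= allKeys.size() ? 0 : i;
    }

    public static void main(String[] args) {
        List<String> keys = Arrays.asList(
                "rank_post_1", "user_2", "rank_post_3", "rank_post_4", "user_5");
        List<String> matched = new ArrayList<>();
        int cursor = 0;
        do {
            cursor = scan(keys, cursor, 2, "rank_post_", matched); // small batches
        } while (cursor != 0);
        System.out.println(matched); // [rank_post_1, rank_post_3, rank_post_4]
    }
}
```

Each call only touches a small batch, so other work can proceed between calls; that is the property that makes SCAN safe on a large, single-threaded Redis where KEYS is not.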