Editor’s note: ThinkJS, a high-performance, enterprise-class Node.js web framework, has been attracting more and more users. Today we invited ThinkJS user @lscho to share his experience developing a CMS blog system based on ThinkJS. Let’s take a look at what ThinkJS and Vue.js can do together.

Preface

Some time ago, I used my spare time to rewrite my blog. Besides the basic article and comment systems, I also built a simple plug-in system. The blog uses ThinkJS for the server side; the admin panel separates the front end from the back end, with Vue.js on the front; and the public-facing pages are still rendered on the server with search engines in mind. Here I record the main features and the problems I ran into.

Functional analysis

A complete blog system needs user login, article management, tags, categories, comments, custom configuration, and so on. Based on these features, the following tables are needed at first estimate:

  1. Article table
  2. Comment table
  3. Article category table
  4. Tag table
  5. Article-category mapping table (one-to-many)
  6. Article-tag mapping table (many-to-many)
  7. Configuration table
  8. User table

That makes 8 tables in total. Referencing Typecho’s design and combining it with ThinkJS’s model association feature, I simplified the scheme: the category table and the tag table are merged, and the two mapping tables are merged, which yields the following 6-table design.

  1. Content table - content
  2. Relationship table - relationship
  3. Meta table (categories and tags) - meta
  4. Comment table - comment
  5. Configuration table - config
  6. User table - user
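Since categories and tags now live in one table, a type column tells them apart (this mirrors Typecho’s metas design; the column name and its values below are assumptions). A minimal sketch of the merged model:

// src/model/meta.js, a sketch: categories and tags share one table,
// distinguished by an assumed `type` column
module.exports = class extends think.Model {
    getCategories() {
        return this.where({ type: 'category' }).select();
    }
    getTags() {
        return this.where({ type: 'tag' }).select();
    }
};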

ThinkJS’s model association feature makes it very convenient to handle the category and tag relationships in this table structure. For example, we write the following associations in the content model, src/model/content.js, so that category and tag data are fetched along with articles when querying through the model, instead of manually executing multiple queries.

get relation() {
    return {
        category: {
            type: think.Model.BELONG_TO,
            model: 'meta',
            key: 'category_id',
            fKey: 'id',
            field: 'id,name,slug,description,count'
        },
        tag: {
            type: think.Model.MANY_TO_MANY,
            model: 'meta',
            rModel: 'relationship',
            rfKey: 'meta_id',
            key: 'id',
            fKey: 'content_id',
            field: 'id,name,slug,description,count'
        }
    };
}
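With these relations in place, a single model query returns the associated data as well. A small usage sketch (inside a controller action; the article id is arbitrary):

const post = await this.model('content').where({ id: 1 }).find();
// post.category is the joined meta row: { id, name, slug, description, count }
// post.tag is an array of meta rows with the same fields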

Interface authentication

Once the table structure is designed, all that remains is developing the API. Since the API follows the RESTful specification, it is basically CRUD, and with so few tables there is not much to say about the specifics; here I mainly want to talk about how to verify permissions on all the endpoints.

Because the admin panel separates the front end from the back end, JWT is used for authentication. I already had a rough idea of JWT and had implemented similar functionality before. A search turned up the node-jsonwebtoken package. It is very simple to use, mainly sign and verify functions, and after a bit of fiddling it ran successfully.

Then I happened to browse the ThinkJS repositories and found think-session-jwt, which is also based on node-jsonwebtoken. This is much more convenient: tokens can be generated and verified through ThinkJS’s ctx.session method. The tokenType option determines how the token is obtained; here I use header, which means the token is read from the header of each request, with the configured tokenName as the key.
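A minimal adapter configuration sketch, following think-session-jwt’s documented options; the secret, token name, and expiry below are placeholders:

// src/config/adapter.js, a sketch; secret and tokenName are placeholders
const JWTSession = require('think-session-jwt');

exports.session = {
    type: 'jwt',
    jwt: {
        handle: JWTSession,
        secret: 'your-secret-key',   // signing secret, keep it private
        tokenType: 'header',         // read the token from the request header
        tokenName: 'authorization',  // the header key holding the token
        sign: { expiresIn: '12h' },  // passed to node-jsonwebtoken's sign()
        verify: {}                   // passed to verify()
    }
};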

Back-end permission authentication

Because the API follows a RESTful style and there is no complex concept of role permissions, we simply verify that the token is valid for every non-GET request. ThinkJS controllers provide the __before pre-operation hook, so we can do this check in src/controller/rest.js before the request proceeds.

async __before() {
    this.userInfo = await this.session('userInfo').catch(_ => ({}));

    const isAllowedMethod = this.isMethod('GET');
    const isAllowedResource = this.resource === 'token';
    const isLogin = !think.isEmpty(this.userInfo);

    if (!isAllowedMethod && !isAllowedResource && !isLogin) {
        return this.ctx.throw(401, 'Please log in before performing this operation');
    }
}

node-jsonwebtoken throws an exception when the token is invalid, which is why the .catch is used above.

Detecting token failure on the front end

For security, tokens are generally set to expire, so there are three cases we need to handle.

1. The token does not exist. This is easy to handle: check for the token in the route’s beforeEnter guard and only allow navigation if it is present.

beforeEnter: (to, from, next) => {
    if (!localStorage.getItem('token')) {
        next({ path: '/login' });
    } else {
        next();
    }
}

2. The token is invalid. This requires back-end detection to know whether the token is valid; on failure the server returns a 401 status code for the front end to recognize. We can check it in the axios response interceptor, since 4xx status codes throw an exception, so the code looks like this:

axios.interceptors.response.use(data => {
    // successful responses can be processed here in whatever way is needed
    return data;
}, error => {
    if (error.response) {
        switch (error.response.status) {
            case 401:
                store.commit('clearToken');
                router.replace('/login');
                break;
        }
    }
    return Promise.reject(error.response.data);
});

3. The token has expired. This case could also be left alone, since the response interceptor above already redirects to the login page when a 401 comes back. In practice, though, the experience is poor: the token is saved in the client’s localStorage and is never cleaned up automatically, so if we open the admin panel after the token expires, the interface renders first, then a request returns 401 and the page jumps to the login screen. Consoles with similar authentication, including the Aliyun console and the Qiniu console, show the same flash, which may bother the more obsessive among us. It can be fixed.

Let’s look at how a JWT is structured: a Header, a Payload, and a Signature. Apart from the Header and Signature, the Payload is just plaintext data encoded in Base64, and it contains the claims we set, usually including an expiration time. So we can check whether the token has expired in the route guard, which avoids the double page jump. Decoding the Payload gives something like:

{"userInfo": {"id":1},"iat":1534065923."exp":1534109123}
Copy the code

You can see that exp is the expiration timestamp (in seconds), so we can use it to decide whether the token has expired.

let tokenArray = token.split('.');
if (tokenArray.length !== 3) {
    next('/login');
}
// the payload segment is Base64-encoded JSON, so decode and parse it
let payload = JSON.parse(Base64.decode(tokenArray[1]));
if (Date.now() > payload.exp * 1000) {
    next('/login');
}
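Putting the pieces together, here is a sketch of a guard covering all three cases. It assumes the Base64 object above comes from the js-base64 package, and the route path and component are placeholders:

import { Base64 } from 'js-base64';

// true only if a structurally valid, unexpired token is present
function isTokenValid() {
    const token = localStorage.getItem('token');
    if (!token) return false;
    const parts = token.split('.');
    if (parts.length !== 3) return false;
    try {
        const payload = JSON.parse(Base64.decode(parts[1]));
        return Date.now() < payload.exp * 1000;
    } catch (e) {
        return false;
    }
}

const routes = [{
    path: '/admin',
    component: Admin,
    beforeEnter: (to, from, next) => {
        isTokenValid() ? next() : next({ path: '/login' });
    }
}];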

The Payload is plaintext, so never store sensitive data in a JWT.

Plug-in mechanism

Besides the normal create, read, update and delete features, I also implemented a simple plug-in mechanism in the blog system, which helps decouple the code and makes it more flexible. Sometimes we need to hang a lot of functionality off one particular point. For example, after a user comments, we might need to update the cache, send an email notification, update the article’s comment count, and so on. We might write code like this:

let insertId = await model.add(data);
if (insertId) {
    await this.updateCache();
    await this.push();
    // ...
}

Once any of these methods changes later, updating every call site becomes cumbersome. Anyone who has used a PHP blogging system knows how powerful and convenient plug-ins are, so I decided to implement a plug-in feature.

The desired behavior is to leave a marker (commonly known as a hook) at any point in the program that should be extensible, like this:

let insertId = await model.add(data);
if (insertId) {
    await this.hook('commentCreate', data);
}

Since the program is for my own use and only needs to make later extension convenient, only the core mechanism is implemented. So instead of adding a dedicated plug-in directory, plug-ins live under src/service/, which conforms to the ThinkJS file structure, with one convention: any JS file under src/service/ that provides a registerHook method can be called as a plug-in. For example, if src/service/email.js handles email notifications, add this method to it:

static registerHook() {
    return {
        comment: ['commentCreate']
    };
}

This means that at the commentCreate hook point, the comment method of src/service/email.js will be called.
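For context, a skeleton sketch of what the whole service file might look like; the mail-sending body is omitted and the data fields are assumptions:

// src/service/email.js, a skeleton sketch
module.exports = class extends think.Service {
    static registerHook() {
        return {
            comment: ['commentCreate']
        };
    }

    // invoked by the hook mechanism with the arguments passed to this.hook()
    async comment(data) {
        // send the notification mail here; data.author is an assumed field
        think.logger.info(`new comment from ${data.author}`);
    }
};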

We then extend the Controller with a hook method that calls the corresponding plug-ins for each identifier. We could simply traverse src/service/ on each call to find the matching files and invoke their methods, but considering the potential exceptions and the performance cost of traversing files, I moved this work to service startup: plug-ins are detected once and saved into the configuration. Looking at the ThinkJS startup process, this can go in src/bootstrap/worker.js. The rough code is as follows.

// hooks is keyed by hook name, so use an object rather than an array
const hooks = {};

for (const Service of Object.values(think.app.services)) {
  const isHookService = think.isFunction(Service.registerHook);
  if (!isHookService) {
    continue;
  }

  const service = new Service();
  const serviceHooks = Service.registerHook();
  for (const hookFuncName in serviceHooks) {
    if (!think.isFunction(service[hookFuncName])) {
      continue;
    }

    let funcForHooks = serviceHooks[hookFuncName];
    if (think.isString(funcForHooks)) {
      funcForHooks = [funcForHooks];
    }

    if (!think.isArray(funcForHooks)) {
      continue;
    }

    for (const hookName of funcForHooks) {
      if (!hooks[hookName]) {
        hooks[hookName] = [];
      }
      hooks[hookName].push({ service, method: hookFuncName });
    }
  }
}
think.config('hooks', hooks);

Then the hook method in src/extend/controller.js iterates over the plug-in list and executes each one:

// src/extend/controller.js
module.exports = {
    async hook(name, ...args) {
        const { hooks } = think.config();
        const hookFuncs = hooks[name];
        if (!think.isArray(hookFuncs)) {
            return;
        }
        for (const { service, method } of hookFuncs) {
            await service[method](...args);
        }
    }
};

At this point, the simple plug-in functionality is complete.

It’s also easy to grow this into full plug-ins like those of WordPress and Typecho. Add a plug-in manager to the admin panel with upload support, then give each plug-in an activate function and a disable function. Clicking Activate or Disable in the plug-in manager invokes these two methods, saves the default configuration, and so on. If a plug-in needs to create a data table, the relevant SQL statements can be executed in its activate function. After activation, restart the process for the new code to take effect. For the restart feature, see How can a child process tell the main process to restart the service?
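A sketch of what such a fuller plug-in might add; the method names and comments describe one possible design, not an existing API:

// a hypothetical fuller plug-in skeleton
module.exports = class extends think.Service {
    static registerHook() {
        return { comment: ['commentCreate'] };
    }

    // called when the plug-in is activated in the admin panel
    async activate() {
        // create any tables the plug-in needs here, e.g. by running SQL
        // through the model layer, then save the default configuration
    }

    // called when the plug-in is disabled
    async disable() {
        // remove the plug-in's configuration and stop any scheduled work
    }
};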

Other

Every project runs into some problems during development. Here I share a few of the ones I encountered, in the hope that they help.

Editor and file upload

The Markdown editor was easy to set up with mavonEditor, so no more on that, but there was a problem with file uploads.

The front-end code

<mavon-editor ref=md @imgAdd="imgAdd" class="editor" v-model="formItem.content"></mavon-editor>
Copy the code
imgAdd(pos, $file) {
    var formdata = new FormData();
    formdata.append('image', $file);
    image.upload(formdata).then(res => {
        if (res.errno == 0 && res.data.url) {
            this.$refs.md.$img2Url(pos, res.data.url);
        }
    });
}

The back-end processing

const path = require('path');
const { promisify } = require('util');
const rename = promisify(require('fs').rename);

const file = this.file('image');
const extname = path.extname(file.name);
const filename = path.basename(file.path);
const basename = think.md5(filename) + extname;
const savepath = '/upload/' + basename;
const filepath = path.join(think.ROOT_PATH, 'www' + savepath);
think.mkdir(path.dirname(filepath));
await rename(file.path, filepath);

On Windows, the temporary directory may not be on the same drive as the project directory, and moving the file across drives raises Error: EXDEV, cross-device link not permitted. The workaround is to read the file and write it out again instead of renaming, so a try/catch is used to catch the exception; the root cause is that ThinkJS first puts uploaded files in a temporary directory. For background on rename across drives, see github.com/nodejs/node… The operating system implements rename by simply re-pointing the path at the existing data rather than moving the data itself, so rename cannot cross file systems; you have to copy the data and then delete the old file.
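A sketch of such a fallback, copying and then deleting when rename fails across devices:

const fs = require('fs');
const { promisify } = require('util');
const rename = promisify(fs.rename);
const copyFile = promisify(fs.copyFile);
const unlink = promisify(fs.unlink);

// move a file, falling back to copy + delete when crossing file systems
async function move(src, dest) {
    try {
        await rename(src, dest);
    } catch (err) {
        if (err.code !== 'EXDEV') throw err;
        await copyFile(src, dest);
        await unlink(src);
    }
}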

Alternatively, payload is the middleware that handles uploads, and you can point its upload directory at a temporary directory inside the project. This guarantees that the temporary directory is on the same drive as the project directory:

// src/config/middleware.js
const path = require('path');

module.exports = [
    // ...other middleware...
    {
        handle: 'payload',
        options: {
            uploadDir: path.join(think.ROOT_PATH, 'runtime/data')
        }
    }
];

This allows you to use rename directly.

iView on-demand loading

Because iView is fully imported as a plug-in by default, the bundled file was quite large, so I needed to switch to loading components on demand, following www.iViewui.com/docs/guide/…. But then npm run build failed with an error like ERROR in js/index.c26f6242.js from UglifyJs. Looking at the cause, it is probably that after switching to on-demand loading, the JS files under iView’s src directory are loaded directly, and they use ES6 syntax, which makes minification fail. A search through the issues turned up a solution: github.com/iView/iView…
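A common fix is to let babel-loader also transpile iView’s src directory; a webpack rule sketch, with the exact shape depending on your build setup:

// webpack config sketch: include iview/src so babel transpiles its ES6 code
{
    test: /\.js$/,
    loader: 'babel-loader',
    include: [
        path.resolve(__dirname, 'src'),
        path.resolve(__dirname, 'node_modules/iview/src')
    ]
}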

Deployment

If the front end and back end are not separated, use webpack to compile the front-end entry page index.html into the home page template of the ThinkJS project, compile the assets into the back-end project’s resource folder, and set the corresponding paths. Once the front-end project is integrated into the back-end project this way, deploy it as an ordinary ThinkJS application.

If the front end and back end are separated and deployed as two projects, front-end routing is easy to handle in the default hash mode. If history mode is used, requests must be forwarded to the index.html entry page, a concept similar to the single entry point of some MVC frameworks; from there the front-end project takes over the routing.

location / {
	try_files $uri $uri/ /index.html;
}

Then the back-end requests need handling, or you will face cross-origin issues if the API is not on the same domain. Here the front and back ends use the same domain name, and API requests are reverse-proxied. Note that this block should be placed above the forwarding rule:

set $node_port 8360;

location ~ ^/api/ {
	proxy_pass http://127.0.0.1:$node_port$request_uri;
}

The back end runs under the pm2 daemon.
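A minimal pm2 sketch, assuming an ecosystem file; the app name is a placeholder, and production.js is the usual entry script of a ThinkJS project:

// ecosystem.config.js, a minimal pm2 sketch
module.exports = {
    apps: [{
        name: 'blog',             // placeholder app name
        script: 'production.js',  // ThinkJS's production entry
        instances: 1,
        autorestart: true,
        env: { NODE_ENV: 'production' }
    }]
};

Start it with pm2 start ecosystem.config.js.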

Afterword

The above is a summary of my whole development process for this project and some of the problems I encountered. If you have any questions, feel free to leave a message and discuss. Finally, everyone is welcome to star the blog system built with ThinkJS + Vue.