Fairy – A front-end and back-end separated framework

A complete framework that supports front-end/back-end separation as well as middle-layer isomorphism. It may not be perfect yet, but I will list the problems I ran into while building it, so that anyone who hits the same issues no longer needs to search everywhere. I hope it helps those building their own frameworks. The documentation will be updated and improved regularly, so you can watch the project to follow the updates. I also hope it eventually becomes a complete and polished framework. If these notes help you, please give the project a star, thanks ~ ~

Framework name

Why is it called that?

It was a gift for a cat called Fairy Mo

Planned features

  • Route synchronization: react-router and koa-router are kept in sync
  • Template synchronization: the view layer is shared, implemented with react-dom/server
  • Data synchronization: implemented with Redux and React-Redux
  • css-modules synchronization: ensure the css-modules class names generated on the front and back ends are identical
  • Webpack hot-loading component optimization

How to install

Create a MySQL database using phpMyAdmin or a similar tool, then configure the MySQL database name, user name and password in server/config/db.json.
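The exact contents of server/config/db.json are not shown in this article; a hypothetical example of what such a config file might contain (the key names below are illustrative, not necessarily the ones the project expects):

{
    "database": "fairy",
    "username": "root",
    "password": "your-password",
    "host": "127.0.0.1",
    "dialect": "mysql"
}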

npm i
npm start

Framework advantage

  • Route synchronization (The front and back ends share the same path)
  • Template synchronization (the front and back ends share a set of templates)
  • Data synchronization (front and back ends share a set of data state machines)

Comparing isomorphic loading with the previous non-isomorphic loading, it is clear that the white-screen time is shorter and the overall page load is faster.

Nonisomorphism VS isomorphism

Hot updates of React components, React Router routes, styles, and Redux state are also supported during front-end development.

Want to build a framework like this from start to finish? Well, let me write down the build process in as much detail as possible.

Architecture

For a framework, the most important thing is the architecture. Since we need a framework in which the front end and the middle layer are isomorphic, we have to decide how to organize the folders. To keep the environments used by the front and back ends relatively independent — so the front-end part can be extracted on its own for production, and the back-end part can be displayed and managed more clearly — we decided to split the framework into two parts, a Client folder and a Server folder. The front-end directory structure is fairly fixed and does not need much thought; we just follow the officially recommended layout, separating View, Router and Store. For the server side, after some consideration we still use the classic MVC architecture, so that the control layer, data layer and presentation layer are clearly separated. This helps decouple the back-end business logic and also makes later maintenance, modification and the addition of new features easier.

The directory structure is as follows:

├── client                      Front-end directory
│   ├── assets                  Static resources
│   │   └── dist
│   │       ├── css
│   │       ├── img
│   │       └── js
│   └── src
│       ├── data
│       ├── store
│       │   ├── actions         All actions kept together for easy management
│       │   ├── constants
│       │   └── reducers        Redux reducers
│       ├── dist
│       │   ├── css
│       │   ├── img
│       │   └── js
│       ├── route
│       └── view
└── server                      Back-end directory
    ├── dist
    │   ├── css
    │   ├── img
    │   └── js
    ├── auth                    Authentication directory
    ├── config                  Back-end configuration files such as the database config
    ├── containers              Back-end control layer (the C in MVC)
    ├── models                  Back-end database access code
    ├── route                   Back-end routes
    └── view                    Back-end page wrapper layer; because the interface is shared, the back end is only responsible for wrapping the generated page in its outer shell

Preface

Why do we use middle-layer isomorphism? What is middle-layer isomorphism?

Before isomorphism, the back end would output JSON data through an API, and the front end would receive that data, encapsulate and assemble it. The problem is that whenever the business changes, the interface changes too, and at that point both front-end and back-end developers need to modify their code and business logic. The main problems were as follows:

  • 1. The front and back ends must be developed separately and connected through JSON interfaces
The back end implements the business logic and assembles the JSON data, while the front end receives the JSON and turns it into the interface — both sides pay a high cost
  • 2. SEO problems
With asynchronous loading there is no server rendering, so search-engine spiders cannot crawl the data and there is no SEO

The simple reason is to reduce development cost and improve the overall maintainability of the project. Moreover, this technique effectively reduces page load time and provides excellent SEO capability. It can also take advantage of Node.js's high concurrency and let Node.js handle what it does best, increasing the load capacity of the overall architecture and saving hardware costs.

1. Nodejs environment installation and configuration

Building such a framework must be done in a NodeJS environment, since NodeJS is responsible for the back-end servers, code packaging and optimization

How to install it? It's easy — just download Node.js from the official Node.js website.

Windows: download the official installer directly. Please use the latest version so that newer features such as async/await, which you will need later, are supported.

Mac: I do not recommend installing from the installer package — upgrading is inconvenient, switching versions is inconvenient, and uninstalling is troublesome. On a Mac, install with brew or nvm; you can search for a guide.

2. Setting up webpack and Babel

Once Node.js is ready, the next big hurdle is the build tooling, mainly webpack and Babel. For webpack we use version 2.0, which was only recently released.

Webpack is currently the most popular front-end module management and bundling tool. It packages many loose modules, according to their dependencies and rules, into front-end resources suitable for production deployment. You can also split out modules to be loaded on demand and load them asynchronously when they are actually needed. Through loader conversion, any kind of resource can be treated as a module: CommonJS modules, AMD modules, ES6 modules, CSS, images, JSON, CoffeeScript, LESS, and so on.

As you can see from the official introduction, the tool defines itself as a module management and bundling tool, and that is exactly what it is. In our front-end work it is responsible for automatic packaging: it merges and bundles JS, removes duplicated JS code, and automatically processes things like stylesheets — a whole stack of automation that replaces work we previously had to do by hand.

Babel is a transpiler that converts ES6 into code that can run in today's browsers. It was created by Sebastian McKenzie, a developer from Australia, whose goal was for Babel to handle all of ES6's new syntax, with built-in support for the React JSX extension and Flow type annotations.

Babel will automatically convert the new ES6 and ES7 features we use into ES5 syntax that all browsers are compatible with, and will make your code more formal and tidy.

When I first used these two tools, webpack's configuration felt very troublesome — there were a lot of loaders and all kinds of options — but once I got used to it I found it quite simple.

How to configure webpack 2 and make the environment support hot loading

Since this part is client-side tooling configuration, I will not explain it in exhaustive detail — you can read and learn from the official Chinese site — but there are some pitfalls, so let's look at what the webpack 2 configuration contains. The current environment uses two webpack configuration files.

Why two configuration files? What is devServer.js?

Development environment configuration files (webpack.config.dev.js & devServer.js)

One configuration file (webpack.config.dev.js) is used for the development environment. It supports hot loading and hot replacement, so you can see changes at any time without refreshing. It also supports source maps for debugging pages and scripts. And of course, to make that work we need to run it behind a dev server.

Let’s take a look at the code snippet:

HtmlWebpackPlugin = require('html-webpack-plugin'),
autoprefixer = require('autoprefixer'),
precss = require('precss'),
postcsseasysprites = require('postcss-easysprites'),          // plugins
CommonsChunkPlugin = webpack.optimize.CommonsChunkPlugin       // load utility components

These header lines are basically the plug-ins that will be used, and the comments spell them out, so we can implement the corresponding features by referring to these plug-ins. Let me explain them one by one.

webpack = require('webpack'),

This one should need no explanation.

HtmlWebpackPlugin = require('html-webpack-plugin'),

When referencing this component, we can automatically convert the template to an HTML page and automatically load the webpack-packed CSS links into the page source code

new HtmlWebpackPlugin({
    template: 'src/index.html',   // path of the page template; special templates such as Jade, EJS, Handlebars are supported
    inject: true,                 // inject the generated assets into the template
    minify: {
        removeComments: false,
        collapseWhitespace: false
    },
    chunks: ['index', 'vendor', 'manifest'],  // names of the entries inserted into the file; they must be declared in entry or be chunks extracted with CommonsChunkPlugin
    filename: 'index.html'        // name of the generated HTML file; a path can be prepended
}),
CommonsChunkPlugin = webpack.optimize.CommonsChunkPlugin

When webpack bundles the code, if you do not split the bundle, everything is packed into a single JS file with the name you gave it, for example:

entry: {
        index:  './src/index.js'
}

Without this plugin, packaging would only produce a single index.js, so we need to split the bundle in order to extract the common packages. Why do this? To make use of the browser cache: when a new version of the project is deployed, only the updated bundle is replaced, while the common part stays the same, so users do not need to download the common JS again. This reduces pressure on the server or CDN — in other words, it saves money; servers and CDNs are not free, right?

How is it used? Let's go straight to the code.

Entry configuration

entry: {
        index:
          './src/index.js',
        vendor: [
            'react',
            'react-dom',
            'redux',
            'react-redux',
            'react-router',
            'axios'
        ]
    }

You can see that the vendor entry lists the common modules used here; they are packaged separately so the plugin can handle them. The index entry is the script for our single-page application, which is entirely our own code.

Plugin configuration

new webpack.optimize.CommonsChunkPlugin({
    names: ['vendor', 'manifest'],   // names of the chunks to split out
    filename: jsDir + '[name].js'    // output structure, here generated from the path and chunk name
}),

Why is there a manifest? Webpack 2 uses it to store module relationships, links and so on. If we do not extract this module, the vendor bundle changes on every build, and we lose the whole point of keeping the vendor bundle unchanged when replacing resources. So each time the project is updated, only index.js and manifest.js need to be replaced.

precss = require('precss'),
postcsseasysprites = require('postcss-easysprites')

PostCSS, LESS and Sass are all CSS pre-processors. Their main purpose is to make writing CSS easier and faster and to support programming features such as loops and variables. Here we chose PostCSS, and the reason is simple: it is plugin-based, so you can load only what you need, and its plugins provide the features of Sass and Less.

Let's look at how webpack processes files. Webpack uses loaders for this, with different loaders handling different file types and features. Look at the code:

module: {
    // loader configuration
    rules: [
        {
            test: /\.css$/,
            use: [
                { loader: "style-loader" },
                {
                    loader: "css-loader",
                    options: {
                        modules: true,
                        camelCase: true,
                        localIdentName: "[name]_[local]_[hash:base64:3]",
                        importLoaders: 1,
                        sourceMap: true
                    }
                },
                {
                    loader: "postcss-loader",
                    options: {
                        sourceMap: true,
                        plugins: () => [
                            precss(),
                            autoprefixer({ browsers: ['last 3 version', 'ie >= 10'] }),
                            postcsseasysprites({ imagePath: '../img', spritePath: './assets/dist/img' })
                        ]
                    }
                }
            ]
        },
        {
            test: /\.css$/,
            exclude: [path.resolve(srcDir, cssDir)],
            use: [
                { loader: "style-loader" },
                {
                    loader: "css-loader",
                    options: { importLoaders: 1, sourceMap: true }
                },
                {
                    loader: "postcss-loader",
                    options: {
                        sourceMap: true,
                        plugins: () => [
                            precss(),
                            autoprefixer({ browsers: ['last 3 version', 'ie >= 10'] }),
                            postcsseasysprites({ imagePath: '../img', spritePath: './assets/dist/img' })
                        ]
                    }
                }
            ]
        },
        {
            test: /\.js$/,
            exclude: /node_modules/,
            use: [
                {
                    loader: "babel-loader",
                    options: { presets: ['react-hmre'] }
                }
            ]
        },
        {
            test: /\.(png|jpeg|jpg|gif|svg)$/,
            use: [
                {
                    loader: "file-loader",
                    options: { name: 'dist/img/[name].[ext]' }
                }
            ]
        }
    ]
},

Very long. Let’s break it down:

CSS handling — I won't go over the exact syntax, just what it is and why we do it this way.

{
    test: /\.css$/,
    use: [
        { loader: "style-loader" },                 // handles basic CSS styles
        {
            loader: "css-loader",
            options: {
                modules: true,                       // enable css-modules
                camelCase: true,                     // allow camelCase class/id names
                localIdentName: "[name]_[local]_[hash:base64:3]",  // css-modules class-name format
                importLoaders: 1,                    // loaders applied to @import-ed CSS
                sourceMap: true
            }
        },
        {
            loader: "postcss-loader",                // PostCSS; loads the PostCSS plugin modules
            options: {
                sourceMap: true,
                plugins: () => [
                    precss(),                        // supports some Sass-like features
                    autoprefixer({ browsers: ['last 3 version', 'ie >= 10'] }),   // adds vendor prefixes automatically
                    postcsseasysprites({ imagePath: '../img', spritePath: './assets/dist/img' })   // CSS sprite support
                ]
            }
        }
    ]
},
{
    test: /\.css$/,
    exclude: [path.resolve(srcDir, cssDir)],
    use: [
        { loader: "style-loader" },
        {
            loader: "css-loader",
            options: { importLoaders: 1, sourceMap: true }
        },
        {
            loader: "postcss-loader",
            options: {
                sourceMap: true,
                plugins: () => [
                    precss(),
                    autoprefixer({ browsers: ['last 3 version', 'ie >= 10'] }),
                    postcsseasysprites({ imagePath: '../img', spritePath: './assets/dist/img' })
                ]
            }
        }
    ]
},

After looking at the code, let's focus on css-modules. The main reason for adopting css-modules is that class names are automatically namespaced and hashed, so styles never collide. The obvious benefit is that you no longer need to worry about style name conflicts when collaborating. In addition, css-modules encourages flatter selectors with fewer parent references, which keeps styles generic and makes them easier to reuse.

The reason for two CSS loader rules is simple: to prevent css-modules from being applied to styles that come from other plug-ins or modules, we add a second rule so that CSS files outside the specified folder are not affected by css-modules. That rule uses exclude: [path.resolve(srcDir, cssDir)] and drops the css-modules options.

Let’s look at the configuration of the output

output: {
    path: assetsDir,                 // output path of the generated files
    filename: jsDir + '[name].js',   // output file name format
    publicPath: '/'                  // public path prepended to all resource URLs; its exact purpose is explained later when covering the build directory
},

Finally, let's look at the devtool option. With it, source maps are generated automatically, which is convenient for debugging and development. After all, minified code cannot be debugged, while a source map restores the original code and points to the right location.

  devtool: 'source-map',

Because this is the development environment, one important feature we need is React hot loading. What is hot loading? Using webpack-dev-server and react-hot-loader 3, changes to the React code and to Redux state handling can be applied without refreshing the page — the page is hot loaded and replaced in place. Let's look at the configuration:

Create devServer.js to run and configure the dev server and attach the react-hot-loader 3 plugin, with the following code:

// load Node's path module
const path = require('path');
// load webpack
const webpack = require('webpack');
// const express = require('express');
// load webpack-dev-server
const WebpackDevServer = require('webpack-dev-server');
// load the webpack config file
const config = require('./webpack.config.dev');

// create the dev server
var creatServer = () => {
    let compiler = webpack(config);
    let app = new WebpackDevServer(compiler, {
        publicPath: config.output.publicPath, // output path; the files are built in memory, so you will not see them on disk
        hot: true,                            // enable hot loading
        historyApiFallback: true,
        stats: { colors: true }
    });
    // listen on port 5000
    app.listen(5000, function(err) {
        if (err) {
            console.log(err);
        }
        console.log('Listening at localhost:5000');
    });
};

// start the server
creatServer();

webpack-dev-server is a small Express-based server that uses Node.js to provide a simple local server and supports hot replacement. Its main job is to watch the webpack build and hot-load the application, but this plugin alone does not give the full hot-loading effect. For example, we run into problems with Redux, because a plain hot replacement does not preserve state, so every time we save, the React components lose their state. We therefore need another plugin, react-hot-loader, to solve this. Let's see how to use it — there are several steps.

Step 1: add the first three lines below to the entry configuration

entry: {
    index: [
        'react-hot-loader/patch',
        'webpack-dev-server/client?http://0.0.0.0:5000',
        'webpack/hot/only-dev-server',
        './src/index.js'
    ]
}

Step 2: Add hot update plug-ins to webpack

new webpack.HotModuleReplacementPlugin(),

Step 3: add the corresponding hot loading module in Babel

We need to add a .babelrc file in the root directory to hold the Babel configuration.

We just need to add the hot-loading plugin:

{
    "plugins": [ "react-hot-loader/babel" ]
}

So let’s look at the configuration of Babel

{
    "presets": ["react", "es2015", "stage-0", "stage-1", "stage-2", "stage-3"],
    "plugins": ["transform-class-properties", "transform-es2015-modules-commonjs", "transform-runtime","react-hot-loader/babel"]
}

We use quite a few transforms and presets in Babel, and they are easy to install. The packages we use are as follows:

"Babel - cli" : "^ 6.23.0", "Babel - core" : "^ 6.6.5", "Babel - eslint" : "^ 6.1.0", "Babel - loader" : "^ 6.2.4," "Babel - plugin - transform - class - the properties" : "^ 6.11.5", "Babel - plugin - transform - es2015 modules -- commonjs" : "^ 6.23.0", "Babel - plugin - transform - react - JSX" : "^ 6.23.0", "Babel - plugin - transform - the require - ignore" : "^ hundreds," "Babel - plugin - transform - runtime" : "^ 6.23.0", "Babel - polyfill" : "^ 6.23.0", "Babel - preset - es2015" : "^ 6.3.13 Babel - preset -", "react" : "^ 6.3.13", "Babel - preset - react - hmre" : "^ 1.1.1", "Babel - preset - stage - zero" : "^ 6.22.0", "Babel - preset - stage - 1" : "^ 6.22.0", "Babel - preset - stage - 2" : "^ 6.13.0", "Babel - preset - stage - 3" : "^ 6.22.0 Babel -", "register" : "^ 6.23.0", "Babel - runtime" : "^ 6.23.0",Copy the code

Finally, let's take a look at the production webpack configuration file.

ExtractTextPlugin = require('extract-text-webpack-plugin'),

When webpack bundles the code, you can see that styles are injected directly into the page, so if you want the styles extracted into a separate file you need this plugin. With it, the page references the CSS via a link tag. For larger stylesheets it is better to let the browser cache the CSS, which reduces the amount of data the server has to send. The configuration is as follows:

new ExtractTextPlugin('dist/css/style.css'),

Here all styles are merged into a single style.css file, but of course you can also split them up:

new ExtractTextPlugin(cssDir + '[name].css'),
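Note that for the extraction to actually happen, the CSS rule in the production config also has to wrap its loaders with ExtractTextPlugin.extract instead of falling through to style-loader. A minimal sketch of that rule, assuming extract-text-webpack-plugin v2 and the same loaders as the dev config:

// Sketch of the production CSS rule (extract-text-webpack-plugin v2 syntax); options shortened
{
    test: /\.css$/,
    use: ExtractTextPlugin.extract({
        fallback: "style-loader",   // used when the CSS cannot be extracted (e.g. lazy-loaded chunks)
        use: [
            { loader: "css-loader", options: { modules: true, importLoaders: 1 } },
            { loader: "postcss-loader" }
        ]
    })
},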
// load the JS compression plugin
UglifyJsPlugin = webpack.optimize.UglifyJsPlugin,

This loads the compression plugin, which minifies the JS into the most compact form and greatly reduces the size of the generated files. The configuration is as follows:

new UglifyJsPlugin({
    // most compact output
    beautify: false,
    // remove all comments
    comments: false,
    compress: {
        // do not output warnings when unused code is removed
        warnings: false,
        // remove all `console` statements (also helps IE compatibility)
        drop_console: true,
        // inline variables that are defined but used only once
        collapse_vars: true,
        // extract static values that appear multiple times into variables
        reduce_vars: true,
    }
})

OK, the webpack and Babel configuration is done and we can start development. The hardest part of this section used to be the lack of material; now that there is an official Chinese site, things are better.

3. Introducing React

React is the core of the framework, and the main "black magic" is built on the methods React provides. Let's first look at the React lifecycle.

ReactJS life cycle

The life cycle of ReactJS can be divided into three phases: instantiation, existence, and destruction

Instantiation period

First instantiation

  • getDefaultProps
  • getInitialState
  • componentWillMount
  • render
  • componentDidMount

GetDefaultProps => props => state => mount => Render => Mounted

Existence period

When a component already exists and its state changes

  • componentWillReceiveProps
  • shouldComponentUpdate
  • componentWillUpdate
  • render
  • componentDidUpdate

ReceiveProps => shouldUpdate => Update => render => updated

Destruction period

componentWillUnmount

The role of the 10 APIs in the lifecycle

  • getDefaultProps is called only once per component class; it returns an object used to set the default props, and for reference values that object is shared across instances

  • getInitialState works on the component instance and is called once when the instance is created, to initialize each instance's state; at this point this.props is already accessible

  • componentWillMount is called before the first render; the component's state can still be modified here

  • render is the only mandatory method and creates the virtual DOM; it has special rules:

Only this.props and this.state may be accessed; it may return null, false, or any React component; and only one top-level component may be returned
  • componentDidMount is called after the real DOM has been rendered. In this method you can get the real DOM element via this.getDOMNode() and manipulate the DOM with other libraries. It is not called on the server

  • componentWillReceiveProps is called when the component receives new props, which are passed in as the nextProps parameter. Here you can adjust the component's props and state

  • shouldComponentUpdate decides whether the component should re-render for the new props or state; returning false skips the remaining lifecycle methods. It is generally best left alone to avoid bugs, but it is an optimization point when the application hits performance bottlenecks.

  • componentWillUpdate is called after new props or state are received, just before rendering. Updating props or state is not allowed here

  • componentDidUpdate is called after rendering with the new props or state; the DOM elements can be accessed at this point.

  • componentWillUnmount is called before the component is removed. It can be used for cleanup: everything set up in componentDidMount, such as timers or event listeners, should be torn down here.

var React = require("react");
var ReactDOM = require("react-dom");

var NewView = React.createClass({
    // 1. Creation phase
    getDefaultProps: function() {
        console.log("getDefaultProps");
        return {};
    },
    // 2. Instantiation phase
    getInitialState: function() {
        console.log("getInitialState");
        return {
            num: 1
        };
    },
    // called before render; business logic such as state changes should go here
    componentWillMount: function() {
        console.log("componentWillMount");
    },
    // renders and returns a virtual DOM
    render: function() {
        console.log("render");
        return (
            <div>
                hello <strong> {this.props.name} </strong>
            </div>
        );
    },
    // happens after render; ReactJS uses the virtual DOM object returned by render to create the real DOM structure
    componentDidMount: function() {
        console.log("componentDidMount");
    },
    // 3. Update phase
    componentWillReceiveProps: function() {
        console.log("componentWillReceiveProps");
    },
    // does the component need to update?
    shouldComponentUpdate: function() {
        console.log("shouldComponentUpdate");
        return true;
    },
    // about to update; state and props must not be modified in this method
    componentWillUpdate: function() {
        console.log("componentWillUpdate");
    },
    // update finished
    componentDidUpdate: function() {
        console.log("componentDidUpdate");
    },
    // 4. Destruction phase
    componentWillUnmount: function() {
        console.log("componentWillUnmount");
    },
    // handle the click event
    handleAddNumber: function() {
        this.setProps({ name: "newName" });
    }
});

ReactDOM.render(<NewView name="ReactJS"></NewView>, document.body);

Now let's talk about the "black magic" called React isomorphism.

The official website provides us with two methods for the server to render the page

  • React.renderToString converts a React element into an HTML string. Because the markup (including the reactid attributes) has already been rendered on the server, the browser does not render it again from scratch, which gives a high-performance first page load. The isomorphic "dark magic" comes primarily from this API.

  • React.renderToStaticMarkup is a simplified version of renderToString. If your application is basically static text, this method is recommended: it omits the many reactid attributes, which naturally slims down the DOM tree and saves some traffic on the wire.

When renderToString is used, the server render produces an HTML string carrying identifiers. When the page's JS runs on the client, it detects these identifiers in the HTML and does not re-render the page; it only binds events, saving the React rendering work.

So when the user refreshes the page, the back end renders it and sends the HTML string to the front end, which only has to bind events. If the user then clicks a link inside React, React renders and fills the page itself. Without a refresh you can switch pages almost instantly, and on refresh the interface is shown immediately without waiting for the JS to finish loading. That is one of the great advantages of isomorphism.

Now that we understand how this works, let's look at how the front and back ends share the same interface layer.

So how do we do that

Our interface layer is very simple. We create a file in the client/view folder and write the React component as follows:

"use strict"; import React, {Component} from 'react'; import ReactDOM, {render} from 'react-dom'; import Nav from '.. /view/nav.js'; import LoginForm from '.. /view/components/login_form'; import logo_en from '.. /dist/img/text_logo.png'; import logo_cn from '.. /dist/img/text_logo_cn.png'; import '.. /dist/css/reset.css'; import Login from '.. /dist/css/login.css'; import Style from '.. /dist/css/style.css'; class App extends Component { constructor(props) { super(props); // initialize method, Render () {return (<div> <Nav/> <div className={login.banner}> <p className={Login.text_logo}> <img width="233" src={logo_en}/> </p> <p className={Login.text_logo_cn}> <img width="58" SRC ={logo_cn}/> </p> </div> <LoginForm/> <div className={login. form_reg}> No account? ListenLite</a> </div> </div>); }}; export default App;Copy the code

The component above is very simple. How does the back end use the corresponding method to do server rendering? It is also very simple — let's look at the code:

"use strict"; import React from 'react'; import {renderToString, renderToStaticMarkup} from 'react-dom/server'; import {match, RouterContext} from 'react-router'; import {layout} from '.. /view/layout.js'; import {Provider} from 'react-redux'; import bcrypt from 'bcrypt'; import jwt from 'jsonwebtoken'; import passport from 'koa-passport'; import routes from '.. /.. /client/src/route/router.js'; import configureStore from '.. /.. /client/src/store/store.js'; import db from '.. /config/db.js'; import common from '.. /.. /common.json'; const User = db.User; //get page and switch json and html export async function index(ctx, next) { console.log(ctx.state.user, ctx.isAuthenticated()); if (ctx.isAuthenticated()) { ctx.redirect('/'); } switch (ctx.accepts("json", "html")) { case "html": { match({ routes, location: ctx.url }, (error, redirectLocation, renderProps) => { if (error) { console.log(500) } else if (redirectLocation) { console.log(302) } else if (renderProps) { //iinit store let loginStore = { user: { logined: ctx.isAuthenticated() } }; const store = configureStore(loginStore); ctx.body = layout(renderToString( <Provider store={store}> <RouterContext {... renderProps}/> </Provider> ), store.getState()); } else { console.log(404); } }) } break; Case "json": {let callBackData = {'status': 200, 'message': 'this is the login page ', 'data': {}}; ctx.body = callBackData; } break; default: { // allow json and html only ctx.throw(406, "allow json and html only"); return; }}};Copy the code

First create a controller in the Server/Containers folder that corresponds to the page above, and then import the required methods

import {renderToString, renderToStaticMarkup} from 'react-dom/server';

At this point we can use server-side rendering. You can see the part that adds the rendering: we render the React components directly to HTML. I won't go into detail here — it is explained together with the other isomorphic parts later; for now you just need to understand how it is invoked:

renderToString(
    <Provider store={store}>
        <RouterContext {...renderProps}/>
    </Provider>
), store.getState()

The generated string looks like this

<div data-reactroot="" data-reactid="1" data-react-checksum="978259924">
  <ul class="style_nav_2Lm" data-reactid="2">
    <li class="style_fl_10U" data-reactid="3"><a href="/" data-reactid="4">Home</a></li>
    <li class="style_fl_10U" data-reactid="5"><a href="/404" data-reactid="6">Stream</a></li>
    <li data-reactid="7"><a href="/" data-reactid="8"><i class="style_logo_2Hq" data-reactid="9"></i></a></li>
    <li class="style_login_visable_2GR" data-reactid="10"><img src="/dist/img/user_1.png" data-reactid="11"/>
      <dl data-reactid="12">
        <a href="/" data-reactid="13"><dt data-reactid="14">My homepage</dt></a>
        <a href="/" data-reactid="15"><dt data-reactid="16">My uploads</dt></a>
        <a href="/logout" data-reactid="17"><dt data-reactid="18">Log out</dt></a>
      </dl>
    </li>
    <li class="style_fr_Bxu" data-reactid="19"><a href="/reg" data-reactid="20"><b data-reactid="21">Sign up</b></a></li>
    <li class="style_fr_Bxu" data-reactid="22"><a href="/login" data-reactid="23">Log in</a></li>
  </ul>
  <div class="login_banner_eub" data-reactid="24">
    <p class="login_text_logo_3fN" data-reactid="25"><img width="233" src="/dist/img/text_logo.png" data-reactid="26"/></p>
    <p class="login_text_logo_cn_iYZ" data-reactid="27"><img width="58" src="/dist/img/text_logo_cn.png" data-reactid="28"/></p>
  </div>
  <form data-reactid="29">
    <div class="login_tips_1nU" data-reactid="30"></div>
    <ul class="login_form_HMj" data-reactid="31">
      <li data-reactid="32"><i class="login_segmentation_eZc" data-reactid="33"></i></li>
      <li data-reactid="34"><b data-reactid="35">Log in to ListenLite</b></li>
      <li class="login_form_border_3hw" data-reactid="36"><input type="text" name="username" value="" placeholder="Username" data-reactid="37"/></li>
      <li class="login_form_pw_2rP" data-reactid="38"><input type="password" name="password" value="" placeholder="Password" data-reactid="39"/></li>
      <li data-reactid="40"><input type="checkbox" class="login_remmber_input_28B" id="remmberPw" data-reactid="41"/><label for="remmberPw" class="login_remmber_pw__H2" data-reactid="42">Remember password</label></li>
      <li data-reactid="43"><button class="login_form_submit_2A1" disabled="" type="submit" data-reactid="44">Log in</button></li>
    </ul>
  </form>
  <div class="login_form_reg_32l" data-reactid="45"><!-- react-text: 46 -->No account? <!-- /react-text --><a href="#" data-reactid="47">Sign up for ListenLite now</a></div>
</div>

Here we can see that we render the page component directly, but there is no outer document layer (the role index.html plays on the client). We need to write a document layer on the server side to wrap the generated code, which is why there is a layout.js file in server/view.

'use strict';
import common from '../../common.json';

exports.layout = function(content, data) {
    return `
    <!DOCTYPE html>
    <html>
    <head>
        <meta charSet='utf-8'/>
        <meta httpEquiv='X-UA-Compatible' content='IE=edge'/>
        <meta name='renderer' content='webkit'/>
        <meta name='keywords' content='demo'/>
        <meta name='description' content='demo'/>
        <meta name='viewport' content='width=device-width, initial-scale=1'/>
        <link rel="stylesheet" href="/dist/css/style.css">
    </head>
    <body>
        <div id="root"><div>${content}</div></div>
        <script>
            window.__REDUX_DATA__ = ${JSON.stringify(data)};
        </script>
        <script src="${common.publicPath}dist/js/manifest.js"></script>
        <script src="${common.publicPath}dist/js/vendor.js"></script>
        <script src="${common.publicPath}dist/js/index.js"></script>
    </body>
    </html>
    `;
};

All this does is wrap the generated content in the outer document shell.

4. Synchronize front-end and back-end routes

Now that we understand the React lifecycle and have seen how the server uses the two official server-rendering methods, let's look at how to share one set of routes between the front and back ends.

What is React Router? Take a look at the official introduction:

React Router is the complete React routing solution

React Router keeps the UI and the URL in sync. It has a simple API with powerful features such as lazy code loading, dynamic route matching, and correct location transition handling built in. Make the URL your first thought, not an afterthought.

Traditionally the browser sends a request to the back-end server and the back end responds with the content. Now React Router takes over, and navigation is implemented on the front end. Why is this possible? Thanks to HTML5's new History API.

The HTML5 History API makes it possible to change the address-bar URL without a refresh, and combined with AJAX it enables refresh-free navigation. Simply put: assuming the current page is renfei.org/, execute the following JavaScript statement: window.history.pushState(null, null, "/profile/"); The address bar then changes to renfei.org/profile/, but the browser does not refresh the page or even check whether the target page exists.

So what does React Router do in middle-layer isomorphism? It is, of course, used to synchronize the routes between the front and back ends:

  • When a user first visits a page, the server route handles the request and outputs the page content
  • When the user clicks a link on the client, the client route handles it, rendering and displaying the related components
  • When the user refreshes the page after a front-end navigation, the server route intercepts the request and the server renders and returns the page content

Let's look at the code:

The front-end route Settings are as follows:

const Routers = (
    <Router history={browserHistory}>
        <Route path="/" component={Home}/>
        <Route path="/user" component={User}/>
        <Route path="/login" component={Login}/>
        <Route path="/reg" component={Reg}/>
        <Route path="/logout" component={Logout}/>
        <Route path="*" component={Page404}/>
    </Router>
);

export default Routers;

You can see that we use browserHistory, which relies on the HTML5 History API and therefore has browser requirements — it is not supported by IE6-8. The alternative is hashHistory; the difference is that the hash approach turns links into the form site.com/#/index, using the anchor feature so that the browser history can still record each page switch.

Take a look at the configuration snippet on the server side:

const router = new Router();

//Index page route
router.get('/', require('../containers/index.js').index);
//User page route
router.get('/user', require('../containers/user.js').index);
router.get('/get_user_info', require('../containers/user.js').getUserInfo);
//404 page route
router.get('/404', require('../containers/404.js').index);
//Login page route
router.get('/login', require('../containers/login.js').index);
router.post('/login', require('../containers/login.js').login);
router.get('/logout', require('../containers/login.js').logout);
//Reg page route
router.get('/reg', require('../containers/reg.js').index);
router.post('/reg_user', require('../containers/reg.js').reg);
router.post('/vaildate_user', require('../containers/reg.js').vaildate_user);
router.post('/vaildate_email', require('../containers/reg.js').vaildate_email);

//set a router
module.exports = router.routes()

Our server uses Koa 2, so the router attached to it is the corresponding koa-router. Since the back-end architecture is MVC, let's look at a snippet of the controller layer that the routes point to:

import routes from '../../client/src/route/router.js';

export async function index(ctx, next) {
    console.log(ctx.state.user, ctx.isAuthenticated());
    switch (ctx.accepts("json", "html")) {
        case "html": {
            match({ routes, location: ctx.url }, (error, redirectLocation, renderProps) => {
                if (error) {
                    console.log(500)
                } else if (redirectLocation) {
                    console.log(302)
                } else if (renderProps) {
                    // init store
                    let loginStore = { user: { logined: ctx.isAuthenticated() } };
                    const store = configureStore(loginStore);
                    console.log(store.getState());
                    ctx.body = layout(renderToString(
                        <Provider store={store}>
                            <RouterContext {...renderProps}/>
                        </Provider>
                    ), store.getState());
                } else {
                    console.log(404);
                }
            })
        }
        break;
        case "json": {
            let callBackData = { 'status': 200, 'message': 'this is home', 'data': {} };
            ctx.body = callBackData;
        }
        break;
        default: {
            // allow json and html only
            ctx.throw(406, "allow json and html only");
            return;
        }
    }
};

The match method of React Router is used here. It reads the front-end route file and, based on the module that matches the requested path, hands back the matched components, which the React server render then renders directly.

You can also see the route design: since Koa 2 is used on the back end, we can inspect the type of content the request accepts. The same address can be requested, and depending on whether HTML or JSON is expected, a different data structure is returned — which makes full use of each URL and leads to a richer use of the routes.
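As a rough illustration (not code from this project), this is how the front end might request the JSON form of the same URL with axios, relying on the Accept header that ctx.accepts() switches on:

// Illustrative only: fetch the JSON representation of a route that also serves HTML
import axios from 'axios';

axios.get('/user', { headers: { 'Accept': 'application/json' } })
    .then(res => {
        // res.data is the callBackData object: { status, message, data }
        console.log(res.data);
    })
    .catch(err => console.log(err));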

5. Synchronization of front-end and back-end state control

What is Redux?

Redux provides a one-way data flow similar to Flux, maintains only one store for the whole application, and its functional orientation makes it friendly to server-side rendering.

The official advice is: if you don't need it, don't use it. But I found during development that once you use Redux and the application gets complicated, it really pays off. As we know, when we need to modify React state or the interface, we never manipulate the DOM directly; we only change state, which then updates the DOM. However, once the program becomes complicated, the scattered component state becomes hard to maintain and modify. With Redux, the state of all components is controlled uniformly at the top level, which reduces state interaction between components, reduces the coupling of the program, and makes it more maintainable.

Redux is genuinely hard to understand at first, especially its Flux architecture, but after carefully reading the official documentation it turns out to be very simple data handling. All data-state changes go through actions and reducers, and data may not be manipulated anywhere else, which guarantees data consistency. You can also keep every state, so you can return to any previous state — which is really refreshing for complex applications, and even for some game-like applications.
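The configureStore used below comes from client/src/store/store.js, which is not reproduced in this article. A minimal sketch of what such a file could look like, assuming a single user reducer with a logined flag (the names are illustrative, not the project's actual code):

// Hypothetical minimal client/src/store/store.js
import { createStore, combineReducers } from 'redux';

// reducer handling the user slice; `logined` mirrors the flag the server puts into the initial state
const user = (state = { logined: false }, action) => {
    switch (action.type) {
        case 'LOGIN_SUCCESS':
            return { ...state, logined: true };
        case 'LOGOUT':
            return { ...state, logined: false };
        default:
            return state;
    }
};

const rootReducer = combineReducers({ user });

// create the store from an optional preloaded state (e.g. window.__REDUX_DATA__ or the server-side login state)
export default function configureStore(initialState) {
    return createStore(rootReducer, initialState);
}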

Let's see how Redux and React-Redux unify the state.

Entry page code that binds the server-provided state on the client:

// create the store from the state the server wrote into window.__REDUX_DATA__
let store = configureStore(window.__REDUX_DATA__);

const renderIndex = () => {
    render((
        <div>
            <Provider store={store}>
                {routes}
            </Provider>
        </div>
    ), document.getElementById('root'))
};

renderIndex();
// subscribe so that the page re-renders from the unified store whenever the state changes
store.subscribe(renderIndex);

To synchronize state on the server and ensure that, when the user refreshes, the page reads the latest state, the server creates the store with the same creation method as the client:

server/containers/login.js 

// import the client's store-creation method
import configureStore from '../../client/src/store/store.js';

// build the initial state for this request
let loginStore = { user: { logined: ctx.isAuthenticated() } };
const store = configureStore(loginStore);

ctx.body = layout(renderToString(
    <Provider store={store}>
        <RouterContext {...renderProps}/>
    </Provider>
), store.getState());

Finally, we pass the state to the page through a window object, so that back on the front end the state can be read directly to generate the corresponding interface, as shown in the code:

server/view/layout.js

<script>
  window.__REDUX_DATA__ = ${JSON.stringify(data)};
 </script>

6. Server selection – KOA2

Now that we have covered the three kinds of synchronization in the isomorphism — why we do it and how — let's look at the server-side architecture and some of the components we use.

For the server framework I chose Koa 2 over Express. The reason is simple: its architecture is lighter, its middleware mechanism is better, and its performance is strong. It is written against the ES6 standard, which is very friendly for using the new ES6 features.

To start a server, we create an app.js file in the root directory, and then write the corresponding code to create a KOA server

const Koa = require('koa');
const app = new Koa();

// response
app.use(ctx => {
  ctx.body = 'Hello Koa';
});

app.listen(3000);

For other Koa-related documentation, please refer to the official Chinese docs. The various middleware used here are listed below.

7. Introduction and usage of some KOA middleware

  router = require('koa-router')(),

Essential Koa middleware. With it you can define the back-end routes on the server, handle the different request methods (GET, POST, DELETE, PUT, etc.) by configuring routes, and return the body content of the response.

  logger = require('koa-logger'),

A Koa logging plugin that prints request and error information of all kinds; mainly used for debugging and monitoring the server's state.

  bodyParser = require('koa-bodyparser')

This plugin deserves a mention: when I handled form requests, Koa could not read the form data on its own, so this middleware is needed to parse the request body. For example, form data, JSON, or file uploads sent via POST are not easy to read in bare Koa; after koa-bodyparser parses them, the body can be read directly from the request context.
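A minimal sketch of how these three middleware might be wired together in app.js (the project's actual app.js may differ):

// Sketch only: wiring koa-logger, koa-bodyparser and koa-router into a Koa app
const Koa = require('koa');
const logger = require('koa-logger');
const bodyParser = require('koa-bodyparser');
const Router = require('koa-router');

const app = new Koa();
const router = new Router();

// log every request, for debugging and monitoring
app.use(logger());
// parse form / JSON bodies so handlers can read ctx.request.body
app.use(bodyParser());

// a simple POST route that echoes the parsed body
router.post('/echo', async (ctx) => {
    ctx.body = { received: ctx.request.body };
});

app.use(router.routes());
app.listen(3000);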

For database access, Sequelize is used; it can operate on several kinds of databases, and its style of use feels much like working with MongoDB, which is very convenient. For details, see server/models.
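The models themselves are not reproduced in this article; a minimal sketch of what a Sequelize model definition in server/models could look like (the connection values and field names here are purely illustrative):

// Hypothetical sketch of a Sequelize model definition
const Sequelize = require('sequelize');

// connection settings would normally come from server/config/db.json
const sequelize = new Sequelize('fairy', 'db_user', 'db_password', {
    host: 'localhost',
    dialect: 'mysql'
});

// define a User model with a username and a hashed password
const User = sequelize.define('user', {
    username: { type: Sequelize.STRING, unique: true },
    password: { type: Sequelize.STRING }
});

module.exports = { sequelize, User };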

passport – verifying user permissions

For authentication we chose the most common Node.js authentication component, which also supports OAuth, OAuth2 and OpenID standard logins.
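The actual strategy setup lives in the auth directory and is not shown here; a minimal sketch of a koa-passport local strategy, with the real user lookup omitted, might look like this:

// Sketch only: a koa-passport local strategy; the real code in server/auth will differ
const passport = require('koa-passport');
const LocalStrategy = require('passport-local').Strategy;

// store only the user id in the session
passport.serializeUser((user, done) => done(null, user.id));
// rebuild the user object from the id on each request (real lookup omitted)
passport.deserializeUser((id, done) => done(null, { id }));

passport.use(new LocalStrategy((username, password, done) => {
    // in the real project this would query the User model and compare the bcrypt hash
    if (username === 'demo' && password === 'demo') {
        return done(null, { id: 1, username });
    }
    return done(null, false);
}));

module.exports = passport;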

Development BUG Diary

Github.com/aemoe/fairy…

License