This article is my humble take on micro frontends and a summary of the technical implementation of a micro frontend framework I recently wrote: Microcosmos, a micro frontend framework written for fun.

Thanks for your stars; PRs are of course even more welcome.

What is a micro frontend?

I first heard about the concept of a micro frontend about a year ago, when I came across a post on Meituan's tech blog about building single-page applications with a micro frontend. At the time, I didn't even know what a single-page app was. The micro frontend concept is generally credited to ThoughtWorks in 2016. After four years of rapid development, we now have many excellent open source projects such as single-spa, qiankun, icestark, Micro Frontends, and so on.

So what is a micro frontend? A better question to ask is: why do we need one?

You may not know about micro frontends, but you have probably heard of microservices.

Wikipedia explains it this way:

Microservices are a variant of the service-oriented architecture (SOA) style: a software development technique that structures an application as a set of loosely coupled services. In a microservice architecture, services are fine-grained and the protocols are lightweight. Microservices are a design concept organized around business capabilities; each service runs its business function independently and exposes an API (most commonly over HTTP) that is not tied to any particular language, and an application is composed of one or more microservices.

To put it bluntly, microservices emerged mainly to solve the problems caused by overly large and complex monolithic applications. The same goes for the micro frontend. When a traditional SPA gradually evolves into a monolith over continuous iterations, development, deployment and maintenance become extremely difficult, and we urgently need a way to split the frontend application in order to break down that complexity.

Or, as the old saying goes: that which is long divided must unite, and that which is long united must divide.

I'm sure another idea has come to mind at this point: componentization. What's the difference between a micro frontend and componentized development? I think they share the same design intent, microservices included. Componentization mainly aims at better reusability and maintainability, which sounds similar to the micro frontend, but its unit of splitting is the component. The micro frontend decomposes a frontend application into sub-applications that can be developed, tested and deployed independently while still appearing to the user as a single cohesive product; its unit of splitting is the application. And because the sub-applications are developed independently, we expect their technology stacks to be independent as well, which is very important. I don't have enough experience to say much more about this, but this article by the developers of qiankun is a good answer to why stack independence matters so much in the micro frontend: Core values of the micro frontend.

What is the ideal micro frontend? I quite agree with the view that a sub-application should not know it is running as a sub-application. Inter-application communication should still be possible, though, otherwise parent-child communication wouldn't be emphasized so often.

To achieve this vision, we need to integrate multiple independent frontend applications, and there are certainly many ways to do that.

On the frontend side, there are two main types: build-time integration and runtime integration.

Build-time integration is essentially code splitting. What does that mean? We develop the different apps together, configure multiple entries for webpack, and finally build multiple output bundles to split the code. It seems workable at first, but it gives you no sandbox, and you still aren't developing or deploying independently.
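As a minimal sketch of what that build-time setup might look like (entry names and paths here are made up for illustration):

// webpack.config.js - a multi-entry sketch; app names and paths are hypothetical
module.exports = {
  entry: {
    main: './src/main/index.js',
    'sub-vue': './src/sub-vue/index.js',
    'sub-react': './src/sub-react/index.js',
  },
  output: {
    filename: '[name].[contenthash].js', // one bundle per "app"
    path: __dirname + '/dist',
  },
};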

There are two main options for runtime integration. One, I'm sure you all know, is the iframe. In fact, user experience aside, I think the iframe is close to a perfect micro frontend solution. But its problems cannot be ignored: the iframe is fully reloaded on every switch, it behaves poorly on mobile, and it needs help from the server, otherwise you run into cross-domain problems.

Here, we'll talk about another approach: implement a main application that acts as a container, and build the micro frontend by registering child applications in it.

Here is a micro frontend demo I wrote with Microcosmos. The main application contains a Vue app and a React app.

By now I think you know what a micro frontend is. Next, let's look at the technical implementation of Microcosmos.

Microcosmos implementation

The overall architecture

The picture below shows the architecture of Microcosmos. The overall architecture is very simple, and you can tell from the corresponding expressions how satisfied I am with the implementation of each part. Each part is introduced below.

The relevant API

Installation and import

npm i microcosmos

import { start, register, initCosmosStore } from 'microcosmos';

Register child applications

register([
  {
    name: 'sub-react',
    entry: "http://localhost:3001",
    container: "sub-react",
    matchRouter: "/sub-react"
  },
  {
    name: 'sub-vue',
    entry: "http://localhost:3002",
    container: "sub-vue",
    matchRouter: "/sub-vue"
  }
])

start

start()

Main application routing

function App() {
  function goto(title, href) {
    window.history.pushState(href, title, href);
  }
  return (
    <div>
      <nav>
        <ol>
          <li onClick={() => goto('sub-vue', '/sub-vue')}>Sub-application one</li>
          <li onClick={() => goto('sub-react', '/sub-react')}>Sub-application two</li>
        </ol>
      </nav>
      <div id="sub-vue"></div>
      <div id="sub-react"></div>
    </div>
  )
}

Child applications must export lifecycle hook functions

bootstrap, mount, unmount

export async function bootstrap() {
  console.log('react bootstrap')
}

export async function mount() {
  console.log('react mount')
  ReactDOM.render(<App />, document.getElementById('app-react'))
}

export async function unmount() {
  console.log('react unmount')
  let root = document.getElementById('sub-react');
  root.innerHTML = ''
}

Global state communication/storage

There are scenarios that require communication between applications, but in most cases the data volume is small and the frequency is low, so the global store design is also very simple.

In the main application:

  • initCosmosStore: initializes the store

  • subscribeStore: listens for store changes

  • changeStore: dispatches new values to the store

  • getStore: obtains the current store snapshot

let store = initCosmosStore({ name: 'chuifengji' })

store.subscribeStore((newValue, oldValue) => {

  console.log(newValue, oldValue);

})

store.changeStore({ name: 'wzx' })

store.getStore();


In sub-applications:

export async function mount(rootStore) {

  rootStore.subscribeStore((newValue, oldValue) => {
    console.log(newValue, oldValue);
  })

  rootStore.changeStore({ name: 'xjp' }).then(res => console.log(res))

  rootStore.getStore();

  instance = new Vue({
    router,
    store,
    render: h => h(App)
  }).$mount('#app-vue')
}

html-loader

The html-loader obtains the sub-application's information by fetching the page's HTML. A related approach is a js-loader, which is more tightly coupled to the sub-application: the sub-application has to agree with the main application on the mount container and other conventions.

So how does the html-loader work? It's actually very simple: take the application's entry address, such as http://localhost:3001, and call fetch on it. Once we have the HTML as text, we need to extract the parts we need and mount them at the sub-application's mount point. The following diagram shows the element structure of the micro frontend demo above; you can see that the child app is attached under the tag with the id sub-react.

How do you do that?

I think your first instinct would be to use a regex. I started with regexes too, but later found them too hard to get right (forgive my regex blindness): I could always construct examples where my own regex produced the wrong result. Code written with regexes also looks messy and is inconvenient to maintain later. Since it's an HTML string, why not use the DOM API to handle it? The first idea is to create an iframe and load the page through its src attribute. The question is, how do I know when the iframe has finished loading? onload? Obviously not, we just want the data out. DOMContentLoaded? If you write a ready function like the one below, DOMContentLoaded waits for the js to execute before calling back, which may be a bit long for a SPA.

function iframeReady(iframe: HTMLIFrameElement, iframeName: string): Promise<Document> {
    return new Promise(function (resolve, reject) {
        window.frames[iframeName].addEventListener('DOMContentLoaded', () => {
            let html = iframe.contentDocument || (iframe.contentWindow as Window).document;
            resolve(html);
        });
    });
}

So we look for another way: write a polling function that checks whether a body node exists in the DOM. By tuning the polling interval this seems feasible, but we have no way to know the structure of the child application, so relying on body is still not reliable.

function iframeReady(iframe: HTMLIFrameElement): Promise<Document> {
    return new Promise(function (resolve, reject) {
        (function isIframeReady() {
            if (iframe.contentDocument.body || (iframe.contentWindow as Window).document.body) {
                resolve(iframe.contentDocument || (iframe.contentWindow as Window).document)
            } else {
                setTimeout(isIframeReady, 10)
            }
        })()
    })
}

Besides, to get the iframe's contentWindow you need to attach the iframe to the DOM. Yes, you could set it to display: none, but that's inelegant. It just doesn't feel right.

srcdoc? A good choice, but IE does not support this newer attribute.

So: combine the regex with the DOM API. We use a regex to grab the content under the head and body nodes, which is fairly easy, then set it as the innerHTML of a div created with createElement and walk it with the DOM API. The DOM structure is stable, so we can easily and reliably extract what we want, namely the HTML structure and the js.
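A minimal sketch of that idea (not Microcosmos's actual loader; the regex and helper name are simplified for illustration):

// Simplified sketch: fetch the sub-app's HTML, cut out the body with a regex,
// then use the DOM API to walk it and collect the script sources.
async function loadSubAppHtml(entry) {
  const text = await fetch(entry).then(res => res.text());

  // crude regex just to grab everything between <body> and </body>
  const bodyMatch = text.match(/<body[^>]*>([\s\S]*)<\/body>/i);
  const bodyHtml = bodyMatch ? bodyMatch[1] : text;

  // let the browser parse it for us, without attaching it to the page
  const wrapper = document.createElement('div');
  wrapper.innerHTML = bodyHtml;

  // collect script URLs / inline code and strip the script tags from the template
  const scripts = [];
  wrapper.querySelectorAll('script').forEach(node => {
    scripts.push(node.src ? { src: node.src } : { code: node.textContent });
    node.remove();
  });

  return { template: wrapper.innerHTML, scripts };
}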

Js isolation

Is there a perfect sandbox practice for the micro frontend?

Since a micro frontend contains multiple independently developed applications, it is natural to isolate their js by building a sandbox. In the browser, the sandbox sits between the operating system and the rendering engine, limiting a process's access to and modification of operating system resources. More generally, whenever an app needs to execute external js of low trust, you need a sandbox. When we talk about sandboxes we usually emphasize two things: isolation and security. The js sandbox itself is a pretty big rabbit hole, but in most cases code security is not the micro frontend's concern; it is rare for a host application to be unable to trust the child applications it integrates. The sandbox in the micro frontend is therefore mainly about the first layer, isolation, and even there complete isolation is problematic.

Without global objects, the DOM and the BOM, what we would have to do is very simple: with new Function, each application's variables live in function scope, so there is no conflict. But we still have to deal with global variables and with the DOM and BOM, especially since most frameworks patch native objects. So how do we isolate window?
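As a rough illustration of the idea (a sketch, not Microcosmos's exact runner; sandbox.proxy stands in for whatever fake window the sandbox provides):

// Wrap the sub-application's code in a function so its top-level variables
// live in function scope, and shadow `window`/`self` with the sandbox's fake window.
function runInSandbox(jsCode, fakeWindow) {
  const run = new Function('window', 'self', `with (window) { ${jsCode} }`);
  run(fakeWindow, fakeWindow);
}

// usage: execute the scripts collected by the html-loader
// runInSandbox(scriptText, sandbox.proxy);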

There are three main ideas:

Snapshot sandbox:

The snapshot sandbox takes a snapshot when the application is activated (on mount) and restores the original environment when it is deactivated (on unmount). For example, if app A changes a global variable window.appName = 'vue' while it is mounted, the sandbox records the snapshot (the values before modification) at activation; when app A is unmounted, it diffs the current environment against the snapshot, saves app A's modifications, and restores the original environment.

class SnapshotSandbox {
    constructor() {
        this.proxy = window;
        this.modifyPropsMap = {}; // which properties were modified
        this.active();
    }
    active() {
        this.windowSnapshot = {}; // snapshot of the window object
        for (const prop in window) {
            if (window.hasOwnProperty(prop)) {
                // record the properties currently on window
                this.windowSnapshot[prop] = window[prop];
            }
        }
        Object.keys(this.modifyPropsMap).forEach(p => {
            window[p] = this.modifyPropsMap[p];
        });
    }
    inactive() {
        for (const prop in window) { // diff
            if (window.hasOwnProperty(prop)) {
                // compare the snapshot with the current window property
                if (window[prop] !== this.windowSnapshot[prop]) {
                    // save the modification
                    this.modifyPropsMap[prop] = window[prop];
                    // restore window
                    window[prop] = this.windowSnapshot[prop];
                }
            }
        }
    }
}

let sandbox = new SnapshotSandbox();
((window) => {
    window.a = 1;
    window.b = 2;
    window.c = 3;
    console.log(a, b, c);
    sandbox.inactive();
    console.log(a, b, c);
})(sandbox.proxy);

The idea of the snapshot sandbox is simple, and it makes it easy to maintain a child application's state, but obviously it only supports single-instance scenarios, not multiple sub-applications running at the same time.

Use the iframe:

Not exactly an exciting option. Joking aside, the iframe is also hard to use here, even though it gives you a completely isolated window, document and other contexts. You can't use it directly: you have to set up communication between the iframe and the main application via postMessage, otherwise routing, among other things, simply won't work.

The Proxy sandbox:

class ProxySandbox {
    constructor() {
        const rawWindow = window;
        const fakeWindow = {};
        const proxy = new Proxy(fakeWindow, {
            set(target, p, value) {
                target[p] = value;
                return true;
            },
            get(target, p) {
                return target[p] || rawWindow[p];
            }
        });
        this.proxy = proxy;
    }
}

let sandbox1 = new ProxySandbox();
let sandbox2 = new ProxySandbox();
window.a = 1;
((window) => {
    window.a = { a: 'ldl' };
    console.log(window.a); // { a: 'ldl' }
})(sandbox1.proxy);
((window) => {
    window.a = 'world';
    console.log(window.a); // 'world'
})(sandbox2.proxy);

The proxy above is very simple: reads check the "copy" first and fall back to the original window, while writes only touch the copy. It still has plenty of problems. Even setting malicious code aside, if the sub-application reaches the global object via self, this or globalThis, the proxy is bypassed, and trapping only get and set is not enough. Most importantly, global variables are only partially isolated, and the window's native objects and methods break when called through the proxy because they lose their original binding.

function getOwnPropertyDescriptors(target: any) {
    const res: any = {}
    Reflect.ownKeys(target).forEach(key => {
        res[key] = Object.getOwnPropertyDescriptor(target, key)
    })
    return res
}

export function copyProp(target: any, source: any) {
    if (Array.isArray(target)) {
        for (let i = 0; i < source.length; i++) {
            if (!(i in target)) {
                target[i] = source[i];
            }
        }
    } else {
        const descriptors = getOwnPropertyDescriptors(source)
        // delete descriptors[DRAFT_STATE as any]
        let keys = Reflect.ownKeys(descriptors)
        for (let i = 0; i < keys.length; i++) {
            const key: any = keys[i]
            const desc = descriptors[key]
            if (desc.writable === false) {
                desc.writable = true
                desc.configurable = true
            }
            if (desc.get || desc.set)
                descriptors[key] = {
                    configurable: true,
                    writable: true,
                    enumerable: desc.enumerable,
                    value: source[key]
                }
        }
        target = Object.create(Object.getPrototypeOf(source), descriptors)
        console.log(target)
    }
}

export function copyOnWrite(draftState: {
    originalValue: {
        [key: string]: any;
    };
    draftValue: any;
    onWrite: any;
    mutated: boolean;
}) {
    const { originalValue, draftValue, mutated, onWrite } = draftState;
    if (!mutated) {
        draftState.mutated = true;
        if (onWrite) {
            onWrite(draftValue);
        }
        copyProp(draftValue, originalValue);
    }
}

The reason it's hard to build a good sandbox is the paradox that we want as much isolation as possible, yet we must not isolate completely. Somewhere along that line the two goals conflict.

The Microcosmos sandbox is implemented with a Proxy. Currently, copy-on-write is used to copy part of the window object, and window methods are still bound back to the original window, which is honestly not a great approach. If you are worried about conflicts, you can add a blacklist and whitelist to restrict sub-applications' access to certain methods, or simulate some methods and forward calls to them. Either way, it's not elegant.
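A rough sketch of the binding-plus-blacklist idea (the list contents and helper are hypothetical, not Microcosmos's actual configuration):

// Hypothetical example: fall back to the real window, binding functions so
// native methods keep their `this`, while refusing access to blacklisted keys.
const BLACKLIST = ['close', 'open']; // made-up list for illustration

function createSandboxProxy() {
  const fakeWindow = {};
  return new Proxy(fakeWindow, {
    set(target, p, value) {
      target[p] = value; // writes never touch the real window
      return true;
    },
    get(target, p) {
      if (BLACKLIST.includes(p)) return undefined;
      if (p in target) return target[p];
      const value = window[p];
      // bind native methods (addEventListener, setTimeout, ...) to the real window,
      // otherwise calling them through the proxy throws "Illegal invocation"
      return typeof value === 'function' ? value.bind(window) : value;
    },
  });
}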

I'm not happy with the implementation of this part; the code above is adapted from immer, and there is a lot to learn from that library.

If you're interested, dig into it yourself: immer

CSS isolation

Do we need to isolate CSS in our micro frontend container design?

Personally, I don't think this is something the micro frontend container should handle, because the problem has nothing to do with the micro frontend: we have run into it pretty much since CSS existed, and it already had to be considered in the SPA era. So I didn't solve CSS isolation in Microcosmos. You'll still use the same tools you would in a SPA: BEM (Block Element Modifier) naming, project-specific prefixes, CSS Modules, CSS-in-JS, and so on.

The dynamic stylesheet approach mentioned by qiankun is, to be honest, fairly uninteresting (ha, my own opinion). Loading and unloading a sub-application naturally includes loading and unloading its CSS, but that doesn't guarantee there is no conflict between a sub-application and the main application, let alone between multiple sub-applications running side by side. (Of course, they're working on other proposals now, so keep an eye on them!)

So you might ask: why not shadow DOM?

Shadow DOM is indeed inherently isolated, and many open source component libraries use it. But hanging an entire application inside a shadow DOM is still too risky; all kinds of problems can arise.

For example, prior to React 17, all events were delegated to the document element in order to reduce the number of listeners in the DOM, save memory, optimize page performance, and implement React's event dispatching. For an event triggered inside a shadow DOM, the outer listener only sees the host element as event.target, which breaks React's event dispatching.

If you don’t know React, let me explain.

React's synthetic event mechanism doesn't bind events directly to specific DOM elements. Instead, events are managed through a ReactEventListener attached to the document. When an element is clicked or another event fires, the event dispatched to the document is handled by React, which triggers the corresponding synthetic event handlers.

As for shadow DOM, scripts in the main document aren't supposed to know what's inside a shadow tree, especially if the component comes from a third-party library, so to keep the details encapsulated the browser retargets the event: when the event is observed outside the component, its target becomes the host element.

As a result, React processes the synthetic event without realizing that an element bound via JSX inside the shadow DOM was what actually triggered it.
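A tiny sketch of that retargeting behaviour (the element ids are made up):

// Hypothetical demo: a document-level listener (like React <= 16's delegate)
// cannot see the real click target inside a shadow tree.
const host = document.getElementById('micro-root'); // assumed to exist in the page
const shadow = host.attachShadow({ mode: 'open' });
shadow.innerHTML = '<button id="inner">click me</button>';

document.addEventListener('click', (e) => {
  // for listeners outside the shadow tree the event is retargeted:
  // e.target is the host element, not the inner <button>
  console.log(e.target.id); // "micro-root", never "inner"
});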

Of course, the bigger problem is that shadow DOM only separates the inside from the outside, so conflicts inside it are still possible.

life-cycle

The lifecycle is essentially one big traversal.

lifeCycle has to run on every valid route change: the registered apps are traversed, unloaded and injected as needed.

lifeCycle traverses the list of child applications and executes their lifecycle functions in turn. One small question is how the main application retrieves a child's lifecycle functions. If you're as unfamiliar with webpack as I am, you may be puzzled: the child application is packaged in UMD format, so webpack's wrapper combines the exported functions into a module object and attaches it to window, which is where we pick it up.
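Roughly, the sub-application's build and the main application's lookup might look like this (the library name and helper are illustrative, not a convention Microcosmos requires):

// Sub-application webpack output (sketch): expose the bundle as a UMD global,
// so the exported lifecycle hooks end up on window['sub-react'].
//   output: { library: 'sub-react', libraryTarget: 'umd' }

// Main application side (sketch): once the sub-app's script has run,
// pick the lifecycle hooks off the global and call them in order.
async function runSubApp(appName) {
  const { bootstrap, mount, unmount } = window[appName] || {};
  if (bootstrap) await bootstrap();
  if (mount) await mount();
  return unmount; // kept for the next route change
}

// runSubApp('sub-react');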

There's nothing wrong with lifeCycle itself, except that the version I wrote is a bit weak at handling application state across activations. For example, a fetch that's needed the first time you enter a child application shouldn't be repeated later. I handle this through function caching, but the overall lifecycle execution doesn't change at all, which doesn't feel good or elegant.
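The function caching amounts to something like the following once-style wrapper (a sketch; the helper names are made up):

// Hypothetical "run once" helper: the wrapped async work executes on the
// first call, and later calls reuse the cached promise.
function once(fn) {
  let cached;
  return (...args) => (cached ??= fn(...args));
}

// e.g. only fetch the sub-application's entry HTML on the first activation
const loadEntryOnce = once((entry) => fetch(entry).then(res => res.text()));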

One more thing to add here. We want a user click to call window.history.pushState and explicitly change the URL in the address bar, but we also need to listen for pushState in order to trigger the application-switching logic. pushState cannot be listened to directly, so we wrap window.history.pushState to dispatch a custom event, and listen for history changes through that custom event.

export function patchEventListener(event: any, listenerName: string) {
    return function (this: any) {
        const e = new Event(listenerName);
        event.apply(this, arguments);
        window.dispatchEvent(e);
    };
}
window.history.pushState = patchEventListener(window.history.pushState, "cosmos_pushState");
window.addEventListener("cosmos_pushState", routerChange);

Inter-application communication

Requirements determine implementation.

An ideal micro frontend might not even need this design because, as we said, the child application shouldn't know it is running as a child application, but that's only an ideal. There will still be some need for parent-child communication. In general, in a micro frontend the communication between parent and child applications, and between sub-applications, is low-frequency and the data volume is small.

So in Microcosmos I just used a simple publish-subscribe pattern. You might think that's too simple, and, well, it is. But it works, so just use it.

export function initCosmosStore(initData) {
    return window.MICROCOSMOS_ROOT_STORE = (function () {
        let store = initData;
        let observers: Array<Function> = [];
        function getStore() {
            return store;
        }
        function changeStore(newValue) {
            return new Promise((resolve, reject) => {
                if (newValue !== store) {
                    let oldValue = store;
                    store = newValue;
                    resolve(store);
                    observers.forEach(fn => fn(newValue, oldValue));
                }
            })
        }
        function subscribeStore(fn) {
            observers.push(fn);
        }
        return { getStore, changeStore, subscribeStore }
    })()
}

preload

Preloading is about reducing white-screen time and getting a smoother switch between applications. In some micro frontend workbench scenarios, the main application may register a dozen or more child applications, and we usually won't visit them all within a short period of time. By preloading, we can fetch the child applications' resources early and push the advantage of the micro frontend to its limit.

Note that browsers limit the number of concurrent requests per domain. The limit varies by browser, for example 6 in Chrome, so we need to slice the list of sub-applications into chunks and request them through a promise chain.
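A minimal sketch of that chunked preloading (the chunk size and helper names are illustrative):

// Hypothetical sketch: fetch sub-app entries a few at a time so we stay
// under the browser's per-domain concurrent request limit.
async function preload(apps, chunkSize = 5) {
  for (let i = 0; i < apps.length; i += chunkSize) {
    const chunk = apps.slice(i, i + chunkSize);
    // each chunk is fetched in parallel; chunks run one after another
    await Promise.all(chunk.map(app => fetch(app.entry).then(res => res.text())));
  }
}

// preload(registeredApps);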

This concludes the technical implementation of Microcosmos, though a few minor details can't all be covered here.

conclusion

Although the micro frontend architecture looks simple, there is still a long way to go before we can really call it a highly available solution. I believe that in the future we will have a more complete set of micro frontend engineering solutions, rather than being limited to containers.

PS: Module Federation, one of the upcoming webpack 5 features, enables a JavaScript application to dynamically load code from another JavaScript application while sharing dependencies. If an application consuming a federated module doesn't have a dependency required by the federated code, webpack downloads the missing dependency from the federated build's origin. Webpack makes it easier for different projects to load each other's build artifacts, so let's see what this will eventually bring to the micro frontend.
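For reference, a typical Module Federation setup might look roughly like this (app names, ports and exposed paths are made up for illustration):

// webpack.config.js of a "remote" application (sketch)
const { ModuleFederationPlugin } = require('webpack').container;

module.exports = {
  plugins: [
    new ModuleFederationPlugin({
      name: 'subApp',
      filename: 'remoteEntry.js',
      exposes: { './App': './src/App' },   // what this app offers to others
      shared: ['react', 'react-dom'],      // dependencies shared with the host
    }),
  ],
};

// In the host's config the remote is referenced as e.g.
//   remotes: { subApp: 'subApp@http://localhost:3001/remoteEntry.js' }
// and then imported lazily: import('subApp/App')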

reference

  • Microservice concept

  • Micro Frontends

  • New immer by ayqy

  • Module Federation