This is the final post of 2019, covering our experience developing and optimizing an Electron application over the past six months. There is plenty of substance here, and I hope it gives you some inspiration.

The project I'll talk about is a desktop application we rebuilt with Electron in the second half of the year. It is similar to DingTalk or WeChat Work: the main features are instant messaging, voice/video calls, and meetings. The basic functionality and interaction experience are similar to WeChat for PC (in fact, it was modeled on it). See the figure below:



Article outline

  • Why choose Electron?
  • Process model
  • Technology selection and code organization
  • Performance optimization (the good stuff)
    • 1. Performance analysis
    • 2. Optimization strategies
      • 2.1 Fighting the white screen
      • 2.2 Catching up with the native interaction experience
      • 2.3 Optimizing inter-process communication
  • There are still pitfalls
  • Extended reading


Why choose Electron?

The reason is simple: our app needs to support multiple platforms, native development is inefficient, and we don't have the resources.

After all, most teams choose Electron for similar reasons: limited resources, especially at companies stuck in the middle.

To make the most of our client development resources, "hybrid" became the theme of this year's client refactoring.

Let’s take a look at the basic architecture of our current client:


"Hybrid" means two things to us:

  1. Our application architecture is a mix of technologies: a general-purpose C/C++ layer at the bottom, platform-native layers (iOS, Android, PC, macOS), and Web technology on top
  2. Cross-platform


Based on our existing client code base and situation, the hybrid refactoring naturally splits into two directions:

  1. Sink the business logic. General-purpose core business is pushed down the stack. For example, the core modules include message processing, voice/video, conferencing, and data storage; the core protocols are XMPP and SIP. These modules change infrequently, require high performance, and must be cross-platform, so they are well suited to a C/C++ implementation.
  2. Hybrid UI. There are many options for a hybrid view layer, such as Electron, React Native, Flutter, or an HTML hybrid. We chose to start with Electron because it has a proven track record in desktop development, with many large Electron applications on the market, such as VSCode, Atom, and Slack. On the mobile side we are more conservative about React Native and Flutter and may try them later.


Now that we understand the motivation, the diagram above should make sense. It is a typical three-tier structure, very similar to MVC:

  • M — the universal hybrid layer. C/C++ encapsulates the core, general-purpose business modules and business data storage.
  • V — the UI layer. The view layer uses a cross-platform view solution (such as Electron), with native implementations for the parts that need higher performance.
  • C — the platform bridge layer. It bridges the interface of the universal hybrid layer (M) to the UI layer (V), and also exposes some platform-specific features to the UI layer. On the desktop, for example, the hybrid layer is bridged through Node native modules, which also fill in some features that Electron lacks or implements poorly.




Process model

Electron's main/renderer process model is basic knowledge: each Electron application has one main process and one or more renderer processes, one per Web page. There are also GPU processes, extension processes, and so on. Electron's basic structure is explained in Electron Application Architecture.

The main process is responsible for creating page windows, coordinating inter-process communication, and dispatching events. For security reasons, native GUI-related APIs cannot be accessed directly from the renderer process; they must be invoked in the main process via IPC. The drawback of this model is also obvious: the main process is a single point of failure. If the main process crashes or blocks, the whole application stops responding. For example, a long-running CPU task in the main process will block user-interaction events in the renderer processes.


For our application, there are currently the following processes and their responsibilities:

① Main process

  • Inter-process communication and window management
  • Global shared services
  • Things that can only, or are best, done in the main process, such as browser downloads, global shortcut handling, the tray, and sessions
  • Maintaining some necessary global state
  • The universal hybrid layer described above also runs in this process, exposing its interfaces through Node C++ addons


② Renderer processes

Responsible for rendering Web pages and handling page-specific business logic.


③ Service Worker

Responsible for caching static resources, such as web images and audio, to ensure they load reliably.




Technology selection and code organization

Let's talk about our technology choices.

  • UI framework – React
  • State management – Mobx
  • Internationalization – i18next
  • Bundling – self-developed CLI


Source code organization

```
/                        # Build resources and third-party DLLs
src/
  main/                  # 🔴 Main process code
    services/            # 📡 Global services exposed to the renderer processes via RPC
      tray.ts            # Tray state management
      shortcut.ts        # Global shortcut dispatching
      ….ts               # User configuration management
      windows.ts         # Window management
      screen-capture.ts  # Screenshots
      bridge.ts          # Bridge layer interface encapsulation
      context-menu.ts    # Right-click menus
      state.ts           # Global state management; keeps necessary global state such as theme, current language, current user...
    lib/                 # Encapsulated libraries
      bridge.ts          # Bridge layer API encapsulation
      logger.ts          # Logging
      ...
    bootstrap.ts         # Startup program
    index.ts             # 🔴 Entry file
  renderer/              # 🔴 Renderer process code
    services/            # 📡 Clients of the main process's global services
      windows.ts         # Window management client
      tray.ts
      ...
    webview/
    hooks/
    ...
    pages/               # 🔴 Pages
      Home/
        ui/              # 🔴 View code, maintained by the front-end team
        store/           # 🔴 State code, maintained by the client team
        index.tsx        # Page entry
      Settings/
      Login/
      page.json          # 🔴 Declares all pages and their configuration, similar to mini-programs
```


The eagle-eyed reader will notice that there are UI and Store directories under each page, corresponding to views and states, respectively. Why is that?

First of all, this is because the project is developed by two teams: the original native client team and our front-end team. Separating view and state has two benefits:

  • The front end does not need to care about the client's underlying business, and the client team does not need to care about how the front-end pages are implemented. Responsibilities are clear and each side does its own work.
  • Lower learning costs. We use Mobx for state management. For the client developers, only a little TypeScript knowledge is needed; for those familiar with Java or C#, it's even less of a problem. Each Store is just a simple class:
```ts
import { observable, action } from 'mobx'

// MobxStore, addListener, etc. are our own helpers, shown here for illustration
class CounterStore extends MobxStore {
  @observable
  public count: number = 0

  @action
  public incr = () => {
    this.count++
  }

  private pageReady() {
    // The page is ready. You can do some preparatory work here

    // Event listening
    // addDisposer adds the release function to a queue that is flushed when the page exits
    this.addDisposer(
      addListener('someevent', evt => {
        this.dosomething(evt)
      })
    )

    // ...
    this.initial()
  }

  private pageWillClose() {
    // The page is closing; release resources here
    releaseSomeResource()
  }

  // ...
}
```


Compared with Redux, Mobx-based stores are easier to understand with an object-oriented mindset. In this scenario, simpler is better.

Separating state from business logic simplifies the front-end page implementation: the view is just a mapping of the state. This makes our pages and components much more maintainable and reusable.




Performance optimization (the good stuff)

With the preamble out of the way, Electron performance optimization is the highlight of this article.

Electron is not a silver bullet; you can't have your cake and eat it too. Electron boosts development efficiency but brings its own drawbacks, such as the much-mocked high memory footprint and the performance gap compared with native clients. We have done a lot of work to optimize our Electron application.

Performance optimization generally consists of two steps:

  • ① Analyze and locate the problem. See React Performance Measurement and Analysis
  • ② Solve the problem. There are no more than three general directions. See The Direction of React Performance Optimization


1. Performance analysis

The best analysis tool is the Performance panel in Chrome DevTools. With the flame chart, every detail of JavaScript execution can be seen at a glance.


For the main process, once debugging is enabled you can also use the profiling tools to collect JavaScript execution information.
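For example (a sketch; the port number is arbitrary), the main process can be profiled by starting Electron with the inspector flag and attaching Chrome DevTools via chrome://inspect:

```sh
# Start with the V8 inspector enabled, then attach via chrome://inspect
# and use the Profiler/Performance panels against the main process
electron --inspect=5858 path/to/your/app
```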

If you want to analyze the execution of a piece of code, you can also use the following command to generate an analysis file and import it into Chrome Performance for analysis:

```sh
# Output CPU and heap profiling files
node --cpu-prof --heap-prof -e "require('request')"
```




2. Optimization strategies

2.1 Fighting the white screen

Even though Electron normally loads JavaScript code from the local file system and has no network latency, we still have to fight blank pages, because loading, parsing, and executing JavaScript and other resources still has a considerable cost (see The Cost of JavaScript in 2019). For a desktop application, even a slight white-screen delay can be felt by the user. We want to make sure the user never feels that this is a Web page.

The main factors affecting the Electron white screen are page window creation, static resource loading, and JavaScript parsing and execution.

We have made the following optimizations for the page white screen:


① Skeleton screen

The simplest approach: show a skeleton of the page before its resources have fully loaded, avoiding a blank screen.

You also need to set a background color or delay showing the window to avoid flickering.

VSCode skeleton screen
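A minimal sketch of the background-color and delayed-show approach mentioned above (the color and page path are placeholders):

```ts
import { BrowserWindow } from 'electron'

// Create the window hidden, with a background color close to the page's,
// and only show it once the first paint is ready, to avoid a white flash.
const win = new BrowserWindow({
  show: false,
  backgroundColor: '#2e2c29',
})

win.loadFile('index.html')
win.once('ready-to-show', () => {
  win.show()
})
```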


② Lazy loading

Core functionality is loaded first, to keep the initial load fast and let the user interact as soon as possible.



  • Code splitting + preloading: code splitting is the most common optimization. We split out hidden content and lower-priority modules, leaving only the critical path in the startup bundle. These modules can then be preloaded while the browser is idle.

  • Lazy loading of Node modules: loading and executing Node.js modules is costly: module resolution, reading the module files, then parsing and executing them. These operations are synchronous, and don't forget the node_modules black hole, where one module may pull in a huge number of dependencies… (a minimal lazy-require sketch follows this list)

    A Node server application is different from an Electron application. A server typically requires its modules at the top of the file and loads them synchronously, which is acceptable there but intolerable in an Electron user interface: startup speed and interaction jank are directly perceived by the user, who is far less tolerant of them.

    So fully evaluate the size and dependencies of each module. Alternatively, use the bundler to optimize and merge Node modules.

  • Prioritized loading: since we can't load everything up front, we load things progressively by priority. For example, when VSCode opens a file, it shows the code panel first, followed by the directory tree, the sidebar, code highlighting, the problems panel, and then it initializes the various plug-ins…
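A minimal sketch of the lazy-require idea mentioned above (lazyRequire and the archiver module are illustrative, not our actual code):

```ts
// Defer loading a heavy Node module until the first time it is needed,
// instead of paying the synchronous require() cost at startup.
function lazyRequire<T>(loader: () => T): () => T {
  let cached: T | undefined
  return () => {
    if (cached === undefined) {
      cached = loader()
    }
    return cached
  }
}

// Usage: require('archiver') only runs on the first call
const getArchiver = lazyRequire(() => require('archiver'))
```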


③ Use modern JavaScript/CSS code

Each Electron release ships with a recent version of Chrome, which is one of the nicest things for front-end developers:

  • Use the latest JavaScript features without hesitation
  • No polyfills and no runtime helpers: less code and better performance than when targeting older browsers
  • Proactively drop old dependencies and keep libraries up to date


④ Packaging optimization

Even with the latest and greatest browser, a bundler is still useful.

  • Reduce code size: modern bundlers offer many optimizations, such as Webpack's scope hoisting, tree shaking, minification, and pre-execution… These let you merge modules, shrink code size, trim redundant code, and reduce the runtime burden. (A rough config sketch follows this list.)
  • Optimize I/O: merging modules reduces the I/O round trips for module lookup and loading.
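A rough Webpack configuration sketch for the ideas above; this is illustrative, not our actual build setup:

```ts
import { Configuration } from 'webpack'

const config: Configuration = {
  mode: 'production',          // enables minification and other production defaults
  target: 'electron-renderer', // use 'electron-main' for the main process bundle
  optimization: {
    concatenateModules: true,  // scope hoisting: merge modules into fewer closures
    usedExports: true,         // tree shaking: mark unused exports for removal
    minimize: true,
  },
}

export default config
```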


⑤ V8 Snapshot or V8 Code Cache

The Atom team has written many excellent articles about their experience optimizing Atom. For example, they use V8 snapshots to optimize startup time.

This is an AOT optimization strategy. A snapshot is simply a snapshot of the heap; you can think of it as the in-memory representation of JavaScript code inside V8.

It has two advantages: it loads faster than plain JavaScript, and it is binary, so if you care about 'safety' you can convert a module into a snapshot, which is harder to 'crack'.

But it comes with more restrictions and a significant impact on architecture. For example, there must be no 'side effects' during initialization, such as DOM access, because at 'compile time' none of that exists.

This article describes how to use V8 snapshots in Electron: How Atom Uses Chromium Snapshots.


A more widely used solution is the V8 code cache. Node.js 12 started generating code caches for its built-in libraries ahead of time at build time, improving startup time by about 30%.
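To make the idea concrete, here is a minimal sketch of the code-cache mechanism using Node's vm module (the file paths are placeholders):

```ts
import * as fs from 'fs'
import * as vm from 'vm'

const source = fs.readFileSync('./bundle.js', 'utf8')
const cachePath = './bundle.cache'

// Reuse the cached compilation data from a previous run, if it exists
const cachedData = fs.existsSync(cachePath) ? fs.readFileSync(cachePath) : undefined
const script = new vm.Script(source, { cachedData })

// (Re)generate the cache when it is missing or was rejected by V8
if (!cachedData || script.cachedDataRejected) {
  fs.writeFileSync(cachePath, script.createCachedData())
}

script.runInThisContext()
```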

To dig deeper into code caching, these articles are worth reading:

  • Code caching for JavaScript developers
  • JavaScript Start-up Performance
  • Improved code caching
  • How can I speed up the startup of node.js applications




⑥ Window warm-up, window pools, and resident windows

To match the speed at which native windows open and appear, we used quite a few tricks that trade space for time.

For example, for our application's home page: while the user is on the login page, we warm the home window up in the background and prepare its resources, so that after a successful login it can render and appear immediately. The window-open delay is very short, essentially matching the native window experience.

As a bit of a hack, we move these windows off screen and set skipTaskbar to simulate hiding or closing them.
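A minimal sketch of the warm-up idea (the window name and the loaded page are assumptions, not our actual code):

```ts
import { BrowserWindow } from 'electron'

let homeWindow: BrowserWindow | null = null

// Called while the user is still on the login page: create the home window
// hidden and off the task bar, so its resources are already loaded.
export function warmUpHomeWindow() {
  homeWindow = new BrowserWindow({ show: false, skipTaskbar: true })
  homeWindow.loadFile('home.html')
}

// Called after a successful login: the window can be shown immediately.
export function showHomeWindow() {
  if (homeWindow) {
    homeWindow.setSkipTaskbar(false)
    homeWindow.show()
  }
}
```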


Windows that are frequently opened and closed can also be optimized with a window pool. For example, when a webview page is opened, a window is first taken from the pool; a new window is created only when the pool is empty. When the page is closed, its window is put back into the pool for later reuse.
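A rough, simplified sketch of such a window pool (not our exact implementation):

```ts
import { BrowserWindow } from 'electron'

const pool: BrowserWindow[] = []

// Take a window from the pool, creating one only when the pool is empty
export function acquireWindow(): BrowserWindow {
  return pool.pop() ?? new BrowserWindow({ show: false })
}

// Instead of destroying a closed page's window, hide it, reset it and pool it
export function releaseWindow(win: BrowserWindow) {
  win.hide()
  win.loadURL('about:blank')
  pool.push(win)
}
```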

In addition, resident mode can be used for generic, non-business windows such as notifications and the image viewer: once created, these windows are never released, which works even better.


⑦ Keep up with the latest version of Electron

Keep the version up to date.


2.2 Catching up with the native interaction experience

Optimizing white-screen time is just the beginning; the interaction experience while using the app is just as important. Here are some of our optimization methods:


① Static resource cache

For some network resources, we use caching to ensure they display quickly. We currently use Service Worker + Workbox: a Service Worker can intercept network requests from multiple pages, giving us a cross-page static resource cache, and it is relatively simple to implement.
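A minimal Service Worker sketch along these lines, using Workbox's modular API (newer Workbox versions; the matching rule, cache name, and expiration settings are illustrative):

```ts
import { registerRoute } from 'workbox-routing'
import { CacheFirst } from 'workbox-strategies'
import { ExpirationPlugin } from 'workbox-expiration'

// Serve images and audio cache-first across all pages controlled by this worker
registerRoute(
  ({ request }) => request.destination === 'image' || request.destination === 'audio',
  new CacheFirst({
    cacheName: 'static-media',
    plugins: [new ExpirationPlugin({ maxEntries: 200 })],
  })
)
```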

Besides Service Workers, this can also be done with protocol interception; see Electron's protocol module. We'll try that when we have time and see how it works.


② Preloading mechanism

If you've read my article "This is probably the most popular way to open React Fiber", you know how powerful requestIdleCallback is. React uses the same idea to schedule rendering tasks while making sure the browser can still respond to user interactions.

This API also matters a lot for optimizing our application. It lets us know how busy the browser is and pre-execute low-priority tasks during the browser's idle time, such as:

  • Rendering hidden tabs
  • Lazy-loading module code
  • Lazy-loading images
  • Preloading inactive conversations
  • Running low-priority tasks
  • …


For example, our wrapper around React code splitting:

```tsx
import { lazy as l, Suspense } from 'react'
// scheduleIdle and TaskSize come from our own idle-task scheduler (sketched below)

export default function lazy(factory, Fallback) {
  const Comp = l(factory)

  // Schedule preloading of the chunk for when the browser is idle
  scheduleIdle({
    name: 'LazyComponent',
    size: TaskSize.Heavy,
    task: factory,
    timeout: 2000,
  })

  return function LazyComponent(props) {
    return (
      <Suspense fallback={Fallback ? <Fallback /> : null}>
        <Comp {...props} />
      </Suspense>
    )
  } as typeof Comp
}
```


Usage:

```ts
const List = lazy(() => import('./List'))
```
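scheduleIdle above is our own helper. A rough sketch of the idea, built on requestIdleCallback (the task shape and retry logic are simplified assumptions):

```ts
interface IdleTask {
  name: string
  size?: number      // a hint about how heavy the task is
  task: () => void
  timeout?: number   // force execution after this many milliseconds
}

export function scheduleIdle({ name, size, task, timeout }: IdleTask) {
  requestIdleCallback(
    deadline => {
      // Run only when the browser has spare time, or when the timeout fires
      if (deadline.timeRemaining() > 0 || deadline.didTimeout) {
        task()
      } else {
        // No budget left in this idle period; try again later
        scheduleIdle({ name, size, task, timeout })
      }
    },
    { timeout }
  )
}
```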


③ Avoid synchronous operations

Electron can perform I/O through Node.js, but synchronous I/O must be avoided as much as possible, for example synchronous file operations and synchronous inter-process communication. These block page rendering and event handling.


④ Reduce the load on the main process

Electron's main process is very important: it is the parent of all window processes and is responsible for scheduling all kinds of resources. If the main process is blocked, the responsiveness of the whole application suffers.

You can run a simple experiment: pause the main process on a breakpoint, and you will find that every page window becomes unresponsive, even though each has its own process. This is because all user interactions are dispatched to the renderer processes by the main process; when the main process is blocked, the renderer processes cannot receive user events.

So don't let the main process do the dirty work. If something can be done in a renderer process, do it there. Avoid computation-intensive tasks and synchronous I/O in the main process.


⑤ Move CPU-intensive operations into separate processes or workers to avoid blocking the UI
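A minimal sketch of this idea using Node's worker_threads (assuming Node integration is available; the workload is a stand-in):

```ts
import { Worker, isMainThread, parentPort, workerData } from 'worker_threads'

if (isMainThread) {
  // UI side: spawn the worker and receive the result asynchronously,
  // so the expensive loop never runs on the UI thread.
  const worker = new Worker(__filename, { workerData: { n: 1e8 } })
  worker.on('message', result => console.log('sum =', result))
} else {
  // Worker side: the heavy computation runs here
  let sum = 0
  for (let i = 0; i < workerData.n; i++) sum += i
  parentPort!.postMessage(sum)
}
```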


⑥ React optimization

See The Direction of React Performance Optimization.


⑦ Give up CSS-in-JS

To squeeze out runtime performance and do as much as possible at compile time, we abandoned CSS-in-JS and write styles in plain CSS + BEM. There are two main reasons:

  • Electron ships a recent Chrome, and modern CSS is already powerful enough
  • With the window warm-up mechanism, this CSS can be parsed ahead of time, whereas CSS-in-JS styles are generated dynamically while the component renders


⑧ When there is no other way, use Node native modules

Fortunately, there is always that way out.




2.3 Optimizing inter-process communication

In an Electron application with multiple pages/windows, IPC is very frequent and can easily become a performance bottleneck.


① Do not abuse remote

The remote module provides a simple, non-intrusive way to access the main process's APIs and data. Under the hood it is based on synchronous IPC. You can learn how it works from this article.

So where are the pitfalls?

① It is synchronous. ② Properties are fetched dynamically: to guarantee you always get the latest value, the remote layer does not cache anything; every property access is dynamically fetched from the main process.

For example, expose an object in the main process:

```ts
// main process
global.foo = {
  foo: 1,
  bar: {
    baz: 2,
  },
}
```

Access it from the renderer process:

```ts
import { remote } from 'electron'

JSON.stringify(remote.getGlobal('foo'))
```

This triggers four synchronous IPC calls: getGlobal, foo, bar, and bar.baz. For complex data structures the cost is hard to bear.

Avoid remote unless you know exactly what you are doing.
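A common alternative is to fetch the data in a single asynchronous round trip instead of using remote. A minimal sketch with ipcMain.handle/ipcRenderer.invoke (available in newer Electron versions, 7+; the channel name is illustrative):

```ts
// main process: expose the data through an asynchronous handler
import { ipcMain } from 'electron'

ipcMain.handle('get-foo', () => {
  return { foo: 1, bar: { baz: 2 } }
})

// renderer process: one asynchronous IPC round trip for the whole object
import { ipcRenderer } from 'electron'

async function loadFoo() {
  return await ipcRenderer.invoke('get-foo')
}
```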




② Encapsulate your own IPC library

To optimize IPC, we built our own RPC library on top of Electron's IPC interfaces. Its main features are:

  • Asynchronous only. There is no synchronous option, so there is no way to shoot yourself in the foot
  • Message merging. Event pushes are merged and delivered in batches
  • Serialization. We pass JSON strings directly and skip Electron's own serialization, which is more complex because it has to handle special types such as Buffers
  • A consistent, easy-to-use API. The same interface supports main-to-renderer as well as renderer-to-renderer communication

Here’s an example:

```ts
import rpc from 'myrpc'

// Register a handler
rpc.registerHandler('echo', async data => {
  return data
})

// Listen for events
rpc.on('some-event', (data, source) => {
  // do something
})
```

On the calling side:

```ts
import rpc from 'myrpc'

// target is the receiving window or the main process
rpc.emit(target, 'some-event')

// Method call
const res = await rpc.callHandler(target, 'echo', 'hello-world')
```


This still isn't enough; we are continuing to optimize, and we'll share more later.




There are still pitfalls

There were plenty of pitfalls along the way. Painful, but also rewarding.

  • Window shadows and rounded corners
  • The clipboard API is not powerful enough
  • Compatibility issues
  • The main process crashes but renderer processes do not exit, leading to process 'leaks'
  • Screenshots: first implemented with Electron, which didn't work well; now it's a native implementation
  • …




Extended reading

  • A view of large IDE technical architectures from VSCode
  • Electron Performance
  • CovalenceConf 2019: Visual Studio Code — The First Second
  • Get Started With Analyzing Runtime Performance
  • electron-link
  • How to Create a V8 Heap Snapshot of a Javascript File and Use It in Electron
  • The State of Atom’s Performance
  • Improving Startup Time




Reply "ivan" to join the group chat.