
I Introduction


Some of the utility methods described in this article are at an early/testing stage and are still being optimized; they are provided for reference only.

The previous article, "Developing Records of an SMB Client Based on Electron", gave a rough overview of the core features, implementation difficulties, and packaging of the whole SMB client. This article pulls out the file shard upload module on its own and covers some feature points related to the Electron main process, the renderer process, and file upload optimization.

II The Demo


Demo address: a lite version of the project. The redundant SMB-handling logic has been removed, and file copying is used to simulate the upload process.

To run the demo, start two development environments (view -> service) and then preview the interface. Since there is no back end, files are uploaded (copied) to the Electron data directory by default (on Ubuntu, ~/.config/FileSliceUpload/runtime/upload).

# Enter the view directory
$: npm install
$: npm start

# Enter the service directory
$: npm install
$: npm start

# One-click package script - see the help
$: node build.js --help

# App packaging - for the Linux/Mac/Windows platforms
$: node build.js build-linux
$: node build.js build-mac
$: node build.js build-win

III Electron process architecture


The difference between the main and renderer processes

In Electron, the process that runs the package.json main script is called the main process. The script running in the main process displays a user interface by creating web pages, and an Electron application always has exactly one main process. The main process creates pages with BrowserWindow instances, and each BrowserWindow instance runs its page in its own renderer process. When a BrowserWindow instance is destroyed, the corresponding renderer process is terminated. The main process manages all the web pages and their corresponding renderer processes; each renderer process is independent and cares only about the web page running in it.

In normal browsers, web pages usually run in a sandbox and cannot access the operating system's native resources. In Electron, however, with the support of the Node.js API, pages can perform some low-level interactions with the operating system. Calling native GUI-related APIs from a web page is still not allowed, because manipulating native GUI resources in a web page is dangerous and can leak resources. If a web page needs to perform a GUI action, its renderer process must communicate with the main process and ask the main process to perform it.

Communication between the main process and the renderer

(Methods 1 and 2 below are native; method 3 is an external extension.)

  1. Remote calls using the remote module

The remote module provides an easy way for the renderer to communicate with the main process. Using remote, you can call methods on main-process objects without explicitly sending interprocess messages. In the example below, the renderer uses a remote call to the main process's BrowserWindow to create a window and load a web page:

/* In the renderer process (web side code) */
const { BrowserWindow } = require('electron').remote
let win = new BrowserWindow({ width: 800, height: 600 })
win.loadURL('https://github.com')

Note: remote is based on synchronous IPC (synchronous = page-blocking). Node.js's defining strengths are asynchronous calls and non-blocking IO, so remote calls are not suitable for frequent main/renderer communication or time-consuming requests; otherwise they cause serious performance problems.

  2. Communication using IPC signals

IPC provides event-triggered, two-way signal communication. The ipcRenderer in a renderer process can listen on an event channel, and send messages directly to the main process or to other renderers (the latter requires knowing the other renderers' webContents IDs). Similarly, ipcMain in the main process can listen on an event channel and send messages to any renderer process.

/* Main process */
ipcMain.on(channel, listener) // Listen for channel - asynchronous trigger
ipcMain.once(channel, listener) // Listen on channel once, delete after listener trigger - asynchronous trigger
ipcMain.handle(channel, listener) // Set the channel listener for the renderer's invoke function
ipcMain.handleOnce(channel, listener) // Set the channel listener for the renderer's invoke function, and delete the listener when triggered
browserWindow.webContents.send(channel, args); // Explicitly send information to a renderer process - asynchronously triggered


/* Render process */
ipcRenderer.on(channel, listener); // Listen for channel - asynchronous trigger
ipcRenderer.once(channel, listener); // Listen on channel once, delete after listener trigger - asynchronous trigger
ipcRenderer.sendSync(channel, args); // Send a message to the main process on a channel - synchronous trigger
ipcRenderer.invoke(channel, args); // Send a message to the main process on a channel - returns a Promise that resolves with the handler's result
ipcRenderer.sendTo(webContentsId, channel, ...args); // Send a message to another renderer - asynchronously triggered
ipcRenderer.sendToHost(channel, ...args); // Send a message to the webview of the host page - asynchronously triggered
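To make the invoke/handle pair above concrete, here is a minimal round-trip sketch. The 'read-config' channel name is made up for illustration; any value (or Promise) returned from the handler resolves the renderer's invoke call.

/* Main process */
const { ipcMain } = require('electron');

ipcMain.handle('read-config', async (event, key) => {
  // the returned value (or resolved Promise value) is sent back to the renderer
  return { key, value: process.env[key] || null };
});

/* Renderer process */
const { ipcRenderer } = require('electron');

ipcRenderer.invoke('read-config', 'NODE_ENV').then((config) => {
  console.log(config); // e.g. { key: 'NODE_ENV', value: 'development' }
});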
  3. Multidirectional communication using electron-re

electron-re is a tool I previously developed for handling Electron interprocess communication; it has been released as an npm package. Its main feature is a separate Service concept built on Electron's existing Main Process and Renderer Process concepts. A service is a background process that does not display an interface; it does not take part in UI interaction and instead serves the main process or other renderer processes. Under the hood it is a renderer window process that allows Node injection and remote calls.

For example, some Electron best practices recommend not running CPU-intensive operations in the main process. With electron-re you can put the CPU-intensive code into a separate JS file, use the BrowserService constructor to create a service instance with the path of that JS file as an argument, and then send messages between the main process, renderer processes, and service processes using the MessageChannel provided by electron-re, as in the following example code:

  • 1) The main process
const {
  BrowserService,
  MessageChannel // must be required in main.js even if you don't use it
} = require('electron-re');

const isInDev = process.env.NODE_ENV === 'dev';

// after app is ready in main process
app.whenReady().then(async () => {
    const myService = new BrowserService('app', 'path/to/app.service.js');
    const myService2 = new BrowserService('app2', 'path/to/app2.service.js');

    await myService.connected();
    await myService2.connected();

    // open devtools in dev mode for debugging
    if (isInDev) myService.openDevTools();

    // send data to a service - like the built-in ipcMain.send
    MessageChannel.send('app', 'channel1', { value: 'test1' });
    // send data to a service and return a Promise - extension method
    MessageChannel.invoke('app', 'channel2', { value: 'test1' }).then((response) => {
      console.log(response);
    });
    // listen on a channel, same as ipcMain.on
    MessageChannel.on('channel3', (event, response) => {
      console.log(response);
    });

    // handle a channel signal, same as ipcMain.handle
    // you can return data directly or return a Promise instance
    MessageChannel.handle('channel4', (event, response) => {
      console.log(response);
      return { res: 'channel4-res' };
    });
});
  • 2) app.service.js
const { ipcRenderer } = require('electron');
const { MessageChannel } = require('electron-re');

// listen on a channel, same as ipcRenderer.on
MessageChannel.on('channel1', (event, result) => {
  console.log(result);
});

// handle a channel signal, just like ipcMain.handle
MessageChannel.handle('channel2', (event, result) => {
  console.log(result);
  return { response: 'channel2-response' };
});

// send data to another service and return a promise, just like ipcRenderer.invoke
MessageChannel.invoke('app2', 'channel3', { value: 'channel3' }).then((result) => {
  console.log(result);
});

// send data to a service - like the built-in ipcRenderer.send
MessageChannel.send('app', 'channel4', { value: 'channel4' });
  • 3) app2.service.js
const { MessageChannel } = require('electron-re');

// handle a channel signal, just like ipcMain.handle
MessageChannel.handle('channel3', (event, result) => {
  console.log(result);
  return { response: 'channel3-response' };
});

// listen on a channel once, same as ipcRenderer.once
MessageChannel.once('channel4', (event, result) => {
  console.log(result);
});

// send data to the main process, just like ipcRenderer.send
MessageChannel.send('main', 'channel3', { value: 'channel3' });
// send data to the main process and return a Promise, just like ipcRenderer.invoke
MessageChannel.invoke('main', 'channel4', { value: 'channel4' });
  • 4) A renderer process window
const { ipcRenderer } = require('electron');
const { MessageChannel } = require('electron-re');
// send data to a service
MessageChannel.send('app', /* channel, args */);
MessageChannel.invoke('app2', /* channel, args */);
// send data to the main process
MessageChannel.send('main', /* channel, args */);
MessageChannel.invoke('main', /* channel, args */);

IV File upload architecture


The main control logic of file uploading is the front-end JS code living in the main window's renderer process. It is responsible for obtaining files from the system directory, generating the upload task queue, dynamically displaying upload task list details, and adding, deleting, querying, and modifying the task list. The Node.js code in the Electron main process is mainly responsible for adding, deleting, querying, and modifying the file upload task queue data, synchronizing upload tasks between memory and disk, interacting with the file system, and invoking native system components in response to control commands from the renderer process.

File upload source and upload target

  • The upload source is the FileList (an HTML5 API for simple file manipulation on the Web side) obtained from an Input component in the user interface.
  • The upload target is the SMB service of a node in a remote cluster. The Node.js npm ecosystem has limited SMB support, and no npm library was found that can upload file fragments over the SMB protocol. The plan was therefore to use the Node.js FS API to read files in segments and incrementally write the fragment data to the target address, simulating a fragmented upload, so that a single large file's upload task can be started, paused, terminated, and resumed from the interface. The solution is to connect to the back-end share with the Windows UNC command, so that a remote SMB share path such as \\[host]\[sharename]\file1 can be accessed as if it were part of the local file system. After the UNC connection, file1 can be manipulated through the Node.js FS API just like a local file. The whole process of uploading files over the SMB protocol is thus reduced to copying data from a local file to another, locally accessible SMB share path - all thanks to the Windows UNC command.
/* Connect to the remote SMB share using the unc command */
_uncCommandConnect_Windows_NT({ host, username, pwd }) {
    const { isThirdUser, nickname, isLocalUser } = global.ipcMainProcess.userModel.info;
    const commandUse = `net use \\\\${host}\\ipc$ "${pwd}" /user:"${username}"`;
    return new Promise((resolve) => {
      this.sudo.exec(commandUse).then((res) => {
        resolve({
          code: 200
        });
      }).catch((err) => {
        resolve({
          code: 600,
          result: global.lang.upload.unc_connection_failed
        });
      });
    });
  }
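Once the net use connection succeeds, the UNC share path can be handed to the fs API like any local path. A minimal sketch (the host and share names are placeholders):

const fs = require('fs');

// After `net use` has run, writing to the share is just a local-style write
fs.writeFile('\\\\192.168.1.10\\sharename\\file1', Buffer.from('hello'), (err) => {
  if (err) console.error('write to SMB share failed:', err);
});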

Upload Process Overview

The following steps describe the control logic of the entire front-end section:

  1. The page uses an <Input /> component to obtain a FileList object (in the Electron environment, each File object carries an extra path property giving the file's absolute path on the system).
  2. Cache the FileList. When the upload button is clicked, read the FileList and generate an array of self-defined file objects to store the list of upload tasks.
  3. The page calls init with the selected file information to initialize a file upload task.
  4. Node.js takes the file information attached to the init request, stores it in the in-memory upload list, and tries to open a file descriptor for the upcoming sliced upload. Finally it returns an upload task ID to the page, completing initialization on the Node.js side.
  5. In the success callback of the init request, the page stores the returned upload task ID and adds the file to the upload queue. At the right moment it starts uploading, sending an upload request to the Node.js side carrying the task ID and the current fragment index (which says which fragment of the file to upload). A sketch of this loop follows the list.
  6. Node.js receives the upload request, looks up the task information in memory by the task ID, slices the target file on the local disk using the file descriptor opened earlier and the fragment index, and finally uses the FS API to write the fragment incrementally to the target location, i.e. the locally accessible SMB share path.
  7. After the upload request succeeds, the page checks whether all fragments have been uploaded. If so, it sends a complete request to Node.js with the task ID.
  8. Node.js looks up the file information by the task ID, closes the file descriptor, and changes the task status to "upload complete".
  9. After the whole upload task list finishes, the front end sends a sync request to synchronize the upload list to the historical tasks (disk storage), indicating that all tasks in the current list are complete.
  10. After receiving the sync request, Node.js writes all upload-list information held in memory to disk and releases the memory, completing one round of list uploads.
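A minimal sketch of steps 3-8 from the renderer's point of view. The channel names ('upload-init', 'upload-slice', 'upload-complete') and payload shapes are assumptions for illustration; the real project routes these requests through its own request layer.

const { ipcRenderer } = require('electron');

async function uploadFile(file, fragsize) {
  // steps 3/4: initialize the task and receive a task id from the Node.js side
  const { result: uploadId } = await ipcRenderer.invoke('upload-init', {
    name: file.name, size: file.size, abspath: file.path, fragsize,
  });

  // steps 5/6: request one fragment at a time, carrying the task id and index
  const total = Math.ceil(file.size / fragsize);
  for (let index = 0; index < total; index += 1) {
    await ipcRenderer.invoke('upload-slice', { uploadId, index });
  }

  // steps 7/8: all fragments uploaded - ask Node.js to close the descriptor
  await ipcRenderer.invoke('upload-complete', { uploadId });
}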

The Node.js file shard management factory

  • open is called when the file is initialized; it temporarily stores the mapping between the file descriptor and the file's absolute path.
  • read is called when a fragment is uploaded; it slices the file according to a read position and read length.
  • close is called when the file upload completes; it closes the file descriptor.

All three methods take the file's absolute path as a parameter:

const fs = require('fs'); // required by the factory below

/** readFileBlock */
exports.readFileBlock = () => {

  const fdStore = {};
  const smallFileMap = {};

  return {
    /* Open the file descriptor */
    open: (path, size, minSize = 1024 * 2) => {
      return new Promise((resolve) => {
        try {
          // Small files can be read directly without opening a file descriptor
          if (size <= minSize) {
            smallFileMap[path] = true;
            return resolve({
              code: 200,
              result: { fd: null }
            });
          }
          // Open the file descriptor and map the absolute path to the fd
          fs.open(path, 'r', (err, fd) => {
            if (err) {
              console.trace(err);
              resolve({
                code: 601,
                result: err.toString()
              });
            } else {
              fdStore[path] = fd;
              resolve({
                code: 200,
                result: { fd: fdStore[path] }
              });
            }
          });
        } catch (err) {
          console.trace(err);
          resolve({
            code: 600,
            result: err.toString()
          });
        }
      });
    },
    /* Read a file block */
    read: (path, position, length) => {
      return new Promise((resolve) => {
        const callback = (err, data) => {
          if (err) {
            resolve({
              code: 600,
              result: err.toString()
            });
          } else {
            resolve({
              code: 200,
              result: data
            });
          }
        };
        try {
          // Small files are read whole; large files are read via the fd and an offset
          if (smallFileMap[path]) {
            fs.readFile(path, (err, buffer) => {
              callback(err, buffer);
            });
          } else {
            // Empty-file handling
            if (length === 0) return callback(null, '');
            fs.read(fdStore[path], Buffer.alloc(length), 0, length, position, (err, bytesRead, buffer) => {
              callback(err, buffer);
            });
          }
        } catch (err) {
          console.trace(err);
          resolve({
            code: 600,
            result: err.toString()
          });
        }
      });
    },
    /* Close the file descriptor */
    close: (path) => {
      return new Promise((resolve) => {
        try {
          if (smallFileMap[path]) {
            delete smallFileMap[path];
            resolve({ code: 200 });
          } else {
            fs.close(fdStore[path], () => {
              resolve({ code: 200 });
              delete fdStore[path];
            });
          }
        } catch (err) {
          console.trace(err);
          resolve({
            code: 600,
            result: err.toString()
          });
        }
      });
    },
    fdStore
  };
};

V Pitfalls in optimizing file-upload lag in Electron


Tuning is a big job: you need plenty of testing to find the performance bottlenecks in existing code before you can write an optimization. Finding bottlenecks in your own code is hard, because you tend to fall into your own mental ruts. Most importantly, if you don't understand the stack you are using, there is no way to optimize at all. That is why I analyzed Electron's process model above and walked through the whole upload flow in some detail.

Using the DevTools bundled with Electron for performance analysis

During file upload, open the Performance panel to record and analyze the entire process:

During file upload, use the Memory tool to take a heap snapshot and analyze memory usage at a given moment:

First attempt to fix the problem: replace the Antd Table component

After the file upload module was finished, an initial stress test added 1,000 upload tasks to the task queue, with 6 tasks uploading concurrently. Scrolling the upload list up and down stuttered badly, and the lag was not confined to any particular interface component: every operation in the current window froze. My first suspicion was the Antd Table component. Antd's Table is a very complex higher-order component that may have performance problems with large amounts of data, so I replaced it with a plain table element; to rule everything else out, the table showed only each task's name, dropping the upload progress and the rest. Surprisingly, even with the native table, the stutter did not improve at all!

Second attempt to fix the problem: remove synchronous blocking code from the Electron main process

A look at the Chromium architecture: each renderer process has a global RenderProcess object that manages communication with the parent browser process and maintains global state, while the browser process maintains a RenderProcessHost object for each renderer process to manage browser-side state and communicate with that renderer. Browser process and renderers communicate over Chromium's IPC system. During page rendering, the UI process must continually synchronize with the main process over IPC; if the main process is busy, the UI process blocks while waiting.

To sum up: if the main process continuously consumes CPU time or runs blocking synchronous IO tasks, it will block to some degree, which delays or blocks IPC between the main process and the renderer processes, and the UI naturally stutters when rendering and updating the interface.

I went through the file task management logic on the Node.js side and replaced some synchronous, blocking IO calls, such as getting a file's size, getting a file's type, and deleting a file, with the asynchronous style Node.js advocates: FS callbacks or FS Promise chains. After the change the situation did not improve noticeably, so I made a third attempt.
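A minimal sketch of the kind of change made here, using getting a file's size as the example (the function names are illustrative, not the project's actual code):

const fs = require('fs');

// Before: blocks the main process (and therefore delays IPC) until the disk call returns
function getFileSizeSync(filepath) {
  return fs.statSync(filepath).size;
}

// After: the stat runs in libuv's thread pool; the event loop stays free to service IPC
async function getFileSize(filepath) {
  const stats = await fs.promises.stat(filepath);
  return stats.size;
}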

Third attempt to fix the problem: write a Node.js process pool and move upload task management into it

This is a big change 😕

  1. A simple Node.js process pool implementation

Source: ChildProcessPool.class.js. The main logic uses the Node.js child_process module (see the documentation for specific usage) to create a specified number of child processes. When a process is requested, the pool hands out one of the created child processes in turn, so that no single process is over-used, and the required code logic then executes in that process. The abbreviated code is as follows:

const { fork } = require('child_process');
const { getRandomString } = require('./utils'); // project utility (assumed location)

class ChildProcessPool {
  constructor({ path, max = 6, cwd, env }) {
    this.cwd = cwd || process.cwd();
    this.env = env || process.env;
    this.inspectStartIndex = 5858;
    this.callbacks = {};
    this.pidMap = new Map();
    this.collaborationMap = new Map();
    this.forked = [];
    this.forkedPath = path;
    this.forkIndex = 0;
    this.forkMaxIndex = max;
  }

  /* Received data from one child process */
  dataRespond = (data, id) => { ... }

  /* Received data from all child processes */
  dataRespondAll = (data, id) => { ... }

  /* Get a process instance from the pool */
  getForkedFromPool(id = "default") {
    let forked;

    if (!this.pidMap.get(id)) {
      // create a new process
      if (this.forked.length < this.forkMaxIndex) {
        this.inspectStartIndex++;
        forked = fork(
          this.forkedPath,
          this.env.NODE_ENV === "development" ? [`--inspect=${this.inspectStartIndex}`] : [],
          {
            cwd: this.cwd,
            env: { ...this.env, id },
          }
        );
        this.forked.push(forked);
        forked.on('message', (data) => {
          const id = data.id;
          delete data.id;
          delete data.action;
          this.onMessage({ data, id });
        });
      } else {
        this.forkIndex = this.forkIndex % this.forkMaxIndex;
        forked = this.forked[this.forkIndex];
      }

      if (id !== 'default')
        this.pidMap.set(id, forked.pid);
      if (this.pidMap.size > 1000)
        console.warn('ChildProcessPool: the count of pidMap is over 1000, suggest using a unique id!');

      this.forkIndex += 1;
    } else {
      // use an existing process
      forked = this.forked.filter(f => f.pid === this.pidMap.get(id))[0];
      if (!forked) throw new Error(`Get forked process from pool failed! the process pid: ${this.pidMap.get(id)}.`);
    }

    return forked;
  }

  /**
    * onMessage [received data from a process]
    * @param  {[Any]} data [response data]
    * @param  {[String]} id [process tmp id]
    */
  onMessage({ data, id }) { ... }

  /* Send a request to one process */
  send(taskName, params, givenId = "default") {
    if (givenId === 'default') throw new Error('ChildProcessPool: prohibit the use of this id value: [default]!')

    const id = getRandomString();
    const forked = this.getForkedFromPool(givenId);
    return new Promise(resolve => {
      this.callbacks[id] = resolve;
      forked.send({ action: taskName, params, id });
    });
  }

  /* Send requests to all processes */
  sendToAll(taskName, params) { ... }
}
  • 1) The send and sendToAll methods send messages to a child process. The former, when the request is given an id, explicitly uses the process already mapped to that id and expects that process's response; the latter sends the signal to all processes in the pool and expects results back from all of them (both are for external callers).
  • 2) The dataRespond and dataRespondAll methods are the data callbacks corresponding to the two sending methods above. The former collects the callback result of one specified process in the pool; the latter collects the results of all processes in the pool (internal pool methods; callers need not care).
  • 3) getForkedFromPool fetches a process from the pool. If the pool has no child process yet, or the number of child processes is below the configured maximum, a new child process is created first, put into the pool, and then returned to the caller (internal pool method; callers need not care).
  • 4) A notable line in getForkedFromPool is: this.env.NODE_ENV === "development" ? [`--inspect=${this.inspectStartIndex}`] : []. Adding the --inspect=[port] parameter when running a script with Node.js opens a remote debugging port on the running process. Tracking the state of a multi-process program is often difficult, so this lets you debug each process separately with DevTools: open chrome://inspect/#devices in the browser, open the debug configuration, add the port number specified here, and click the blue link "Open dedicated DevTools for Node" to open a debugging window where you can set breakpoints, single-step, step out, and inspect variables - very convenient!

  2. Separate child-process communication logic from business logic

In addition, JS files loaded as child-process entry files can use my wrapped ProcessHost.class.js, which I call the process transaction manager. Its main function is to register tasks with APIs such as ProcessHost.registry(taskName, func); the main process can then fetch a process from the pool, send a request naming a task, and get back a Promise that resolves with the data the process returns. This avoids having to focus on inter-process data plumbing while writing code in the child-process file. Without the transaction manager, we would need process.send to send messages in one process and process.on('message', processor) to handle them in the other. Note that if a registered task is asynchronous, it needs to return a Promise object instead of returning data directly. The abbreviated code is as follows:

  • 1) registry is used by a child process to register its tasks with the transaction center
  • 2) unregistry cancels a task registration
  • 3) handleMessage processes messages received by the process and invokes a task according to the action parameter
class ProcessHost {
  constructor() {
    this.tasks = {};
    this.handleEvents();
    process.on('message', this.handleMessage.bind(this));
  }

  /* events listener */
  handleEvents() { ... }

  /* received message */
  handleMessage({ action, params, id }) {
    if (this.tasks[action]) {
      this.tasks[action](params)
        .then(rsp => {
          process.send({ action, error: null, result: rsp || {}, id });
        })
        .catch(error => {
          process.send({ action, error, result: error || {}, id });
        });
    } else {
      process.send({
        action,
        error: new Error(`ProcessHost: processor for action-[${action}] is not found!`),
        result: null,
        id,
      });
    }
  }

  /* registry a task */
  registry(taskName, processor) {
    if (this.tasks[taskName]) console.warn(`ProcessHost: the task-${taskName} is already registered!`);
    if (typeof processor !== 'function') throw new Error('ProcessHost: the processor must be a function!');
    this.tasks[taskName] = function(params) {
      return new Promise((resolve, reject) => {
        Promise.resolve(processor(params))
          .then(rsp => {
            resolve(rsp);
          })
          .catch(error => {
            reject(error);
          });
      });
    };
    return this;
  }

  /* unregistry a task */
  unregistry(taskName) { ... }

  /* disconnect */
  disconnect() { process.disconnect(); }

  /* exit */
  exit() { process.exit(); }
}

global.processHost = global.processHost || new ProcessHost();
module.exports = global.processHost;
  3. Using ChildProcessPool and ProcessHost together

See the full demo above for details

1) Import the process pool class in main.js (in the main process) and create a process pool instance:

  • path - the path of the script that the child processes will execute
  • max - the maximum number of child process instances the pool may create
  • env - environment variables passed to the child processes
/* main.js */
const ChildProcessPool = require('path/to/ChildProcessPool.class');

global.ipcUploadProcess = new ChildProcessPool({
  path: path.join(app.getAppPath(), 'app/services/child/upload.js'),
  max: 3, // maximum number of process instances
  env: { lang: global.lang, NODE_ENV: nodeEnv }
});

2) service.js (in the main process) example: use the process pool to send the initial shard-upload request

  /**
   * init [initialize an upload]
   * @param  {[String]} host [host name]
   * @param  {[String]} username [user name]
   * @param  {[Object]} file [file description object]
   * @param  {[String]} abspath [absolute path]
   * @param  {[String]} sharename [share name]
   * @param  {[String]} fragsize [fragment size]
   * @param  {[String]} prefix [destination upload address prefix]
   */
  init({ username, host, file, abspath, sharename, fragsize, prefix = '' }) {
    const date = Date.now();
    const uploadId = getStringMd5(date + file.name + file.type + file.size);
    let size = 0;

    return new Promise((resolve) => {
        this.getUploadPrepath
        .then((pre) => {
          /* Look here! Look here! */
          return global.ipcUploadProcess.send(
            /* process task name */
            'init-works',
            /* carried parameters */
            {
              username, host, sharename, pre, prefix, size: file.size, name: file.name, abspath, fragsize,
              record: {
                host, // host
                filename: path.join(prefix, file.name), // file name
                size, // file size
                fragsize, // fragment size
                abspath, // absolute path
                startime: getTime(new Date().getTime()), // upload start date
                endtime: '', // upload end date
                uploadId, // task id
                index: 0,
                total: Math.ceil(size / fragsize),
                status: 'uploading' // upload status
              }
            },
            /* a specified process-call id */
            uploadId
          )
        })
      .then((rsp) => {
        resolve({
          code: rsp.error ? 600 : 200,
          result: rsp.result,
        });
      }).catch(err => {
        resolve({
          code: 600,
          result: err.toString()
        });
      });
    });
  }

upload.js (under app/services/child/) is the Node.js script referenced by the path parameter passed when creating the process pool. In this script we register multiple tasks to handle messages sent through the process pool. This code logic is separated out into the child processes, where:

  • UploadStore - maintains the upload task list in memory; it supports adding, deleting, updating, and querying the list of upload tasks.
  • FileBlock - manipulates files with the FS API: opening a file descriptor, reading a Buffer from a file based on the descriptor and a shard index, closing the file descriptor, and so on. Asynchronous IO reads and writes have little performance impact either way, but to keep the whole Node.js-side upload pipeline together, this is also managed in the child process. See the source code for details.
  const fs = require('fs');
  const fsPromise = fs.promises;
  const path = require('path');

  const utils = require('./child.utils');
  const { readFileBlock, uploadRecordStore, unlink } = utils;
  const ProcessHost = require('./libs/ProcessHost.class');

  // read a file block from a path
  const fileBlock = readFileBlock();
  // maintain a shards upload queue
  const uploadStore = uploadRecordStore();

  global.lang = process.env.lang;

  /* *************** registry all tasks *************** */

  ProcessHost
    .registry('init-works', (params) => {
      return initWorks(params);
    })
    .registry('upload-works', (params) => {
      return uploadWorks(params);
    })
    .registry('close', (params) => {
      return close(params);
    })
    .registry('record-set', (params) => {
      uploadStore.set(params);
      return { result: null };
    })
    .registry('record-get', (params) => {
      return uploadStore.get(params);
    })
    .registry('record-get-all', (params) => {
      return (uploadStore.getAll(params));
    })
    .registry('record-update', (params) => {
      uploadStore.update(params);
      return ({ result: null });
    })
    .registry('record-remove', (params) => {
      uploadStore.remove(params);
      return { result: null };
    })
    .registry('record-reset', (params) => {
      uploadStore.reset(params);
      return { result: null };
    })
    .registry('unlink', (params) => {
      return unlink(params);
    });


  /* *************** upload logic *************** */

  /* Upload initialization work */
  function initWorks({ username, host, sharename, pre, prefix, name, abspath, size, fragsize, record }) {
    const remotePath = path.join(pre, prefix, name);
    return new Promise((resolve, reject) => {
      new Promise((reso) => fsPromise.unlink(remotePath).then(reso).catch(reso))
      .then(() => {
        const dirs = utils.getFileDirs([path.join(prefix, name)]);
        return utils.mkdirs(pre, dirs);
      })
      .then(() => fileBlock.open(abspath, size))
      .then((rsp) => {
        if (rsp.code === 200) {
          const newRecord = {
            ...record,
            size, // file size
            remotePath,
            username,
            host,
            sharename,
            startime: utils.getTime(new Date().getTime()), // upload start date
            total: Math.ceil(size / fragsize),
          };
          uploadStore.set(newRecord);
          return newRecord;
        } else {
          throw new Error(rsp.result);
        }
      })
      .then(resolve)
      .catch(error => { reject(error.toString()); });
    });
  }
  ...

Fourth attempt to fix the problem: review the renderer front-end code

  • The third optimization still produced no obvious improvement, so I began to suspect the front-end code itself was directly causing the renderer to lag; after all, the front end does not use lazy loading to improve the rendering of the upload list. (I had previously dismissed this suspicion, because the front-end code fully reused the shard-upload logic developed earlier for browser-based object store uploads, where no interface lag had been observed, which was strange.) Setting preconceptions aside: Electron is a little different from the browser environment, and it turned out the front-end code did have a problem.
  • After a closer look at code logic that might consume CPU, I found a function for refreshing the upload task list, refreshTasks. Its main logic traverses the entire array of raw file objects that have not yet been uploaded and picks a fixed number of files (depending on the number of simultaneous upload tasks) to put into the pending-upload list. I realized that once the number of pending files equals the number of concurrent upload tasks, there is no need to traverse the rest of the array. It was the absence of this check that made refreshTasks, a frequently called function, execute the inner loop-body logic thousands of extra times per call (the execution count grows as O(n), where n is the number of tasks in the current task list).
  • After adding one line of early-exit logic, the roughly 1,000 upload tasks that previously caused lag could be increased to about 10,000. There is still a little stutter, but not so much that the app is unusable, and there is room for further optimization!
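A minimal sketch of the early-exit fix described above (the names are illustrative; the real refreshTasks lives in the renderer code):

function refreshTasks(allFiles, uploadingCount, maxConcurrent) {
  const pending = [];
  for (const file of allFiles) {
    if (file.uploaded || file.queued) continue;
    pending.push(file);
    // the added check: stop traversing once enough tasks are selected,
    // instead of scanning the whole array on every refresh
    if (uploadingCount + pending.length >= maxConcurrent) break;
  }
  return pending;
}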

VI Conclusion


This was my first time applying Electron in a real project, and there were plenty of problems: communication between the renderer and main processes, cross-platform compatibility, multi-platform packaging, window management... In short, I gained a lot of experience and distilled some general solutions. Electron has a great many applications and is another milestone for front-end developers crossing over into full desktop software development. But it requires a change of mindset: plain front-end code mostly deals with simple interface logic and small amounts of data, and touches relatively little knowledge of files, system operations, processes, threads, and native interaction. Learning more about computer operating systems, code design patterns, and basic algorithm optimization will make you that much more qualified for Electron desktop development!