The following questions are frequently asked in interviews at Tencent, Alibaba, NetEase, Ele.me, Meituan, Pinduoduo, Baidu, and other large companies.

How to write a great resume

A resume is not a running log of everything you have done. It is a document that shows employers your strengths.

I occasionally offer a paid resume-review service, so I have seen quite a few resumes. Many of them share the following problems:

  • They list personal virtues such as a "sunny personality", which employers rarely pay attention to
  • The personal-skills section takes up half a page, and every item reads the same
  • Project experience reads like "I used such-and-such API to implement a feature"
  • The resume runs to so many pages that no one can finish reading it

Hiring managers have seen countless resumes like this, and they cannot spot anything that makes you stand out. Unless you have worked at a big company or have a strong educational background or technology stack, getting an interview is largely a matter of luck.

Here are some of the tips I often give people to revise their resumes:

Keep your resume to two or fewer pages

  • Note the case of technical nouns
  • Highlight your personal strengths and expand on them. For example, how you found a bug in a project and how you fixed it; how a performance problem was discovered, how it was solved, and how much performance improved; or why you made a particular technology choice, what its purpose was, and what advantages it had over the alternatives. The general idea is not to write a running account but to highlight your ability to solve problems and think independently within a project.
  • Be careful with the words "familiar with" and "proficient in"; don't dig yourself a hole
  • Make sure you have something to say about every technical point you list, so you don't lose points when the interviewer asks about one and all you can say is "I just call the API."

Follow the advice above, and include a short cover letter when you send out your resume; it should help a lot in your job search.

JS related

What is variable hoisting?

When JS code runs, execution contexts are created. Code that is not inside any function runs in the global execution context, and code inside a function runs in a function execution context; these are the only two kinds of execution context.

Let’s take a look at an old cliche, var

b() // call b
console.log(a) // undefined

var a = 'Hello world'

function b() {
    console.log('call b')
}

The output above should be clear to you: functions and variables are hoisted. Hoisting is usually explained as "the declarations are moved to the top of the scope", which is easy to understand and not wrong, but a more accurate description is that creating an execution context has two phases. In the first, the creation phase, the JS interpreter finds the variables and functions that need to be hoisted and allocates memory for them in advance: functions are stored in memory in their entirety, while variables are declared and initialized to undefined. In the second phase, the execution phase, the code can therefore use them before the lines that declare them.

During hoisting, a later function declaration with the same name overrides an earlier one, and function declarations take precedence over variable declarations.

b() // call b second

function b() {
    console.log('call b first')
}
function b() {
    console.log('call b second')
}
var b = 'Hello world'

var causes many bugs, so ES6 introduced let. A let variable cannot be used before its declaration, but that does not mean let is not hoisted. let is hoisted as well: memory is set aside for it in the creation phase, but because of how the declaration works (the temporal dead zone) it cannot be accessed before the declaration line.
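For example, accessing a let variable before its declaration throws:

console.log(a) // ReferenceError: a cannot be accessed before initialization
let a = 'Hello world'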

Bind, call, and apply are different

Let’s start with the difference between the first two.

Both call and apply are used to change what this points to. They do the same thing but differ in how arguments are passed.

Besides the first argument, call accepts a list of arguments, while apply accepts a single array of arguments.

let a = {
    value: 1
}
function getValue(name, age) {
    console.log(name)
    console.log(age)
    console.log(this.value)
}
getValue.call(a, 'yck', '24')
getValue.apply(a, ['yck', '24'])

bind does the same thing as the other two methods, except that it returns a new function instead of calling it immediately. We can also use bind to implement currying.
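For example, a small currying sketch with bind (the add function here is just an illustration):

function add(a, b) {
  return a + b
}
// bind presets the first argument, producing a curried function
const addTen = add.bind(null, 10)
console.log(addTen(5)) // 15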

How to implement a bind function

There are several ways to think about implementing the following functions

  • If the first argument is not passed, it defaults to window
  • It changes what this points to so that the new object can execute the function. So the idea could be: add the function as a property of that object, call it, and then delete the property afterwards
Function.prototype.myBind = function (context) {
  if (typeof this !== 'function') {
    throw new TypeError('Error')
  }
  var _this = this
  var args = [...arguments].slice(1)
  // Return a function
  return function F() {
    // Because we return a function, it may be called with new F()
    if (this instanceof F) {
      return new _this(...args, ...arguments)
    }
    return _this.apply(context, args.concat(...arguments))
  }
}

How to implement a call function

Function.prototype.myCall = function (context) {
  var context = context || window
  // Add an attribute to the context
  // getValue.call(a, 'yck', '24') => a.fn = getValue
  context.fn = this
  // Fetch the arguments after the context
  var args = [...arguments].slice(1)
  // getValue.call(a, 'yck', '24') => a.fn('yck', '24')
  var result = context.fn(...args)
  // Delete fn
  delete context.fn
  return result
}

How do I implement an apply function

Function.prototype.myApply = function (context) {
  var context = context || window
  context.fn = this

  var result
  // We need to determine whether to store the second parameter
  // If so, expand the second argument
  if (arguments[1]) {
    result = context.fn(...arguments[1])
  } else {
    result = context.fn()
  }

  delete context.fn
  return result
}

What about the prototype chain?

Every function has a prototype property (except the function returned by Function.prototype.bind()), which points to the function's prototype object.

Every object has a __proto__ property, which points to the prototype of the constructor that created the object. This property actually corresponds to the internal [[Prototype]] slot, which we cannot access directly, so we use __proto__ to access it.

An object can look up properties it does not own itself through __proto__; objects linked in this way form the prototype chain.
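A quick example of the chain:

function Foo() {}
var foo = new Foo()

console.log(foo.__proto__ === Foo.prototype)              // true
console.log(Foo.prototype.__proto__ === Object.prototype) // true
console.log(Object.prototype.__proto__)                   // null, the end of the chain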

If you want to take a closer look at the prototype, you can read through the difficulties in the prototype in depth.

How to determine the object type?

  • You can use Object.prototype.toString.call(xx), which returns a string of the form [object Type] (see the example after this list)
  • instanceof can correctly determine an object's type, because its internal mechanism checks whether the constructor's prototype can be found in the object's prototype chain
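For example:

console.log(Object.prototype.toString.call([]))   // '[object Array]'
console.log(Object.prototype.toString.call(null)) // '[object Null]'
console.log([] instanceof Array)                  // true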

Characteristic of the arrow function

function a() {
    return () => {
        return () => {
            console.log(this)
        }
    }
}
console.log(a()()())

Arrow functions do not have a this of their own; the this inside an arrow function depends only on the this of the closest enclosing non-arrow function. In this example, this is window, because a() is called as a plain function (the default binding described in the next section). Also, once this is bound to a context, it will not be changed by any code.

This

This is a concept that many people get confused about, but it’s not difficult at all. You just need to remember a few rules.

function foo() {
	console.log(this.a)
}
var a = 1
foo()

var obj = {
	a: 2,
	foo: foo
}
obj.foo()

// In both cases, 'this' depends only on the object before the function is called, and the priority of the second case is greater than that of the first

// new binding has the highest priority: this is bound to c and cannot be changed by anything else
var c = new foo()
c.a = 3
console.log(c.a)

// You can also use call, apply, or bind to change this; their priority is just below that of new

Advantages and disadvantages of async and await

The advantage of async/await is that it flattens chains of then calls, making the code clearer and more readable than using Promises directly. The downside is that misusing await can cause performance problems: await blocks the code after it even when that code does not depend on the awaited result, so the code loses concurrency.
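For example, a sketch of the difference (the /api/a and /api/b endpoints are hypothetical):

// Sequential: the second request waits for the first even though they are independent
async function sequential() {
  const a = await fetch('/api/a')
  const b = await fetch('/api/b')
  return [a, b]
}

// Concurrent: start both requests first, then await them together
async function concurrent() {
  const [a, b] = await Promise.all([fetch('/api/a'), fetch('/api/b')])
  return [a, b]
}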

Let’s look at a code that uses await.

var a = 0
var b = async () => {
  a = a + await 10
  console.log('2', a) // -> '2' 10
  a = (await 10) + a
  console.log('3', a) // -> '3' 20
}
b()
a++
console.log('1', a) // -> '1' 1

For the above code you may be confused, here explains the principle

  • Function b starts executing first. Before it reaches await 10, the variable a is still 0; because await is implemented with generators internally, and generators preserve things on the stack, the value a = 0 is saved at this point
  • Because await is an asynchronous operation, encountering it immediately returns a pending Promise and temporarily hands back control, so the code outside the function continues and console.log('1', a) runs first
  • Once the synchronous code has finished, the asynchronous code resumes with the saved value, so a = 0 + 10 = 10
  • The remaining code then executes normally

Principle of the generator

Generator is a new syntax in ES6 and, like Promise, can be used for asynchronous programming

// Use * to indicate that this is a Generator function
// Internally, the code can be suspended by yield
// Restore execution by calling next
function* test() {
  let a = 1 + 2;
  yield 2;
  yield 3;
}
let b = test();
console.log(b.next()); // > { value: 2, done: false }
console.log(b.next()); // > { value: 3, done: false }
console.log(b.next()); // > { value: undefined, done: true }

From the code above we can see that calling a function marked with * returns an object with a next method; each call to next resumes the suspended code. Below is a simple simulation of how a Generator function works.

// cb is the compiled test function
function generator(cb) {
  return (function() {
    var object = {
      next: 0,
      stop: function() {}
    };
    return {
      next: function() {
        var ret = cb(object);
        if (ret === undefined) return { value: undefined, done: true };
        return {
          value: ret,
          done: false
        };
      }
    };
  })();
}
// If you compile with Babel, you can see that the test function looks like this
function test() {
  var a;
  return generator(function(_context) {
    while (1) {
      switch ((_context.prev = _context.next)) {
        // It can be found that the code is divided into several pieces by yield
        // Execute a block of code each time the next function is executed
        // And indicate which block of code needs to be executed next
        case 0:
          a = 1 + 2;
          _context.next = 4;
          return 2;
        case 4:
          _context.next = 6;
          return 3;
		// Finish
        case 6:
        case "end":
          return _context.stop();
      }
    }
  });
}

Promise

Promise is a new syntax for ES6 that solves the problem of callback hell.

Think of a Promise as a state machine. The resolve and Reject functions change the pending state to resolved or Rejected. Once the state changes, it cannot be changed again.

The then function returns a Promise instance, and the return value is a new instance rather than the previous one. Since the Promise specification states that no state except pending can be changed, multiple THEN calls are meaningless if the same instance is returned.

For then, you can essentially think of it as a flatMap
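For example:

new Promise((resolve) => {
  resolve(1)
})
  .then((v) => v + 1)          // returns a new Promise resolved with 2
  .then((v) => console.log(v)) // 2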

How to implement a Promise

// Three states
const PENDING = "pending";
const RESOLVED = "resolved";
const REJECTED = "rejected";
// Promise receives an argument to a function that executes immediately
function MyPromise(fn) {
  let _this = this;
  _this.currentState = PENDING;
  _this.value = undefined;
  // Used to save the callbacks registered in then; they are cached
  // only while the promise is pending, and each instance caches at most one
  _this.resolvedCallbacks = [];
  _this.rejectedCallbacks = [];

  _this.resolve = function (value) {
    if (value instanceof MyPromise) {
      // If value is a Promise, execute recursively
      return value.then(_this.resolve, _this.reject)
    }
    setTimeout(() => { // Execute asynchronously to guarantee execution order
      if (_this.currentState === PENDING) {
        _this.currentState = RESOLVED;
        _this.value = value;
        _this.resolvedCallbacks.forEach(cb => cb());
      }
    })
  };

  _this.reject = function (reason) {
    setTimeout(() => { // Execute asynchronously to guarantee execution order
      if (_this.currentState === PENDING) {
        _this.currentState = REJECTED;
        _this.value = reason;
        _this.rejectedCallbacks.forEach(cb => cb());
      }
    })
  }

  // Used to handle cases like:
  // new Promise(() => throw Error('error'))
  try {
    fn(_this.resolve, _this.reject);
  } catch (e) {
    _this.reject(e);
  }
}

MyPromise.prototype.then = function (onResolved, onRejected) {
  var self = this;
  // Specification 2.2.7, then must return a new promise
  var promise2;
  // Specification 2.2. OnResolved and onRejected are both optional parameters
  // If the type is not a function, it needs to be ignored
  // Promise.resolve(4).then().then((value) => console.log(value))
  onResolved = typeof onResolved === 'function' ? onResolved : v => v;
  onRejected = typeof onRejected === 'function' ? onRejected : r => { throw r };

  if (self.currentState === RESOLVED) {
    return (promise2 = new MyPromise(function (resolve, reject) {
      // Per the spec, onResolved and onRejected must run asynchronously,
      // so they are wrapped in setTimeout
      setTimeout(function () {
        try {
          var x = onResolved(self.value);
          resolutionProcedure(promise2, x, resolve, reject);
        } catch (reason) {
          reject(reason);
        }
      });
    }));
  }

  if (self.currentState === REJECTED) {
    return (promise2 = new MyPromise(function (resolve, reject) {
      setTimeout(function () {
        // Execute onRejected asynchronously
        try {
          var x = onRejected(self.value);
          resolutionProcedure(promise2, x, resolve, reject);
        } catch (reason) {
          reject(reason);
        }
      });
    }));
  }

  if (self.currentState === PENDING) {
    return (promise2 = new MyPromise(function (resolve, reject) {
      self.resolvedCallbacks.push(function () {
        // Wrap in try/catch because the callback may throw
        try {
          var x = onResolved(self.value);
          resolutionProcedure(promise2, x, resolve, reject);
        } catch (r) {
          reject(r);
        }
      });
      self.rejectedCallbacks.push(function () {
        try {
          var x = onRejected(self.value);
          resolutionProcedure(promise2, x, resolve, reject);
        } catch (r) {
          reject(r);
        }
      });
    }));
  }
};

// Spec 2.3
function resolutionProcedure(promise2, x, resolve, reject) {
  // Spec 2.3.1: x must not be the same object as promise2, to avoid circular references
  if (promise2 === x) {
    return reject(new TypeError("Error"));
  }
  // Spec 2.3.2
  // If x is a Promise and its state is pending, wait for it to settle
  if (x instanceof MyPromise) {
    if (x.currentState === PENDING) {
      x.then(function (value) {
        // Call the function again to confirm what x resolves with:
        // if it is a primitive value, resolve again
        // and pass the value on to the next then
        resolutionProcedure(promise2, value, resolve, reject);
      }, reject);
    } else {
      x.then(resolve, reject);
    }
    return;
  }
  // Spec 2.3.3.3.3
  // Only the first of resolve/reject takes effect; later calls are ignored
  let called = false;
  // Check whether x is an object or a function
  if (x !== null && (typeof x === "object" || typeof x === "function")) {
    // Spec 2.3.3.2: if retrieving x.then throws, reject
    try {
      // Spec 2.3.3.1
      let then = x.then;
      // If then is a function, call it with x as this
      if (typeof then === "function") {
        // Spec 2.3.3.3
        then.call(
          x,
          y => {
            if (called) return;
            called = true;
            // Spec 2.3.3.3.1
            resolutionProcedure(promise2, y, resolve, reject);
          },
          e => {
            if (called) return;
            called = true;
            reject(e);
          }
        );
      } else {
        // Spec 2.3.3.4
        resolve(x);
      }
    } catch (e) {
      if (called) return;
      called = true;
      reject(e);
    }
  } else {
    // Spec 2.3.4: x is a primitive value
    resolve(x);
  }
}

The difference between == and ===, where is == used

In the == comparison rules, ToPrimitive means converting an object to a primitive type.

[] == ![] // -> true

// ![] converts [] to a boolean (true) and negates it to false
[] == false
// Per rule 8
[] == ToNumber(false)
[] == 0
// Per rule 10
ToPrimitive([]) == 0
// [].toString() -> ''
'' == 0
// Per rule 6
0 == 0 // -> true

=== checks that both the type and the value are the same. In development, a status code returned by the back end is one of the few places where == is acceptable.

Garbage collection

V8 implements precise GC and uses a generational garbage collection mechanism, so it divides heap memory into a new generation and an old generation.

New generation algorithm

The Scavenge GC algorithm is used for objects in the new generation.

In the new generation, memory is divided into two spaces: From and To. One of the two is always in use and the other is free. Newly allocated objects are placed in the From space; when the From space is full, a new-generation GC starts. The algorithm checks which objects in the From space are still alive, copies them into the To space, and destroys the dead objects. When copying is complete, the From and To spaces swap roles and the GC is finished.

Old generation algorithm

Objects in the old generation are generally long-lived and numerous. Two algorithms are used: mark-sweep and mark-compact.

Before discussing the algorithms, let's look at when an object gets moved into the old generation:

  • Promotion: if an object in the new generation has already survived a Scavenge collection, it is moved from the new generation into the old generation.
  • If objects occupy more than 25% of the To space, they are moved into the old generation so that future memory allocation is not affected.

The old generation is more complex; V8 divides the heap into the following spaces:

enum AllocationSpace {
  // TODO(v8:7464): Actually map this space's memory as read-only.
  RO_SPACE,    // Immutable object space
  NEW_SPACE,   // The space used for the GC replication algorithm
  OLD_SPACE,   // Old generation resident object space
  CODE_SPACE,  // Old code object space
  MAP_SPACE,   // The old map object
  LO_SPACE,    // Old generation large space object
  NEW_LO_SPACE,  // New generation of large space objects

  FIRST_SPACE = RO_SPACE,
  LAST_SPACE = NEW_LO_SPACE,
  FIRST_GROWABLE_PAGED_SPACE = OLD_SPACE,
  LAST_GROWABLE_PAGED_SPACE = MAP_SPACE
};

In the old generation, the mark-sweep algorithm is started in the following cases:

  • When a space has no free chunks left to allocate from
  • When the number of objects in the space exceeds a certain limit
  • When the space cannot guarantee that objects from the new generation can be moved into the old generation

In this phase, all objects in the heap are traversed, live objects are marked, and when marking is complete all unmarked objects are destroyed. Marking a large heap can take hundreds of milliseconds for a single pass, which causes noticeable performance problems. To address this, V8 switched in 2011 from stop-the-world marking to incremental marking: the GC splits the marking work into smaller chunks and lets the JS application logic run for a while between chunks, so the application is never paused for long. In 2018 there was another major advance in GC technology, concurrent marking, which allows the GC to scan and mark objects while JS continues to run; you can read more about it in the linked blog post.

Sweeping objects leaves the heap memory fragmented; when fragmentation exceeds a certain limit, the compaction algorithm starts. During compaction, live objects are moved toward one end of the space; once all objects have been moved, the memory beyond them is freed.

Closures

The definition of a closure is simple: function A returns a function B, and B uses a variable of A; function B is then called a closure.

function A() {
  let a = 1
  function B() {
      console.log(a)
  }
  return B
}

You may wonder: function A has already been popped off the call stack, so why can function B still reference a variable of A? Because that variable is stored on the heap. Modern JS engines use escape analysis to decide which variables should be stored on the heap and which on the stack.

Using a closure in a loop to solve the problem caused by declaring the counter with var

for ( var i=1; i<=5; i++) {
	setTimeout( function timer() {
		console.log( i );
	}, i*1000 );
}

First, because setTimeout is asynchronous, the loop finishes running first; by then i is 6, so the code prints a string of 6s.

There are three solutions. The first uses a closure:

for (var i = 1; i <= 5; i++) {
  (function(j) {
    setTimeout(function timer() {
      console.log(j);
    }, j * 1000);
  })(i);
}

The second is to use the third argument of setTimeout

for ( var i=1; i<=5; i++) {
	setTimeout( function timer(j) {
		console.log( j );
	}, i*1000, i);
}

The third is to use let to declare i

for ( let i=1; i<=5; i++) {
	setTimeout( function timer() {
		console.log( i );
	}, i*1000 );
}

Because let in a for loop creates a new block-level scope for each iteration, which is roughly equivalent to:

{ // Form a block-level scope
  let i = 0
  {
    let ii = i
    setTimeout( function timer() {
        console.log( ii );
    }, i*1000 );
  }
  i++
  {
    let ii = i
  }
  i++
  {
    let ii = i
  }
  ...
}

The storage difference between basic data types and reference types

The former is stored on the stack and the latter on the heap
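For example:

let a = 1
let b = a        // the primitive value is copied
b = 2
console.log(a)   // 1

let c = { n: 1 }
let d = c        // the reference is copied, so both point at the same heap object
d.n = 2
console.log(c.n) // 2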

What’s the difference between Browser Eventloop and Node

It is well known that JS is a non-blocking single-threaded language, since it was originally created to interact with browsers. If JS is a multithreaded language, we might have problems dealing with DOM in multiple threads (adding nodes in one thread and removing nodes in another), but we can introduce read/write locks to solve this problem.

As JS runs, execution contexts are created and pushed onto the execution stack in order. When asynchronous code is encountered, it is suspended and added to a Task queue (there are several kinds). Once the execution stack is empty, the Event Loop takes the next task out of the Task queue and pushes it onto the execution stack to run, so asynchronous behavior in JS is ultimately still driven by synchronous execution.

console.log('script start');

setTimeout(function() {
  console.log('setTimeout');
}, 0);

console.log('script end');

The code above is asynchronous even though the setTimeout delay is 0. One reason is that the HTML5 standard requires the second argument of setTimeout to be no less than 4 ms; smaller values are automatically increased. So setTimeout still prints after script end.

Different Task sources are allocated to different Task queues. Task sources can be divided into microTask and macroTask. In the ES6 specification, microTasks are called Jobs and macroTasks are called Tasks.

console.log('script start');

setTimeout(function() {
  console.log('setTimeout');
}, 0);

new Promise((resolve) => {
    console.log('Promise')
    resolve()
}).then(function() {
  console.log('promise1');
}).then(function() {
  console.log('promise2');
});

console.log('script end');
// script start => Promise => script end => promise1 => promise2 => setTimeout

Although setTimeout is written before the Promise, the Promise callbacks are microtasks while setTimeout is a macrotask, which produces the order printed above.

Microtasks include process.nextTick, Promise, Object.observe, and MutationObserver

Macro tasks include Script, setTimeout, setInterval, setImmediate, I/O, and UI Rendering

A common misconception is that microtasks always run before macrotasks, but that is not accurate. Because the script itself counts as a macrotask, the browser first runs one macrotask (the script) and then runs any queued microtasks produced by its asynchronous code.

So the correct sequence of an Event loop would look like this

  1. Execute the synchronization code, which is a macro task
  2. If the execution stack is empty, query whether microtasks need to be executed
  3. Perform all microtasks
  4. Render the UI if necessary
  5. The next Event loop is started, executing the asynchronous code in the macro task

According to the above Event loop order, if the asynchronous code in the macro task has a large number of calculations and needs to manipulate the DOM, we can put the manipulation DOM into the microtask for faster interface response.

Event loop in Node

The Event loop in Node is different from that in the browser.

Node’s Event loop is divided into six stages that are run repeatedly in sequence

   ┌───────────────────────────┐
┌─>│           timers          │
│  └─────────────┬─────────────┘
│  ┌─────────────┴─────────────┐
│  │       I/O callbacks       │
│  └─────────────┬─────────────┘
│  ┌─────────────┴─────────────┐
│  │       idle, prepare       │
│  └─────────────┬─────────────┘      ┌───────────────┐
│  ┌─────────────┴─────────────┐      │   incoming:   │
│  │           poll            │<─────┤  connections, │
│  └─────────────┬─────────────┘      │   data, etc.  │
│  ┌─────────────┴─────────────┐      └───────────────┘
│  │           check           │
│  └─────────────┬─────────────┘
│  ┌─────────────┴─────────────┐
└──┤      close callbacks      │
   └───────────────────────────┘

timer

The timers phase executes setTimeout and setInterval

The time specified by a timer is not the exact time; instead, the callback is executed as soon as possible after that time, which may be delayed because the system is executing other transactions.

The delay has a valid range of [1, 2147483647]; if the value is outside this range, it is set to 1.

I/O

The I/O phase performs callbacks except for the close event, timer, and setImmediate

idle, prepare

The idle and prepare phases are used only internally by Node.

poll

The poll phase is important, because in this phase, the system is doing two things

  1. Run timers whose scheduled time has arrived
  2. Execute events in the poll queue

When there is no timer due, the following happens:

  • If the poll queue is not empty, its callbacks are executed synchronously one after another until the queue is empty or the system limit is reached
  • If the poll queue is empty, two things happen
    • If there are setImmediate callbacks to run, the poll phase stops and moves on to the check phase to execute them
    • If there are no setImmediate callbacks, the poll phase waits for callbacks to be added to the queue and executes them immediately

If a timer becomes due, the loop wraps back around to the timers phase to run its callback.

check

The check phase performs setImmediate

close callbacks

The Close Callbacks phase executes the close event

Also, in Node the execution order of timers can be nondeterministic in some cases:

setTimeout(() => {
    console.log('setTimeout');
}, 0);
setImmediate(() => {
    console.log('setImmediate');
})
// This might print setTimeout then setImmediate
// or the opposite, depending on timing
// If entering the event loop takes less than 1 ms, setImmediate runs first
// Otherwise setTimeout runs first

In the following case, however, the order is always the same:

var fs = require('fs')

fs.readFile(__filename, () => {
    setTimeout(() => {
        console.log('timeout');
    }, 0);
    setImmediate(() => {
        console.log('immediate');
    });
});
// Because the readFile callback runs in the poll phase
// the loop finds a pending setImmediate callback and jumps straight to the check phase to run it
// then runs setTimeout in the timers phase
// so the output is always setImmediate, then setTimeout

Macrotask execution is described above, and microTask is executed immediately after each of these phases.

setTimeout(() => {
    console.log('timer1')

    Promise.resolve().then(function() {
        console.log('promise1')
    })
}, 0)

setTimeout(() => {
    console.log('timer2')

    Promise.resolve().then(function() {
        console.log('promise2')
    })
}, 0)

// The code above prints differently in the browser and in Node
// The browser always prints timer1, promise1, timer2, promise2
// Node may print timer1, timer2, promise1, promise2
// or timer1, promise1, timer2, promise2

Process. nextTick in Node executes before any other microTask.

setTimeout(() => {
  console.log("timer1");

  Promise.resolve().then(function() {
    console.log("promise1");
  });
}, 0);

process.nextTick(() => {
  console.log("nextTick");
});
// nextTick, timer1, promise1

setTimeout timer drift

JS is single-threaded, so setTimeout drift cannot be completely eliminated; the delay can grow for many reasons, such as long-running callbacks or other events in the browser. That is why a timer becomes less and less accurate the longer a page stays open. We can, however, reduce the error.

Below is a relatively accurate countdown implementation.

var period = 60 * 1000 * 60 * 2
var startTime = new Date().getTime();
var count = 0
var end = new Date().getTime() + period
var interval = 1000
var currentInterval = interval

function loop() {
  count++
  var offset = new Date().getTime() - (startTime + count * interval); // Elapsed time for code execution
  var diff = end - new Date().getTime()
  var h = Math.floor(diff / (60 * 1000 * 60))
  var hdiff = diff % (60 * 1000 * 60)
  var m = Math.floor(hdiff / (60 * 1000))
  var mdiff = hdiff % (60 * 1000)
  var s = mdiff / (1000)
  var sCeil = Math.ceil(s)
  var sFloor = Math.floor(s)
  currentInterval = interval - offset // Get the time for the next loop
  console.log('Hours: ' + h, 'Minutes: ' + m, 'Seconds: ' + s, 'Seconds (ceil): ' + sCeil, 'Code execution time: ' + offset, 'Next interval: ' + currentInterval) // Print the time left, the code execution time, and the next loop interval

  setTimeout(loop, currentInterval)
}

setTimeout(loop, currentInterval)

Debounce

Have you ever run into these problems in daily development: needing to do a complex calculation in a scroll event handler, or needing to prevent a button from being clicked twice?

These requirements can be met with function debouncing. Especially for the first one, doing complex calculations in a frequently fired event callback is very likely to make the page stutter; it is better to combine several calculations into one and run it at an appropriate moment.

PS: Both debounce and throttle prevent a function from being called too many times. The difference: assuming a user keeps triggering the function with intervals shorter than wait, a debounced function is called only once, while a throttled function is called once every wait interval.
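For comparison, a minimal timestamp-based throttle sketch (illustrative only, not part of the implementations below):

const throttle = (func, wait = 50) => {
  let lastTime = 0
  return function (...args) {
    const now = Date.now()
    // Only call func if at least `wait` ms have passed since the last call
    if (now - lastTime >= wait) {
      lastTime = now
      func.apply(this, args)
    }
  }
}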

Let's first look at a bare-bones version of debounce to understand how it works:

// func is the user-provided function that needs to be debounced
// wait is the wait time in ms
const debounce = (func, wait = 50) => {
  // Cache a timer id
  let timer = 0
  // The returned function is the debounced function the user actually calls
  // If a timer has already been set, clear it
  // and start a new timer that delays calling the user's function
  return function(...args) {
    if (timer) clearTimeout(timer)
    timer = setTimeout(() => {
      func.apply(this, args)
    }, wait)
  }
}
// As long as the user keeps calling within `wait` ms, the previous timer is cleared and func never runs

This simple version of debounce has a flaw: the function can only be called at the trailing edge, after the triggers stop. Full implementations usually take an immediate option that controls whether to invoke on the leading edge instead. The difference between the two, by example:

  • When a search box queries as the user types, we usually want to call the query API only after the user has typed the last character. This is the delayed (trailing-edge) case: the function is called once after a burst of triggers spaced less than wait apart has ended.
  • When a user stars interviewMap, we want to call the API on the first click and update the star button right away so the user gets immediate feedback on whether the star succeeded. This is the immediate (leading-edge) case: the function is called on the first trigger, and the next call is only allowed after more than wait has passed.

Let's implement a debounce function with an immediate option.

// Get the current timestamp
function now() {
  return +new Date()
}
/**
 * Debounce: the function will not be called again as long as repeated calls happen within `wait` ms
 *
 * @param  {function} func      the callback
 * @param  {number}   wait      the wait time in ms
 * @param  {boolean}  immediate whether to call the function immediately (leading edge)
 * @return {function}           the debounced function for the caller to use
 */
function debounce (func, wait = 50, immediate = true) {
  let timer, context, args

  // The delayed-execution helper
  const later = () => setTimeout(() => {
    // After the wait, clear the cached timer id
    timer = null
    // In the delayed (trailing) case, call the function inside the delay,
    // using the previously cached context and arguments
    if (!immediate) {
      func.apply(context, args)
      context = args = null
    }
  }, wait)

  // The returned function is the one actually called each time
  return function(...params) {
    // If no timer exists yet, create one
    if (!timer) {
      timer = later()
      // In the immediate case, call the function right away
      // Otherwise cache the arguments and the call context
      if (immediate) {
        func.apply(this, params)
      } else {
        context = this
        args = params
      }
    // If a timer already exists, clear it and start a new one,
    // which restarts the delay
    } else {
      clearTimeout(timer)
      timer = later()
    }
  }
}

The overall function is not difficult to implement, so let me summarize.

  • For the button-click (immediate) case: on the first call the function is invoked right away, and a timer is started. As long as that timer is pending, every further click just restarts it; only after the clicks stop and the timer fires (resetting it to null) can the function be invoked again.
  • For the delayed (trailing) case: when the timer fires, the timer id is cleared and the function is called with the cached context and arguments.

Array flattening

[1, [2], 3].flatMap((v) => v)
// -> [1, 2, 3]

If you want to completely flatten a multidimensional array, you can do it recursively:

const flattenDeep = (arr) => Array.isArray(arr)
  ? arr.reduce((a, b) => [...a, ...flattenDeep(b)], [])
  : [arr]

flattenDeep([1, [[2], [3, [4]], 5]])
// -> [1, 2, 3, 4, 5]

Deep copy

This problem can usually be solved with JSON.parse(JSON.stringify(object)).

let a = {
    age: 1,
    jobs: {
        first: 'FE'
    }
}
let b = JSON.parse(JSON.stringify(a))
a.jobs.first = 'native'
console.log(b.jobs.first) // FE

But there are limitations to this approach:

  • It ignores undefined
  • It ignores symbol
  • Functions cannot be serialized
  • Objects with circular references cannot be handled
let obj = {
  a: 1,
  b: {
    c: 2,
    d: 3,
  },
}
obj.c = obj.b
obj.e = obj.a
obj.b.c = obj.c
obj.b.d = obj.b
obj.b.e = obj.b.c
let newObj = JSON.parse(JSON.stringify(obj))
console.log(newObj)

If you have such a circular reference object, you will find that you cannot deep copy through this method

The object cannot be serialized properly when a function, undefined, or symbol is encountered

let a = {
    age: undefined,
    sex: Symbol('male'),
    jobs: function() {},
    name: 'yck'
}
let b = JSON.parse(JSON.stringify(a))
console.log(b) // {name: "yck"}

You’ll notice that in the above case, the method ignores functions and undefined.

But in general, complex data can be serialized, so this function solves most of the problems, and it’s the fastest built-in function for deep copy. Of course, if your data contains all three of these, you can use lodash’s deep-copy functions.

If the object you want to copy has built-in types and does not contain functions, you can use MessageChannel

function structuralClone(obj) {
  return new Promise(resolve => {
    const { port1, port2 } = new MessageChannel();
    port2.onmessage = ev => resolve(ev.data);
    port1.postMessage(obj);
  });
}

var obj = { a: 1, b: { c: 2 } }
// Note that this method is asynchronous
// It can handle undefined and circular-reference objects
;(async () => {
  const clone = await structuralClone(obj)
  console.log(clone)
})()

The difference between typeof and instanceof

For basic types, the correct type can be displayed except for null

typeof 1 // 'number'
typeof '1' // 'string'
typeof undefined // 'undefined'
typeof true // 'boolean'
typeof Symbol() // 'symbol'
typeof b // b is not declared, but it still shows 'undefined'

For reference types, typeof shows 'object', except for functions:

typeof [] // 'object'
typeof {} // 'object'
typeof console.log // 'function'

Null displays object even though it is a primitive type, which is a long-standing Bug

typeof null // 'object'

PS: Why does this happen? In the first version of JS, which used 32-bit values, type information was stored in the low bits of a value for performance; a tag of 000 marked an object, and null was represented as all zeros, so it was misidentified as an object. Although the internal type-checking code has since changed, this bug has stuck around.

Instanceof can correctly determine the type of the object, because the internal mechanism is to determine whether the prototype of the type can be found in the prototype chain of the object.

We can also try to implement instanceof

// instanceof is a reserved word, so the re-implementation is named myInstanceof
function myInstanceof(left, right) {
    // Get the prototype of the type
    let prototype = right.prototype
    // Get the prototype of the object
    left = left.__proto__
    // Determine whether the type of the object is equal to the stereotype of the type
    while (true) {
    	if (left === null)
    		return false
    	if (prototype === left)
    		return true
    	left = left.__proto__
    }
}

Browser-related

Differences among cookie, localStorage, sessionStorage, and indexedDB

| Feature | cookie | localStorage | sessionStorage | indexedDB |
| --- | --- | --- | --- | --- |
| Data lifetime | Usually set by the server; an expiration time can be set | Persists until explicitly cleared | Cleared when the page session ends | Persists until explicitly cleared |
| Storage size | 4K | 5M | 5M | Unlimited |
| Sent to the server | Carried in the header of every request, which affects request performance | Not sent | Not sent | Not sent |

As you can see from the above table, cookies are no longer recommended for storage. LocalStorage and sessionStorage can be used if you do not have large data storage requirements. For data that does not change much, try to use localStorage, otherwise use sessionStorage.

With cookies, we also need to pay attention to security.

| Attribute | Purpose |
| --- | --- |
| value | If used to keep login state, the value should be encrypted; do not store a plaintext user id |
| http-only | The cookie cannot be accessed from JS, which reduces XSS attacks |
| secure | The cookie is only sent over HTTPS requests |
| same-site | The browser will not send the cookie on cross-site requests, which reduces CSRF attacks |

How do I know if a page has loaded?

When the Load event is triggered, the DOM, CSS, JS, and images in the page have all been loaded.

The DOMContentLoaded event is triggered to mean that the original HTML is fully loaded and parsed, without waiting for CSS, JS, or images to load.
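For example:

window.addEventListener('load', () => {
  console.log('load: all resources, including images, have finished loading')
})
document.addEventListener('DOMContentLoaded', () => {
  console.log('DOMContentLoaded: the HTML has been parsed')
})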

How to solve cross-domain problems

Browsers enforce the same-origin policy for security reasons: if any one of the protocol, domain, or port differs, the request is cross-origin and the Ajax request will fail.

There are several common ways to solve cross-domain problems

JSONP

The principle of JSONP is simple: it exploits the fact that script tags are not restricted by the same-origin policy. A script tag points at the cross-origin address, and a callback function is provided to receive the data.

<script src="http://domain/api?param1=a&param2=b&callback=jsonp"></script>
<script>
    function jsonp(data) {
        console.log(data)
    }
</script>

JSONP is simple to use and compatible, but it is limited to GET requests.

In development, you may encounter multiple JSONP requests with the same callback name. In this case, you need to encapsulate a JSONP. Here is a simple implementation

function jsonp(url, jsonpCallback, success) {
  let script = document.createElement("script");
  script.src = url;
  script.async = true;
  script.type = "text/javascript";
  window[jsonpCallback] = function(data) {
    success && success(data);
  };
  document.body.appendChild(script);
}
jsonp(
  "http://xxx"."callback".function(value) {
    console.log(value); });Copy the code

CORS

CORS requires both browser and back-end support. IE 8 and 9 require XDomainRequest to implement.

The browser carries out the CORS communication automatically; the key to CORS is the back end. As long as the back end implements CORS, cross-origin requests work.

The server enables CORS by setting Access-Control-Allow-Origin. This header indicates which origins may access the resource; if it is set to the wildcard *, any site can access it.
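As a minimal sketch, using Node's built-in http module (the port and origin are just examples):

const http = require('http')

http.createServer((req, res) => {
  // Allow a specific origin; use '*' to allow any origin
  res.setHeader('Access-Control-Allow-Origin', 'http://test.com')
  res.end('ok')
}).listen(8080)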

document.domain

This mode can be used only when the same secondary domain name is used. For example, a.test.com and B.test.com are applicable to this mode.

You can cross domains simply by adding document.domain = ‘test.com’ to the page to indicate that the second level domain is the same

postMessage

This approach is typically used to get third-party page data embedded in the page. One page sends the message, and the other determines the source and receives the message

// Send a message
window.parent.postMessage('message', 'http://test.com');
// Receive the message
window.addEventListener('message', (event) => {
    var origin = event.origin || event.originalEvent.origin;
    if (origin === 'http://test.com') {
        console.log('Verified');
    }
});

What is event delegation

If a node's children are generated dynamically, event handlers should be registered on the parent node instead of on each child:

<ul id="ul">
	<li>1</li>
    <li>2</li>
	<li>3</li>
	<li>4</li>
	<li>5</li>
</ul>
<script>
	let ul = document.querySelector('#ul')
	ul.addEventListener('click', (event) => {
		console.log(event.target);
	})
</script>

Compared with registering events directly on each target, event delegation has the following advantages:

  • Save memory
  • You do not need to deregister events for child nodes

Service worker

Service Workers essentially act as a proxy server between a Web application and a browser, and can also act as a proxy between a browser and a network when the network is available. They are intended, among other things, to enable the creation of an effective offline experience that intercepts network requests and takes appropriate actions based on whether the network is available and whether updated resources reside on the server. They also allow access to push notifications and backend synchronization apis.

At present, this technology is usually used to cache files and improve the first screen speed, so you can try to achieve this function.

// index.js
if (navigator.serviceWorker) {
  navigator.serviceWorker
    .register("sw.js")
    .then(function(registration) {
      console.log("Service worker registered successfully");
    })
    .catch(function(err) {
      console.log("Servcie worker registration failed");
    });
}
// sw.js
// Listen for 'install' events and cache the required files in the callback
self.addEventListener("install", e => {
  e.waitUntil(
    caches.open("my-cache").then(function(cache) {
      return cache.addAll(["./index.html"."./index.js"]); })); });// Intercept all request events
// If the requested data is already in the cache, use the cache, otherwise request the data
self.addEventListener("fetch", e => {
  e.respondWith(
    caches.match(e.request).then(function(response) {
      if (response) {
        return response;
      }
      console.log("fetch source"); })); });Copy the code

Open the page and you can see that the Service Worker has started in the Application in developer tools

Browser cache

Caching is a very important point for front-end performance optimization. A good caching strategy can reduce the repeated loading of resources and improve the overall loading speed of web pages.

There are usually two types of browser caching policies: strong caching and negotiated caching.

Strong cache

Strong caching is implemented with two kinds of response headers: Expires and Cache-Control. A strong cache hit means no request is sent at all, and the status code is 200.

Expires: Wed, 22 Oct 2018 08:41:00 GMT

Expires is a product of HTTP/1.0; it means the resource expires after Wed, 22 Oct 2018 08:41:00 GMT and must then be requested again. Expires is also limited by local time: changing the local clock can invalidate the cache.

Cache-control: max-age=30

Cache-Control appeared in HTTP/1.1 and has higher priority than Expires. The value above means the resource expires after 30 seconds and must then be requested again.

Negotiated cache

If the cache has expired, we can fall back to the negotiated cache. A negotiated cache requires sending a request; if the cache is still valid, the server responds with 304.

Negotiated caching requires both client and server implementations and, like strong caching, can be implemented in two ways.

Last-Modified and If-Modified-Since

Last-modified indicates the Last time the local file was Modified. If-modified-since sends the last-modified value to the server and asks If the server has updated the resource Since that date. If so, it sends the new resource back.

However, if the cached file is opened locally, Last-Modified changes even though the content did not, which is why ETag was introduced in HTTP/1.1.

ETag and If-None-Match

ETag is similar to a file fingerprint. If-none-match sends the current ETag to the server, asks If the resource ETag has changed, and sends back a new resource If it has. The ETag priority is higher than last-Modified.

Select an appropriate caching policy

For most scenarios it is possible to use a strong cache in conjunction with a negotiated cache, but in some specific cases you may need to select a special cache strategy

  • For resources that should not be cached at all, use Cache-Control: no-store, which means the resource is never cached
  • For frequently changing resources, use Cache-Control: no-cache together with ETag: the resource is cached, but a request is sent each time to ask whether it has been updated.
  • For code files, Cache-Control: max-age=31536000 is commonly used together with the negotiated cache, and a content hash (fingerprint) is added to the file name, so a new file is downloaded as soon as the name changes.

Browser performance Issues

Repaint and Reflow

Repaint and reflow are part of the rendering pipeline, and both have a significant impact on performance.

  • Repaint happens when a node's appearance changes without affecting layout, for example changing its color
  • Reflow happens when layout or geometric properties change

A reflow always triggers a repaint, but a repaint does not necessarily trigger a reflow. Reflow is much more expensive, and changing a deeply nested node can trigger a chain of reflows in its ancestors.

So the following actions can cause performance problems:

  • Change the window size
  • Change the font
  • Add or remove styles
  • Text change
  • Positioning or floating
  • The box model

What many people don't know is that repaint and reflow are actually related to the Event Loop.

  1. When the Event Loop finishes executing Microtasks, it determines whether the Document needs to be updated. Because the browser has a 60Hz refresh rate, it only updates every 16ms.
  2. It then checks whether there are resize or scroll events and triggers them if so, which means resize and scroll events fire at most once every ~16 ms and are effectively throttled.
  3. Determine whether the Media Query is triggered
  4. Update the animation and send the event
  5. Check whether full-screen operation events exist
  6. Execute requestAnimationFrame callbacks
  7. Execute IntersectionObserver callbacks, which report whether an element is visible; useful for lazy loading, though browser support is limited
  8. Update the interface
  9. That is roughly what happens in one frame. If the frame has idle time left, requestIdleCallback callbacks are executed.

The above is from the HTML document

Reducing repaint and reflow
  • Use translate instead of top

    <div class="test"></div>
    <style>
    	.test {
    		position: absolute;
    		top: 10px;
    		width: 100px;
    		height: 100px;
    		background: red;
    	}
    </style>
    <script>
    	setTimeout(() => {
    		// Causes a reflow
    		document.querySelector('.test').style.top = '100px'
    	}, 1000)
    </script>
  • Use visibility: hidden instead of display: none, because the former only triggers a repaint while the latter triggers a reflow (the layout changes)

  • Take the DOM node offline before modifying it. For example, set it to display: none (one reflow), apply the 100 changes, and then display it again

  • Do not read a DOM node's layout properties inside a loop as loop variables

    for (let i = 0; i < 1000; i++) {
        // Reading offsetTop forces a reflow, because the browser must return an up-to-date value
        console.log(document.querySelector('.test').offsetTop)
    }
  • Do not use a table layout. A small change may cause the entire table to be rearranged

  • Choose the animation speed carefully: the faster the animation, the more reflows; you can also use requestAnimationFrame to drive animations

  • CSS selectors are matched from right to left, so avoid overly deep DOM nesting and selectors

  • Promote frequently animated nodes to their own layer, so that their reflows do not affect other elements. For video tags, for example, the browser automatically promotes the node to its own layer.

Image optimization

Calculate the image size

For a 100 * 100 pixel image, there are 10,000 pixels on the image. If the value of each pixel is RGBA stored, that is, each pixel has 4 channels, each channel has 1 byte (8 bits = 1 byte). So the image size is about 39KB (10000 * 1 * 4/1024).

But in a real project, an image may not need that many colors to display, and we can reduce the size of the image by reducing the color palette of each pixel.

Knowing how the size is calculated, you probably have two ideas for optimizing images:

  • Reduce pixels
  • Reduce the number of colors each pixel can display
Image loading optimization
  1. Avoid images where possible. Many decorative images can be replaced with CSS.
  2. On mobile, the screen is usually too small to need the original full-size image, so loading it wastes bandwidth. Images are generally served from a CDN: you can calculate the width that fits the screen and request an appropriately cropped image.
  3. Inline small images as base64
  4. Combine multiple icon files into one image (Sprite Chart)
  5. Select the correct image format:
    • Use WebP in browsers that can display it: its better compression produces smaller files with visually identical quality; the downside is limited compatibility
    • Use PNG for small images; in fact, most icons and similar graphics can be replaced with SVG
    • Use JPEG for photos

Other file optimizations

  • Put CSS files in the head
  • The file compression function is enabled on the server
  • Put script tags at the bottom of the body, because executing JS blocks rendering. Alternatively, you can put the script tag anywhere and add defer, which downloads the file in parallel and executes it in order after HTML parsing finishes. For JS files with no dependencies you can add async, which loads and executes the script in parallel with the loading and rendering of the rest of the document.
  • JavaScript that takes too long to run blocks rendering, so for code that needs heavy computation consider using a Web Worker, which lets us run a script on another thread without affecting rendering (see the sketch after this list).
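A minimal Web Worker sketch (the file name worker.js and the computation are just examples):

// main.js
const worker = new Worker('worker.js')
worker.postMessage({ numbers: [1, 2, 3] })
worker.onmessage = (e) => {
  console.log('result from worker:', e.data)
}

// worker.js
self.onmessage = (e) => {
  // Heavy computation runs here, off the main thread
  const sum = e.data.numbers.reduce((a, b) => a + b, 0)
  self.postMessage(sum)
}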

CDN

Load static resources from a CDN whenever possible. Browsers limit the number of concurrent requests per domain, so you can consider using several CDN domains. Also make sure the CDN domain differs from the main site's domain; otherwise every request carries the main site's cookies.

Optimize your project with Webpack

  • For Webpack4, package projects in Production mode, which automatically turns on code compression
  • Use the ES6 module to turn on Tree Shaking, a technique that removes unused code
  • Optimize the image, for small images you can use base64 to write to the file
  • Split the code according to the route to achieve on-demand loading
  • Add a hash to the bundled file names so browsers can cache the files (see the sketch after this list)
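A minimal webpack configuration sketch covering the first and last points, assuming webpack 4 or later (file names are examples; on older webpack 4 versions [chunkhash] can be used instead of [contenthash]):

// webpack.config.js
module.exports = {
  mode: 'production', // automatically enables code minification
  output: {
    // The content hash in the file name changes whenever the content changes,
    // so browsers can safely cache the file long-term
    filename: '[name].[contenthash].js'
  }
}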

Webpack

Optimize packing speed

  • Narrow the file search scope
    • e.g. by configuring alias
    • and by using a loader's test, include, and exclude options
  • webpack 4 minifies code in parallel by default
  • Use HappyPack to run loaders concurrently
  • Babel compilation can also be cached

Babel principle

Babel is essentially a compiler: it parses the code string into an AST, transforms the AST, and then generates new code from it.

  • The process has three steps: lexical analysis produces tokens and syntactic analysis produces the AST; the AST is traversed and plugins transform the matching nodes; finally the transformed AST is turned back into code (see the sketch below)
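A minimal sketch of the plugin/visitor idea (purely illustrative; real plugins usually rely on @babel/types helpers):

// A tiny Babel plugin that renames every identifier `foo` to `bar`
module.exports = function () {
  return {
    visitor: {
      // Called for every Identifier node while the AST is traversed
      Identifier(path) {
        if (path.node.name === 'foo') {
          path.node.name = 'bar'
        }
      }
    }
  }
}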

How to implement a webpack plugin

  • Call the plug-in apply function and pass in the Compiler object
  • Listen for events using the Compiler object

Suppose you want a plugin that exits the process after the build finishes:

class BuildEndPlugin {
  apply (compiler) {
    const afterEmit = (compilation, cb) => {
      cb()
      setTimeout(function () {
        process.exit(0)
      }, 1000)
    }

    compiler.plugin('after-emit', afterEmit)
  }
}

module.exports = BuildEndPlugin
Copy the code

The framework

React lifecycle

The Fiber mechanism was introduced in V16. This mechanism affects partial lifecycle calls to some extent, and two new apis have been introduced to address the problem.

In previous versions, if you had a complex composite component and changed the state of the top component, the call stack could be very long

A long call stack, coupled with complex operations in the middle, can cause the main thread to be blocked for a long time, resulting in a bad user experience. Fiber is designed to solve this problem.

The Fiber is essentially a virtual stack frame, and the new scheduler will freely schedule the frames according to their priorities, thereby changing the previous synchronous rendering to asynchronous rendering, and computing updates in segments without compromising the experience.

React has its own logic about how to prioritize. For animations, which are very real-time, React will pause the update every 16 ms (or less) and return to continue rendering the animation.

For asynchronous rendering, there are two stages: Reconciliation and Commit. The former process can be interrupted, while the latter cannot be paused, and the interface will be updated until finished.

Reconciliation phase

  • componentWillMount
  • componentWillReceiveProps
  • shouldComponentUpdate
  • componentWillUpdate

The Commit phase

  • componentDidMount
  • componentDidUpdate
  • componentWillUnmount

Because the Reconciliation phase can be interrupted, the lifecycle functions executed in it may be called more than once, which can cause bugs. Therefore, apart from shouldComponentUpdate, the functions called in the Reconciliation phase should be avoided, and v16 introduces new APIs to solve this problem.

getDerivedStateFromProps replaces componentWillReceiveProps; it is invoked both on initialization and on update

class ExampleComponent extends React.Component {
  // Initialize state in constructor,
  // or with a property initializer.
  state = {};

  static getDerivedStateFromProps(nextProps, prevState) {
    if (prevState.someMirroredValue !== nextProps.someValue) {
      return {
        derivedData: computeDerivedState(nextProps),
        someMirroredValue: nextProps.someValue
      };
    }

    // Return null to indicate no change to state.
    return null;
  }
}

getSnapshotBeforeUpdate replaces componentWillUpdate; it is called after render but before the DOM is updated, and is used to read the latest DOM data.

V16 Lifecycle function usage suggestions

class ExampleComponent extends React.Component {
  // Used to initialize state
  constructor() {}
  // Replaces `componentWillReceiveProps`; called on initialization and on `update`
  // Because the function is static, it cannot access `this`
  // If you want to compare `prevProps`, you need to keep it in `state` yourself
  static getDerivedStateFromProps(nextProps, prevState) {}
  // Determines whether the component needs to be updated
  shouldComponentUpdate(nextProps, nextState) {}
  // Called after the component is mounted
  // You can send requests or subscribe in this function
  componentDidMount() {}
  // Used to get the latest DOM data
  getSnapshotBeforeUpdate() {}
  // The component is about to be destroyed
  // This is where you can remove subscriptions, timers, etc.
  componentWillUnmount() {}
  // Called after the component is updated
  componentDidUpdate() {}
  // Renders the component
  render() {}
  // The following functions are not recommended
  UNSAFE_componentWillMount() {}
  UNSAFE_componentWillUpdate(nextProps, nextState) {}
  UNSAFE_componentWillReceiveProps(nextProps) {}
}

setState

SetState is a frequently used API in React, but it has some problems and can lead to errors, the core reason being that the API is asynchronous.

First, a setState call does not immediately change the state, and if you call more than one setState at a time, the result may not be what you expect.

handle() {
  // `count` is initialized to 0
  console.log(this.state.count) // -> 0
  this.setState({ count: this.state.count + 1 })
  this.setState({ count: this.state.count + 1 })
  this.setState({ count: this.state.count + 1 })
  console.log(this.state.count) // -> 0
}

First, both prints are 0, because setState is an asynchronous API and only executes when the synchronized code has finished running. The reason setState is asynchronous, I think, is that setState can cause DOM redraw, and if you call it once and then immediately redraw it, then calling it multiple times will cause unnecessary performance penalty. Designed asynchronously, multiple calls can be placed in a queue and the update process can be performed uniformly when appropriate.

Second, even though setState is called three times, count is still 1. Because multiple calls are merged into one, and the state changes only when the update ends, three calls equals the following code

Object.assign(
  {},
  { count: this.state.count + 1 },
  { count: this.state.count + 1 },
  { count: this.state.count + 1 }
)

Of course, you can make all three setState calls count, so that count ends up as 3, by passing an updater function instead of an object

handle() {
  this.setState((prevState) => ({ count: prevState.count + 1 }))
  this.setState((prevState) => ({ count: prevState.count + 1 }))
  this.setState((prevState) => ({ count: prevState.count + 1 }))
}

If you want to get the correct state after each call to setState, you can do so with the following code

handle() {
  this.setState((prevState) => ({ count: prevState.count + 1 }), () => {
    console.log(this.state)
  })
}

Vue’s nextTick principle

NextTick lets us perform a delayed callback after the next DOM update loop ends to get the updated DOM.

Prior to Vue 2.4, microtasks were used exclusively, but microtasks have such a high priority that in some situations they can even fire before event bubbling has finished; on the other hand, using macrotasks everywhere can cause rendering performance problems. So in the newer versions, microtasks are used by default, and macrotasks are used only in special cases, such as handlers bound with v-on.

To implement a macrotask, Vue first checks whether setImmediate can be used, falls back to MessageChannel if not, and finally falls back to setTimeout if neither is available

if (typeof setImmediate !== 'undefined' && isNative(setImmediate)) {
  macroTimerFunc = () => {
    setImmediate(flushCallbacks)
  }
} else if (
  typeof MessageChannel !== 'undefined' &&
  (isNative(MessageChannel) ||
    // PhantomJS
    MessageChannel.toString() === '[object MessageChannelConstructor]')
) {
  const channel = new MessageChannel()
  const port = channel.port2
  channel.port1.onmessage = flushCallbacks
  macroTimerFunc = () => {
    port.postMessage(1)
  }
} else {
  /* istanbul ignore next */
  macroTimerFunc = () => {
    setTimeout(flushCallbacks, 0)
  }
}

nextTick also supports Promises: it checks whether the environment implements Promise and, if no callback is passed, returns one

export function nextTick (cb?: Function, ctx?: Object) {
  let _resolve
  // Collect the callback functions into an array
  callbacks.push(() => {
    if (cb) {
      try {
        cb.call(ctx)
      } catch (e) {
        handleError(e, ctx, 'nextTick')
      }
    } else if (_resolve) {
      _resolve(ctx)
    }
  })
  if (!pending) {
    pending = true
    if (useMacroTask) {
      macroTimerFunc()
    } else {
      microTimerFunc()
    }
  }
  // Determine whether a Promise can be used
  // Assign a value to _resolve if possible
  // Then the callbacks can be consumed as a Promise
  if (!cb && typeof Promise !== 'undefined') {
    return new Promise(resolve => {
      _resolve = resolve
    })
  }
}

Vue Life cycle

Lifecycle functions are hook functions that are fired when a component is initialized or when data is updated.

At initialization, the following code is called, and the lifecycle is called by callHook

Vue.prototype._init = function (options) {
  initLifecycle(vm)
  initEvents(vm)
  initRender(vm)
  callHook(vm, 'beforeCreate') // Cannot access props or data yet
  initInjections(vm)
  initState(vm)
  initProvide(vm)
  callHook(vm, 'created')
}

In the preceding code, beforeCreate does not get props or data, because the data is initialized in initState.

The mount function is then executed

export function mountComponent (vm, el, hydrating) {
  callHook(vm, 'beforeMount')
  // ...
  if (vm.$vnode == null) {
    vm._isMounted = true
    callHook(vm, 'mounted')
  }
}

beforeMount runs before the mount; the VDOM is then created and patched into the real DOM, and finally the mounted hook runs. Note that if the instance comes from an external new Vue({}) there is no $vnode, so the mounted hook runs directly. If there are child components, they are mounted recursively, and the root component's mounted hook only runs after all children have been mounted.

Next is the hook function that is called when the data is updated

function flushSchedulerQueue() {
  // ...
  for (index = 0; index < queue.length; index++) {
    watcher = queue[index]
    if (watcher.before) {
      watcher.before() // calls beforeUpdate
    }
    id = watcher.id
    has[id] = null
    watcher.run()
    // in dev build, check and stop circular updates.
    if (process.env.NODE_ENV !== 'production' && has[id] != null) {
      circular[id] = (circular[id] || 0) + 1
      if (circular[id] > MAX_UPDATE_COUNT) {
        warn(
          'You may have an infinite update loop ' +
            (watcher.user
              ? `in watcher with expression "${watcher.expression}"`
              : `in a component render function.`),
          watcher.vm
        )
        break
      }
    }
  }
  callUpdatedHooks(updatedQueue)
}

function callUpdatedHooks(queue) {
  let i = queue.length
  while (i--) {
    const watcher = queue[i]
    const vm = watcher.vm
    if (vm._watcher === watcher && vm._isMounted) {
      callHook(vm, 'updated')
    }
  }
}

There are two other lifecycle hooks not mentioned above, activated and deactivated, which are unique to the keep-alive component. Components wrapped in keep-alive are not destroyed when switched away; instead they are cached in memory and the deactivated hook runs. When a cached component is rendered again, the activated hook runs.

Finally, it’s time to destroy the component’s hook function

Vue.prototype.$destroy = function () {
  // ...
  callHook(vm, 'beforeDestroy')
  vm._isBeingDestroyed = true
  // remove self from parent
  const parent = vm.$parent
  if (parent && !parent._isBeingDestroyed && !vm.$options.abstract) {
    remove(parent.$children, vm)
  }
  // teardown watchers
  if (vm._watcher) {
    vm._watcher.teardown()
  }
  let i = vm._watchers.length
  while (i--) {
    vm._watchers[i].teardown()
  }
  // remove reference from data ob
  // frozen object may not have observer.
  if (vm._data.__ob__) {
    vm._data.__ob__.vmCount--
  }
  // call the last hook...
  vm._isDestroyed = true
  // invoke destroy hooks on current rendered tree
  vm.__patch__(vm._vnode, null)
  // fire destroyed hook
  callHook(vm, 'destroyed')
  // turn off all instance listeners.
  vm.$off()
  // remove __vue__ reference
  if (vm.$el) {
    vm.$el.__vue__ = null
  }
  // release circular reference (#6759)
  if (vm.$vnode) {
    vm.$vnode.parent = null
  }
}

The beforeDestroy hook is called before the teardown begins; the instance then removes itself from its parent, tears down its watchers and listeners, and finally fires the destroyed hook.

Vue bidirectional binding

  • When data and props are initialized, the object is traversed recursively and each property is made reactive. Arrays are handled by overriding the prototype methods so that updates can be dispatched manually, because defineProperty cannot observe those operations the way a Proxy could.
  • Apart from the overridden array methods, changing an array element by index or adding a new property to an object does not trigger updates either. You need the built-in set function, which also dispatches the update manually.
  • When a component is mounted, a render watcher is instantiated and given a callback that updates the component. During instantiation, the values used in the template are read, which triggers dependency collection. Before triggering it, the current render watcher is saved so that the parent watcher can be restored if the component has children. After dependency collection, unneeded dependencies are cleaned up, an optimization that prevents areas no longer in use from being re-rendered.
  • Changing a value triggers a dependency update: all collected watchers are pushed into a queue that runs in nextTick. During the flush, watchers are sorted so that the render watcher runs last; the beforeUpdate hook runs first, then each watcher's callback. While callbacks run, new watchers may be pushed into the queue again, and assigning the value again inside a callback can cause an infinite update loop.

v-model principle

  • v-model is converted to other code when the template is compiled
  • The essence of v-model is :value plus v-on, with one small difference: on input elements it also listens to composition events, so when typing Chinese the data is assigned only after composition ends
  • When v-model and v-bind are used on the same element, v-model has the higher priority; using :value at the same time causes a conflict
  • Because it is just syntax sugar, v-model can also be used for parent-child communication (see the sketch below)
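
A simplified sketch of that syntax sugar on a component, using Vue 2's default prop and event names (the composition-event handling on native inputs is omitted):

// a child component that works with v-model: the prop is `value`, the event is `input`
Vue.component('my-input', {
  props: ['value'],
  template: `<input :value="value" @input="$emit('input', $event.target.value)">`
})

// <my-input v-model="text"></my-input> is roughly equivalent to:
// <my-input :value="text" @input="text = $event"></my-input>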

The difference between watch and computed, and when to use each

  • The former is a computed property that derives a value from other properties. A computed value is cached, and it triggers re-rendering only when the computed result actually changes. The latter watches a value and runs a callback when it changes
  • computed suits simple derived values used for rendering the page; watch suits more complex business logic and side effects (see the sketch below)
  • The former relies on two watchers, the computed watcher and the render watcher: when the computed value changes, an update is dispatched that triggers the render watcher
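
A minimal contrast of the two, using made-up fields:

new Vue({
  data: { firstName: 'foo', lastName: 'bar', question: '' },
  computed: {
    // cached; recomputed only when firstName or lastName changes
    fullName() {
      return this.firstName + ' ' + this.lastName
    }
  },
  watch: {
    // runs a callback (often async work or other side effects) whenever `question` changes
    question(newVal, oldVal) {
      console.log('question changed from', oldVal, 'to', newVal)
    }
  }
})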

Vue parent-child communication

  • Use v-model for both parent-to-child and child-to-parent; by default v-model compiles to :value and @input
  • Parent to child
    • Through props
    • Through $children, which gives access to the array of child components. Note that the array is not guaranteed to be in order
    • For multi-level passing, you can use v-bind="$attrs" to forward attributes the intermediate component itself does not consume
    • $listeners contains the v-on event listeners from the parent scope (without the .native modifier) and can be forwarded the same way
  • Child to parent
    • The parent passes a function down, and the child triggers it with $emit
    • Modifying the parent's props
    • Through $parent, which gives access to the parent component
    • .sync
  • Sibling components
    • EventBus (see the sketch below)
  • Vuex solves everything
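
A minimal EventBus sketch for sibling communication (Vue 2; the event name and payload here are made up):

// eventBus.js
import Vue from 'vue'
export const EventBus = new Vue()

// component B: subscribe (remember to call EventBus.$off in beforeDestroy)
EventBus.$on('item:added', item => console.log('added', item))

// component A: publish
EventBus.$emit('item:added', { id: 1 })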

Routing principle

Front-end routing is very simple to implement. It is essentially listening for URL changes, then matching routing rules, and displaying the corresponding page without refreshing. Currently, there are only two implementations of routing for a single page

  • Hash pattern
  • The history mode

www.test.com/#/ is a hash URL. When the hash after # changes, no request is sent to the server; you can listen for the hashchange event to react to URL changes and switch pages.

History mode is a newer HTML5 feature that gives you more conventional-looking URLs than hash mode
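
A bare-bones sketch of both modes, not a full router; render here is just a stand-in for whatever matches routes and updates the view:

// hash mode: changing the hash never sends a request to the server
window.addEventListener('hashchange', () => {
  render(location.hash.slice(1) || '/')
})

// history mode: pushState rewrites the URL without a request,
// and popstate fires when the user navigates back or forward
function navigate(path) {
  history.pushState({}, '', path)
  render(path)
}
window.addEventListener('popstate', () => render(location.pathname))

function render(path) {
  // match `path` against the route rules and show the corresponding page
  console.log('render view for', path)
}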

MVVM

MVVM consists of the following three elements

  • View: the interface
  • Model: the data model
  • ViewModel: a bridge between View and Model

In the jQuery era, if you needed to refresh the UI you first had to find the corresponding DOM node and then update it, which tightly coupled data and business logic to the page.

In MVVM, the UI is data-driven. When the data changes, the corresponding UI is refreshed, and when the UI changes, the corresponding data changes. In this way, the business process can only care about the flow of data, without directly dealing with the page. The ViewModel only cares about data and business processing, not how the View handles the data. In this case, both the View and the Model can be separated, and changes in either side do not necessarily require changes in the other, and some reusable logic can be put in a ViewModel. Let multiple views reuse the ViewModel.

In MVVM, the core is two-way data binding, such as Angular's dirty checking and Vue's data hijacking.

Dirty data detection

Dirty data detection occurs when the specified event is triggered. At this point, the $digest loop is called to iterate through all the observers to see if the current value is different from the previous value. If a change is detected, the $watch function is called, and the $digest loop is called again until no change is found. Loop at least twice and at most ten times.

Although dirty checking is inefficient, it completes its job no matter how the data was changed, something Vue's two-way binding has trouble with in some cases. In addition, dirty checking detects the updated values in batches and then updates the UI in one go, greatly reducing the number of DOM operations. So the inefficiency is relative, and opinions on the trade-off differ.
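
A toy sketch of the idea, not Angular's actual implementation: watchers remember the last value they saw, and the digest loop keeps running until a full pass finds no change (capped at ten passes):

function Scope() {
  this.$$watchers = []
}
Scope.prototype.$watch = function (getter, listener) {
  this.$$watchers.push({ getter, listener, last: undefined })
}
Scope.prototype.$digest = function () {
  let dirty
  let ttl = 10 // at most ten passes, as described above
  do {
    dirty = false
    this.$$watchers.forEach(w => {
      const value = w.getter(this)
      if (value !== w.last) {
        w.listener(value, w.last) // the $watch callback
        w.last = value
        dirty = true // a change was found, so loop again until a clean pass
      }
    })
  } while (dirty && ttl--)
}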

Data hijacking

Vue internally uses Object.defineProperty() to implement two-way binding, which lets it listen for get and set on properties.

var data = { name: 'yck' }
observe(data)
let name = data.name // -> get value
data.name = 'yyy' // -> change value

function observe(obj) {
  // Determine the type
  if (!obj || typeof obj !== 'object') {
    return
  }
  Object.keys(obj).forEach(key => {
    defineReactive(obj, key, obj[key])
  })
}

function defineReactive(obj, key, val) {
  // Recurse into child properties
  observe(val)
  Object.defineProperty(obj, key, {
    enumerable: true,
    configurable: true,
    get: function reactiveGetter() {
      console.log('get value')
      return val
    },
    set: function reactiveSetter(newVal) {
      console.log('change value')
      val = newVal
    }
  })
}

This code implements simple listening for get and set on the data, but that alone is not enough; you also need to attach a publish/subscribe mechanism to the properties at the appropriate time

<div>
    {{name}}
</div>

When parsing the template code above, a publish subscription is added to the property name when {{name}} is encountered.

// Decouple through Dep
class Dep {
  constructor() {
    this.subs = []
  }
  addSub(sub) {
    // sub is a Watcher instance
    this.subs.push(sub)
  }
  notify() {
    this.subs.forEach(sub => {
      sub.update()
    })
  }
}
// A global property used to hold the current Watcher
Dep.target = null

function update(value) {
  document.querySelector('div').innerText = value
}

class Watcher {
  constructor(obj, key, cb) {
    // Point Dep.target at this watcher,
    // then read the property to trigger its getter and register the listener,
    // and finally reset Dep.target to null
    Dep.target = this
    this.cb = cb
    this.obj = obj
    this.key = key
    this.value = obj[key]
    Dep.target = null
  }
  update() {
    // Get the new value
    this.value = this.obj[this.key]
    // Call the callback to update the DOM
    this.cb(this.value)
  }
}
var data = { name: 'yck' }
observe(data)
// Simulate the watcher created when parsing `{{name}}`
new Watcher(data, 'name', update)
// updates the DOM innerText
data.name = 'yyy'

Next, modify the defineReactive function

function defineReactive(obj, key, val) {
  // Recurse into child properties
  observe(val)
  let dp = new Dep()
  Object.defineProperty(obj, key, {
    enumerable: true,
    configurable: true,
    get: function reactiveGetter() {
      console.log('get value')
      // Add the current Watcher to the subscribers
      if (Dep.target) {
        dp.addSub(Dep.target)
      }
      return val
    },
    set: function reactiveSetter(newVal) {
      console.log('change value')
      val = newVal
      // Run the update method of every subscribed watcher
      dp.notify()
    }
  })
}

This implements a simple bidirectional binding. The core idea is to manually trigger the getter for the property once to add a publish/subscribe.

Contrast Proxy with Object.defineProperty

Object.defineProperty can already implement two-way binding, but it has some drawbacks.

  1. It can only hijack properties one at a time, so you need to traverse the whole object deeply
  2. It can’t listen for changes to an array

Vue does detect changes to array data, but it does so through a hack that has its own flaws.

const arrayProto = Array.prototype
export const arrayMethods = Object.create(arrayProto)
// hack the following functions
const methodsToPatch = [
  'push',
  'pop',
  'shift',
  'unshift',
  'splice',
  'sort',
  'reverse'
]
methodsToPatch.forEach(function (method) {
  // Cache the native function
  const original = arrayProto[method]
  def(arrayMethods, method, function mutator (...args) {
    // Call the native function
    const result = original.apply(this, args)
    const ob = this.__ob__
    let inserted
    switch (method) {
      case 'push':
      case 'unshift':
        inserted = args
        break
      case 'splice':
        inserted = args.slice(2)
        break
    }
    if (inserted) ob.observeArray(inserted)
    // Dispatch the update
    ob.dep.notify()
    return result
  })
})

In the next major version, Vue will use Proxy instead of Object.defineProperty

let onWatch = (obj, setBind, getLogger) => {
  let handler = {
    get(target, property, receiver) {
      getLogger(target, property)
      return Reflect.get(target, property, receiver)
    },
    set(target, property, value, receiver) {
      setBind(value)
      return Reflect.set(target, property, value)
    }
  }
  return new Proxy(obj, handler)
}

let obj = { a: 1 }
let value
let p = onWatch(obj, (v) => {
  value = v
}, (target, property) => {
  console.log(`Get '${property}' = ${target[property]}`)
})
p.a = 2 // bind `value` to `2`
p.a // -> Get 'a' = 2

Virtual DOM

The virtual DOM covers a lot of things, as you can see in my previous articles

Routing authentication

  • The login page is kept separate from the other pages; only after login is Vue instantiated and the required routes initialized
  • Dynamic routes, implemented through addRoute (see the sketch below)
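
A rough sketch of the dynamic-route approach; it assumes vue-router's addRoute API mentioned above (addRoutes on older 3.x versions), and needsDynamicRoutes / fetchUserRoutes are made-up helpers standing in for your own permission logic:

router.beforeEach(async (to, from, next) => {
  if (needsDynamicRoutes()) {
    const routes = await fetchUserRoutes() // e.g. built from the user's permissions
    routes.forEach(r => router.addRoute(r)) // addRoutes(routes) on older vue-router 3.x
    next({ ...to, replace: true }) // re-enter the navigation so the new routes can match
  } else {
    next()
  }
})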

Vue is different from React

  • Vue forms support two-way binding, which makes development easier
  • Different ways of changing data; setState has its pitfalls
  • props are mutable in Vue and immutable in React
  • To control what re-renders, React relies on lifecycle hooks such as shouldComponentUpdate, while Vue uses dependency tracking: whatever changed is what gets re-rendered
  • Since React 16, some hook functions may execute more than once
  • React requires JSX and a Babel compile step. Vue can use templates, but it can also skip compilation by writing render functions directly
  • React has a relatively better ecosystem

network

TCP Three-way handshake

In TCP, the client is the end that initiates the request actively, and the server is the end of the passive connection. Both the client and server can send and receive data after a TCP connection is established, so TCP is also a full-duplex protocol.

At first, both ends are CLOSED. Both parties create a TCB before the communication begins. After creating a TCB, the server enters the LISTEN state and waits for the client to send data.

The first handshake

The client sends a connection request segment to the server containing its own initial sequence number for the data exchange; after sending it, the client enters the SYN-SENT state. Here x denotes the client's initial sequence number.

Second handshake

After receiving the connection request segment, if the server agrees to the connection, it sends a reply that contains its own initial sequence number; after the reply is sent, the server enters the SYN-RECEIVED state.

The third handshake

When the client receives the reply agreeing to the connection, it sends an acknowledgement segment to the server. The client enters the ESTABLISHED state after sending the segment, and the server enters ESTABLISHED after receiving the reply; at this point the connection is established.

PS: the third handshake can carry data thanks to TCP Fast Open (TFO). Wherever a handshake protocol is involved, TFO can be used: the client and server store the same cookie, and the cookie is sent along with the next handshake to save an RTT.

Are you ever wondering why you need a third handshake when you can establish a connection with two handshakes?

This is to prevent invalid connection request segments from being received by the server, resulting in errors.

Imagine the following scenario. The client sends connection request A, but the connection times out due to network reasons. In this case, TCP starts the timeout retransmission mechanism to send connection request B again. At this point, the request reaches the server, and the server replies and establishes the request. If connection request A finally reaches the server after both ends have closed, then the server thinks that the client needs to set up A TCP connection again, responds and enters the ESTABLISHED state. In this case, the client is in the CLOSED state, which causes the server to wait all the time, resulting in a waste of resources.

PS: during connection establishment, if either end drops, TCP retransmits the SYN packet, generally up to five times. Connection establishment is also where SYN flood attacks can occur; in that case you can reduce the retry count or simply drop requests that cannot be handled.

TCP congestion control

Congestion processing is different from flow control, which acts on the receiver to ensure that the receiver has time to receive the data. The former works on the network to prevent excessive data congestion and excessive network load.

Congestion processing includes four algorithms: slow start, congestion avoidance, fast retransmission and fast recovery.

Slow start algorithm

The slow-start algorithm, as the name suggests, slowly and exponentially expands the send window at the start of a transmission to avoid network congestion by sending a large amount of data in the first place.

The steps of the slow start algorithm are as follows

  1. Set the Congestion Window to 1 MSS at the beginning of the connection.
  2. Multiply the window by two for every RTT passed
  3. Exponential growth is certainly not unlimited, so there is a threshold limit, and when the window size is larger than the threshold, the congestion avoidance algorithm is activated.

Congestion avoidance algorithm

The congestion avoidance algorithm is simpler: the congestion window grows by only one MSS per RTT instead of doubling, which avoids congesting the network through exponential growth and slowly converges on the optimal value.

If the timer times out during transmission, TCP considers that the network is congested and immediately performs the following steps:

  • Set the threshold to half of the current congestion window
  • Set the congestion window to 1 MSS
  • Restart the slow start algorithm (see the toy trace below)
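
A toy trace of how the congestion window evolves under these rules; the numbers are arbitrary, and real TCP of course lives in the kernel, not in JS:

let cwnd = 1      // congestion window, in MSS
let ssthresh = 8  // slow start threshold

function onRtt() {
  // slow start doubles the window; congestion avoidance adds 1 MSS per RTT
  cwnd = cwnd < ssthresh ? cwnd * 2 : cwnd + 1
}

function onTimeout() {
  ssthresh = Math.floor(cwnd / 2) // threshold becomes half the current window
  cwnd = 1                        // window back to 1 MSS, slow start begins again
}

for (let i = 0; i < 5; i++) onRtt() // cwnd: 2, 4, 8, 9, 10
onTimeout()                         // ssthresh = 5, cwnd = 1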

Fast retransmission

Fast retransmission usually comes together with fast recovery. When packets arrive out of order, the receiver keeps acknowledging only the last in-order sequence number (when SACK is not used). If the sender receives three duplicate ACKs, it starts fast retransmission immediately instead of waiting for the timer to expire. There are two concrete variants:

TCP Tahoe is implemented as follows

  • Set the threshold to half of the current congestion window
  • Set the congestion window to 1 MSS
  • Restart slow start algorithm

TCP Reno is implemented as follows

  • Congestion window halved
  • Set the threshold to the current congestion window
  • Enter the fast recovery phase (resend packets needed by the peer and exit the phase once a new ACK reply is received)
  • Use congestion avoidance algorithms

TCP New Reno improves fast recovery

The TCP New Reno algorithm improves the previous TCP Reno algorithm. Previously, as soon as a new ACK packet was received in fast recovery, fast recovery would quit.

In TCP New Reno, the TCP sender first records the maximum sequence number of the three segments that repeat ACK.

Suppose there are 10 packets with sequence numbers 1 to 10 and the packets numbered 3 and 7 are lost; the maximum sequence number of this window is 10. The sender only receives duplicate ACKs for 3, so it retransmits packet 3. The receiver gets it and replies with an ACK for 7, which tells TCP that more than one packet was lost, so the sender continues by retransmitting packet 7. Once the receiver gets that and replies with an ACK for 11, the sender considers the whole window successfully received and exits the fast recovery phase.

HTTPS handshake

HTTPS still transmits information over HTTP, but the information is encrypted over the TLS protocol.

TLS

The TLS protocol sits above the transport layer and below the application layer. The first TLS handshake requires two RTTs, which can later be reduced to one RTT through session resumption.

Two kinds of encryption techniques are used in TLS, which are symmetric encryption and asymmetric encryption.

Symmetric encryption:

Symmetric encryption is when both sides have the same secret key, and both sides know how to encrypt and decrypt the ciphertext.

Asymmetric encryption:

There are public keys and private keys. The public key is known to all. Data can be encrypted with the public key, but data can be decrypted only with the private key, which is known only to the party that distributes the public key.

The TLS handshake process is as follows:

  1. The client sends a random value, required protocol and encryption mode
  2. After receiving the client's random value, the server generates its own random value and sends its certificate according to the protocol and cipher suite the client asked for (if a client certificate is also required, that is stated here)
  3. The client receives the certificate from the server and verifies whether it is valid. If the certificate passes the authentication, a random value is generated. The random value is encrypted using the public key of the server certificate and sent to the server
  4. The server receives the encrypted random value and decrypts it with the private key to obtain the third random value. At this time, both ends have three random values. The three random values can be used to generate the key according to the previously agreed encryption method, and the following communication can be encrypted and decrypted through the key

As these steps show, the two ends communicate with asymmetric encryption during the TLS handshake; but because asymmetric encryption costs much more in performance than symmetric encryption, the subsequent data transfer uses symmetric encryption.

In TLS 1.3, only one RTT is needed for the first connection, and resuming a session afterwards needs no extra RTT at all.

From the input URL to the page loading process

  1. First comes the DNS lookup; if intelligent DNS resolution is used at this step, it returns the IP address with the fastest access
  2. This is followed by the TCP handshake, where the application layer sends data to the transport layer, where the TCP protocol specifies the port numbers at both ends, and then sends the data down to the network layer. The IP protocol at the network layer determines the IP address and indicates how to hop to the router during data transmission. The packet is then encapsulated into the data frame structure at the data link layer, and finally transmitted at the physical layer
  3. The TCP handshake is followed by a TLS handshake, and data transmission begins
  4. The data may also pass through a load-balancing server before it reaches the server, which is used to distribute the request reasonably across multiple servers, assuming the server responds with an HTML file
  5. The browser first looks at the status code: 200 means continue parsing, 4xx or 5xx produce an error, and 3xx triggers a redirect; a redirect counter guards against redirecting too many times, which would also produce an error
  6. The browser starts parsing the file, decompressing it if it’s in gzip format, and then knowing how to decode the file by the encoding format of the file
  7. Once decoding succeeds, rendering officially begins. A DOM tree is built from the HTML, and a CSSOM tree if there is CSS. If a script tag is encountered, the browser checks for async or defer: the former downloads and executes the JS in parallel, while the latter downloads the file first and executes it only after the HTML has been parsed. With neither attribute, rendering is blocked until the JS has been downloaded and executed. Any referenced files are downloaded as they are encountered, and if HTTP/2 is used the download efficiency of many images improves greatly.
  8. The DOMContentLoaded event fires once the original HTML has been completely loaded and parsed
  9. After the CSSOM and DOM trees are built, the Render tree is generated, which determines the layout, style, and many other aspects of the page elements
  10. In the process of generating the Render tree, the browser starts to call the GPU to draw, synthesize the layer, and display the content on the screen

Common HTTP return code

2xx success

  • 200 OK: indicates that the request from the client was processed correctly on the server
  • 204 No content: The request is successful, but the response message does not contain the body of the entity
  • 205 Reset Content: indicates that the request succeeds, but the response message does not contain the body of the entity. Different from 204 response, however, it requires the requester to Reset the Content
  • 206 Partial Content, making scope request

3xx redirection

  • 301 Moved Permanently: a permanent redirect, indicating that the resource has been assigned a new URL
  • 302 Found: a temporary redirect, indicating that the resource has temporarily been assigned a new URL
  • 303 See Other: indicates that the resource has another URL, which should be fetched with the GET method
  • 304 Not Modified: returned for a conditional request when the server allows access to the resource but the condition was not met, meaning the resource has not changed and the cached copy can be used
  • 307 Temporary Redirect: similar to 302, but the client is expected to keep the request method unchanged when requesting the new address

4XX Client error

  • 400 Bad Request: the request packet has syntax errors
  • 401 Unauthorized: the request needs to carry HTTP authentication information
  • 403 Forbidden: indicates that the server denies access to the requested resource
  • 404 Not found: The requested resource was not found on the server

5XX Server error

  • 500 Internal Server Error: an error occurred on the server while executing the request
  • 501 Not Implemented: The server does Not support a function required by the current request
  • 503 Service Unavailable: Indicates that the server is temporarily overloaded or down for maintenance and cannot process requests

Data structures and algorithms

Common sort

The following two helpers are shared by all the sorts below, so they are not repeated in each snippet

function checkArray(array) {
  if (!array || array.length <= 2) return
}
function swap(array, left, right) {
  let rightValue = array[right]
  array[right] = array[left]
  array[left] = rightValue
}

Bubble sort

Bubble sort works by comparing each element with the next one, starting from the first element. If the current element is larger, swap them, and repeat until the last element is reached; that element is then the largest number in the array. In the next round, repeat the operation, but since the last element is already the maximum it no longer needs to be compared, so the pass only goes up to position length - 1.

Here is the code to implement the algorithm

function bubble(array) {
  checkArray(array);
  for (let i = array.length - 1; i > 0; i--) {
    // Compare adjacent pairs from 0 up to `i`
    for (let j = 0; j < i; j++) {
      if (array[j] > array[j + 1]) swap(array, j, j + 1)
    }
  }
  return array;
}

The number of operations in this algorithm forms the arithmetic series n + (n - 1) + (n - 2) + ... + 1; dropping the constant factors gives a time complexity of O(n * n).

Insertion sort

Insertion sort works as follows. The first element is sorted by default. Take the next element and compare it with the current element, and swap positions if the current element is large. Then the first element is the current minimum, so the next fetch starts with the third element, compares forward, and repeats the previous operation.

Here is the code to implement the algorithm

function insertion(array) {
  checkArray(array);
  for (let i = 1; i < array.length; i++) {
    for (let j = i - 1; j >= 0 && array[j] > array[j + 1]; j--)
      swap(array, j, j + 1);
  }
  return array;
}

The number of operations in this algorithm forms the arithmetic series n + (n - 1) + (n - 2) + ... + 1; dropping the constant factors gives a time complexity of O(n * n).

Selection sort

The principle of selection sort is as follows. Traversing the array, set the index of the minimum value to 0. If the value retrieved is smaller than the current minimum value, replace the minimum index. After traversing, exchange the first element with the value on the minimum index. After that, the first element is the minimum value in the array, and the next iteration can repeat the operation starting at index 1.

Here is the code to implement the algorithm

function selection(array) {
  checkArray(array);
  for (let i = 0; i < array.length - 1; i++) {
    let minIndex = i;
    for (let j = i + 1; j < array.length; j++) {
      minIndex = array[j] < array[minIndex] ? j : minIndex;
    }
    swap(array, i, minIndex);
  }
  return array;
}

The number of operations in this algorithm forms the arithmetic series n + (n - 1) + (n - 2) + ... + 1; dropping the constant factors gives a time complexity of O(n * n).

Merge sort

Merge sort works as follows. Recursively split the array until each piece contains at most two elements, then sort and merge the pieces back together, ending with one fully sorted array. Suppose we have the array [3, 1, 2, 8, 9, 7, 6]; the middle index is 3, so [3, 1, 2, 8] is handled first. On this left half, keep splitting until each piece holds at most two elements (with an odd length, one piece will hold a single element). Then sort [3, 1] and [2, 8], merge them into the sorted [1, 2, 3, 8], sort the right half the same way, and finally merge [1, 2, 3, 8] with [6, 7, 9].

Here is the code to implement the algorithm

function sort(array) {
  checkArray(array);
  mergeSort(array, 0, array.length - 1);
  return array;
}

function mergeSort(array, left, right) {
  // If the left and right indexes are the same, there is only one number
  if (left === right) return;
  // equivalent to 'left + (right-left) / 2'
  // It is safer than '(left + right) / 2'
  // Bit operations are used because bit operations are faster than four operations
  let mid = parseInt(left + ((right - left) >> 1));
  mergeSort(array, left, mid);
  mergeSort(array, mid + 1, right);

  let help = [];
  let i = 0;
  let p1 = left;
  let p2 = mid + 1;
  while (p1 <= mid && p2 <= right) {
    help[i++] = array[p1] < array[p2] ? array[p1++] : array[p2++];
  }
  while (p1 <= mid) {
    help[i++] = array[p1++];
  }
  while (p2 <= right) {
    help[i++] = array[p2++];
  }
  for (let i = 0; i < help.length; i++) {
    array[left + i] = help[i];
  }
  return array;
}

The above algorithm uses the idea of recursion. The essence of recursion is to push the stack. Every time a function is recursively executed, the information about the function (such as parameters, internal variables, and the number of rows to execute) is pushed until the termination condition is met, and then the stack is unloaded and the function continues to execute. The call trajectory for the above recursive function is as follows

mergeSort(data, 0, 6) // mid = 3
  mergeSort(data, 0, 3) // mid = 1
    mergeSort(data, 0, 1) // mid = 0
      mergeSort(data, 0, 0) // Termination condition met, back to the previous call
    mergeSort(data, 1, 1) // Termination condition met, back to the previous call
    // Merge with p1 = 0, p2 = mid + 1 = 1
    // Back in `mergeSort(data, 0, 3)`, perform the next recursion
  mergeSort(data, 2, 3) // mid = 2
    mergeSort(data, 2, 2) // Termination condition met, back to the previous call
    mergeSort(data, 3, 3) // Termination condition met, back to the previous call
  // Merge with p1 = 2, p2 = mid + 1 = 3
  // Back in `mergeSort(data, 0, 3)`, execute its own merge logic
  // Merge with p1 = 0, p2 = mid + 1 = 2
  // The unwinding is complete
  // The left half is now sorted; the right half proceeds the same way

The running time can be described by a recurrence: the function recurses twice, each time on half the data, and then does one pass over the whole array to merge, giving T(N) = 2T(N / 2) + O(N), where T is time and N is the amount of data. Applying the master theorem to this recurrence gives a time complexity of O(N * logN).

Quicksort

The principle of quicksort is as follows. A random value from the array is chosen as the pivot, and each value is compared with it from left to right: smaller values go to the left side of the array, larger ones to the right. When the pass is complete, the pivot is swapped with the first value larger than it. The array is then split into two parts at the pivot and each part is sorted recursively.

Here is the code to implement the algorithm

function sort(array) {
  checkArray(array);
  quickSort(array, 0, array.length - 1);
  return array;
}

function quickSort(array, left, right) {
  if (left < right) {
    // Pick a random pivot and swap it to the end;
    // this is slightly better than always picking a fixed position
    swap(array, parseInt(Math.random() * (right - left + 1)) + left, right);
    let indexs = part(array, left, right);
    quickSort(array, left, indexs[0]);
    quickSort(array, indexs[1] + 1, right);
  }
}

function part(array, left, right) {
  let less = left - 1;
  let more = right;
  while (left < more) {
    if (array[left] < array[right]) {
      // The current value is smaller than the pivot, so `less` and `left` both move forward
      ++less;
      ++left;
    } else if (array[left] > array[right]) {
      // The current value is larger than the pivot, so swap it with the value on the right
      // and leave `left` unchanged, because the swapped-in value has not been compared yet
      swap(array, --more, left);
    } else {
      // Equal to the pivot, just move the index
      left++;
    }
  }
  // Swap the pivot with the first value larger than it,
  // so the array becomes [smaller, equal, larger]
  swap(array, right, more);
  return [less, more];
}

Quicksort has the same time complexity as merge sort, but it needs only O(logN) extra space instead of merge sort's O(N), and its constant factor is also smaller.

The interview questions

Sort Colors: this question comes from LeetCode and requires us to Sort [2,0,2,1,1,0] into [0,0,1,1,2,2]. This problem uses the idea of three-way quicksorting.

Here is the code implementation

var sortColors = function(nums) {
  let left = -1;
  let right = nums.length;
  let i = 0;
  // When the index reaches `right`, the sorting is complete
  while (i < right) {
    if (nums[i] == 0) {
      swap(nums, i++, ++left);
    } else if (nums[i] == 1) {
      i++;
    } else {
      swap(nums, i, --right);
    }
  }
};

Kth Largest Element in an Array: this question comes from LeetCode and asks for the Kth largest element in an array; it can be solved with the partitioning idea from quicksort. Since we want the Kth largest element, every time we partition the array we know which side the target element lies on, and we only need to keep sorting that side.

Here is the code implementation

var findKthLargest = function(nums, k) {
  let l = 0
  let r = nums.length - 1
  // Index position of the Kth largest element
  k = nums.length - k
  while (l < r) {
    // Partition the array; `part` returns the final position of the pivot
    let index = part(nums, l, r)
    // Compare the pivot index with k
    if (index < k) {
      l = index + 1
    } else if (index > k) {
      r = index - 1
    } else {
      break
    }
  }
  return nums[k]
};

function part(array, left, right) {
  let less = left - 1;
  let more = right;
  while (left < more) {
    if (array[left] < array[right]) {
      ++less;
      ++left;
    } else if (array[left] > array[right]) {
      swap(array, --more, left);
    } else {
      left++;
    }
  }
  swap(array, right, more);
  return more;
}

Heap sort

Heap sort exploits the properties of a binary heap. A binary heap is usually represented as an array and is a complete binary tree (every level is full except possibly the last, whose nodes are packed to the left). Binary heaps come in two kinds: big root (max) heaps and small root (min) heaps.

  • A large root heap is where all the children of a node are smaller than that node
  • A small root heap is where all the children of a node are larger than it

The idea behind heap sorting is to build a big root heap or a small root heap. In the array representation, the left child of the node at index i is at i * 2 + 1, the right child at i * 2 + 2, and the parent at (i - 1) / 2.

  1. First traverse the array: for each element, check whether it is larger than its parent; if so, swap them and keep checking upward until the parent is larger
  2. Repeat step 1 for every element, after which the first element of the array holds the maximum value
  3. Then swap the first and last elements and shrink the heap size by one, meaning the end of the array already holds the maximum and no longer needs to be compared
  4. Compare the left and right children, remember the index of the larger one, compare it with the parent, and swap if the child is larger; keep sinking downward
  5. Repeat steps 3 and 4 until the whole array is sorted

Here is the code to implement the algorithm

function heap(array) {
  checkArray(array);
  // Build a big root heap by inserting the elements one by one
  for (let i = 0; i < array.length; i++) {
    heapInsert(array, i);
  }
  let size = array.length;
  // Swap the first element (the maximum) with the last one
  swap(array, 0, --size);
  while (size > 0) {
    heapify(array, 0, size);
    swap(array, 0, --size);
  }
  return array;
}

function heapInsert(array, index) {
  // If the current node is larger than its parent, swap them
  while (array[index] > array[parseInt((index - 1) / 2)]) {
    swap(array, index, parseInt((index - 1) / 2));
    // Continue from the parent node
    index = parseInt((index - 1) / 2);
  }
}

function heapify(array, index, size) {
  let left = index * 2 + 1;
  while (left < size) {
    // Pick the larger of the left and right children
    let largest =
      left + 1 < size && array[left] < array[left + 1] ? left + 1 : left;
    // Compare that child with the parent
    largest = array[index] < array[largest] ? largest : index;
    if (largest === index) break;
    swap(array, index, largest);
    index = largest;
    left = index * 2 + 1;
  }
}

The code above sorts the array in ascending order by building a big root heap; to sort the other way, just reverse the node comparisons.

The time complexity of this algorithm is O(N * logN).

Built-in sort implementations

The internal implementation of sorting is different for each language.

For JS (in the V8 source), arrays longer than 10 elements are sorted with quicksort, and insertion sort is used otherwise. Insertion sort is chosen because, although its asymptotic complexity is poor, for small amounts of data it performs about as well as an O(N * logN) sort while having a very small constant factor, so it ends up faster.

For Java, the type of the internal element is also considered. For arrays of objects, a more stable algorithm is used. Stability means that relative order cannot be changed for the same value.
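
A quick illustration of stability (JavaScript's own Array.prototype.sort is also required to be stable since ES2019):

// 'a' and 'c' have the same age, so a stable sort keeps 'a' before 'c'
const users = [
  { name: 'a', age: 20 },
  { name: 'b', age: 18 },
  { name: 'c', age: 20 }
]
users.sort((x, y) => x.age - y.age)
console.log(users.map(u => u.name)) // ['b', 'a', 'c']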

Search binary trees

other

Introduce design patterns

The factory pattern

There are several types of factory patterns, which I will not cover here. Here is an example of a simple factory pattern

class Man {
  constructor(name) {
    this.name = name
  }
  alertName() {
    alert(this.name)
  }
}

class Factory {
  static create(name) {
    return new Man(name)
  }
}

Factory.create('yck').alertName()

Of course, the factory pattern is not just for new instances.

Imagine a scenario: there is a very complicated piece of code that users need to call, but they do not care about its complexity. They only need you to provide an interface; they pass the parameters, and how those parameters are used and what logic runs internally is not their concern, as long as an instance is returned in the end. This construction process is what the factory does.

What factories do is hide the complexity of creating an instance and provide an interface that is simple and clear.

In the Vue source code, you can also see the use of factory patterns, such as creating asynchronous components

export function createComponent (
  Ctor: Class<Component> | Function | Object | void,
  data: ?VNodeData,
  context: Component,
  children: ?Array<VNode>,
  tag?: string
): VNode | Array<VNode> | void {

  // Logic processing...

  const vnode = new VNode(
    `vue-component-${Ctor.cid}${name ? `-${name}` : ''}`,
    data, undefined, undefined, undefined, context,
    { Ctor, propsData, listeners, tag, children },
    asyncFactory
  )

  return vnode
}

In the above code, we can see that we can create an instance of the component simply by calling createComponent and passing in the parameters. However, creating the instance is a complicated process, and the factory helps us hide the complexity of this process by making only one code call to implement the function.

The singleton pattern

The singleton pattern is very common, such as global caching, global state management, and so on. You can use the singleton pattern with only one object.

The core of the singleton pattern is to ensure that only one instance is globally accessible. Since JS is a classless language, the way other languages implement singletons does not directly apply; we only need a variable to guarantee the instance is created once. Here is an example of how the singleton pattern can be implemented

class Singleton {
  constructor() {}
}

Singleton.getInstance = (function() {
  let instance
  return function() {
    if (!instance) {
      instance = new Singleton()
    }
    return instance
  }
})()

let s1 = Singleton.getInstance()
let s2 = Singleton.getInstance()
console.log(s1 === s2) // true

In the Vuex source code, you can also see the use of the singleton pattern, although it is implemented in a different way, with an external variable that controls the installation of Vuex only once

let Vue // bind on install

export function install (_Vue) {
  if (Vue && _Vue === Vue) {
    // ...
    return
  }
  Vue = _Vue
  applyMixin(Vue)
}

Adapter mode

An adapter is used to solve the incompatibility of two interfaces. It implements normal cooperation between two interfaces by wrapping a layer without changing the existing interface.

Here is an example of how to implement the adapter pattern

class Plug {
  getName() {
    return 'Hong Kong standard plug'
  }
}

class Target {
  constructor() {
    this.plug = new Plug()
  }
  getName() {
    return this.plug.getName() + ' adapted to a two-pin plug'
  }
}

let target = new Target()
target.getName() // Hong Kong standard plug adapted to a two-pin plug

In Vue, we actually use the adapter pattern a lot. For example, when a parent component passes a timestamp property to a child component, the component internally needs to convert the timestamp to a normal date display, which is typically done with computed, using adapter mode.
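
A small sketch of that computed-as-adapter idea; the component name and props here are made up:

// the parent passes a raw timestamp; the computed property adapts it for display
Vue.component('post-meta', {
  props: ['timestamp'],
  computed: {
    displayDate() {
      return new Date(this.timestamp).toLocaleDateString()
    }
  },
  template: `<span>{{ displayDate }}</span>`
})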

The decorator pattern

The decorator mode adds functionality to an object without changing an existing interface. Just like we often need to wear a protective cover for the mobile phone to prevent falling, we do not change the mobile phone itself, but add a protective cover for the mobile phone to provide anti-falling function.

Here is an example of how to implement the decorator pattern using the decorator syntax in ES7

function readonly(target, key, descriptor) {
  descriptor.writable = false
  return descriptor
}

class Test {
  @readonly
  name = 'yck'
}

let t = new Test()

t.name = '111' // Cannot be modified

In React, the decorator pattern is everywhere

import { connect } from 'react-redux'
class MyComponent extends React.Component {
    // ...
}
export default connect(mapStateToProps)(MyComponent)

The proxy pattern

A proxy is used to control access to an object and to prevent direct external access to the object. In real life, there are many surrogate scenarios. For example, if you need to buy a foreign product, you can buy the product through daigou.

There are many proxy scenarios in real code, and without taking the framework as an example, the event proxy uses the proxy pattern.

<ul id="ul">
    <li>1</li>
    <li>2</li>
    <li>3</li>
    <li>4</li>
    <li>5</li>
</ul>
<script>
    let ul = document.querySelector('#ul')
    ul.addEventListener('click', (event) => {
        console.log(event.target);
    })
</script>

Because there are too many li elements, binding an event to each one is impractical. Instead, you bind a single event to the parent node and let it act as a proxy to find the element that was actually clicked.

Publish and subscribe

The publish-subscribe pattern is also called the observer pattern. With one-to-one or one-to-many dependencies, subscribers are notified when an object changes. In real life, there are many similar scenarios. For example, I need to buy a product on a shopping website, but I find that the product is currently out of stock. At this time, I can click the stock notification button and ask the website to notify me by SMS when the product is in stock.

The publish-subscribe pattern is also common in real code, such as when we click a button to trigger a click event

<ul id="ul"></ul>
<script>
    let ul = document.querySelector('#ul')
    ul.addEventListener('click', (event) => {
        console.log(event.target);
    })
</script>
Copy the code

This pattern is also used in Vue to implement responsiveness. For objects that need to implement responsiveness, a dependency collection is performed during GET, and a dispatch update is triggered when an object’s properties change.

If you have any questions about how to implement responsiveness, you can read my previous article on how Vue responsiveness works

The last

If there are any other good topics you think you can contribute, you can also mention them in the comments

You may be wondering how I could write 25K words; in fact, many of these interview questions come from my starred open-source project, and the project address follows.

If you want to learn more front-end knowledge, interview skills or some of my personal insights, you can follow my public number to learn together
