Preface

The author just started a new job in January. The winter 2020 job-hunting season for programmers left a deep impression on me, so I decided to write this first post. Over the 40 days from December 2019 to January 2020, I interviewed with 13 companies in total, including Ant Financial (Shanghai), Shanghai Philoqi, outsourcing firms, and small companies. Without further ado, let's get to the questions.
Remarks ~ The author's education background: "double non" undergraduate (neither a 985 nor a 211 university); position: front end; years of experience: 1; job-hunting period: January 2020; interview location: Shanghai

Interview Questions

Q1. Please describe, in your own words, the differences between HTTP/2, HTTP/1.1, and HTTP/1.0


HTTP/1.0: The connection between the browser and the web server is short-lived. Each connection handles only one request and response; after establishing a connection with the web server, the client can fetch only a single web resource before the connection closes.

HTTP/1.1: Multiple HTTP requests and responses can be transmitted over one TCP connection, and the request/response cycles can overlap; more request and response headers were also added. After establishing a connection with the web server, a client can fetch multiple web resources over that single connection. HTTP/1.1 introduced persistent connections: multiple HTTP requests are carried over a single TCP connection that stays open until the browser or server explicitly closes it.

Regarding HTTP/2: browsers cap the number of concurrent requests per origin, and in the HTTP/1.x era each additional connection had to be set up and torn down, costing several RTTs, while TCP slow start made loading large files take even longer. HTTP/2 introduced multiplexing, which lets many requests share the same TCP connection and greatly speeds up page loading. It also supports header compression, further reducing the size of request data.


Q2. Besides ETag, what other caching mechanisms are there? Which of them belong to HTTP/1.0?


Expires, Cache-Control, Last-Modified, and ETag are the four common headers used for front-end caching. The Expires header dates from HTTP/1.0: it is a response header carrying a date-time that tells the client the resource (the response entity) may be cached until the specified moment.

The Cache-Control header was added in HTTP/1.1 to provide a finer-grained caching policy.

The browser sends a request carrying If-Modified-Since, and the server compares it with the file's Last-Modified time. If the times differ, the server returns 200 with the new content; otherwise it returns 304 and the browser serves the resource from its local disk cache.

An ETag is similar to a file fingerprint. In HTTP/1.1, the server generates a version identifier for the resource. The browser sends a request carrying If-None-Match, and the server compares it with the resource's current ETag. If they differ, the server returns the new content; otherwise it returns 304 and the browser serves the resource from its local disk cache. This is more precise than Last-Modified.


Q3. What are the common ways to detect a value's type? Why does typeof null output "object"? How do you correctly get null's type?


instanceof can correctly determine an object's type because its internal mechanism is to check whether the constructor's prototype can be found on the object's prototype chain. Then there is typeof: typeof null outputs "object" because the original implementation of JS targeted 32-bit systems and, for performance, stored a variable's type information in its low-order bits. A tag beginning with 000 denoted an object, and null was represented as all zeros, so it was misidentified as an object. To get a variable's correct type we can use Object.prototype.toString.call(xx), which returns a string of the form [object Type].
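A quick runnable comparison of the three approaches just described:

```javascript
// Reliable type detection via the internal [[Class]] tag.
function typeOf(x) {
  return Object.prototype.toString.call(x); // e.g. '[object Null]'
}

const results = {
  typeofNull: typeof null,              // 'object' — the historical tag-bits bug
  arrayInstanceof: [] instanceof Array, // true — Array.prototype found on the chain
  toStringNull: typeOf(null),           // '[object Null]' — the correct answer
  toStringArray: typeOf([])             // '[object Array]'
};
```

Note that instanceof works only on objects (and across same-realm prototypes), while Object.prototype.toString handles primitives, null, and undefined uniformly.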


Q4. How do you solve the problem of JS compilation being too slow?


Cache the compiled artifacts and serve them with random access. First, strip the security-related JS files out of the static file service and have a back-end web server output the JS content. The server maintains an array of a fixed length: each time the build tool finishes compiling a JS file, it sends the file's contents to the web server, which fills the array with the contents in the order received. When a user loads the page, the browser requests the JS from the web server, which picks one entry from the array at random and returns it.

Besides guaranteeing the randomness of the security JS, signature generation can also be done on the web server. When the build tool compiles the JS, it sends the compiled meta-information to the web server without generating a signature. When a user requests the JS file, a signature is generated in real time from the meta-information and embedded in the JS content. Since each generated signature is independent, replay requests can easily be identified and intercepted by checking how many times a signature has been used.
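The pool-and-random-pick idea above can be sketched in a few lines. Names here are illustrative, not the author's actual implementation:

```javascript
// Pool of compiled security-JS variants, filled by the build tool in order.
const pool = [];

// Called by the build pipeline each time a variant finishes compiling.
function onBuildComplete(compiledJs) {
  pool.push(compiledJs);
}

// Called per user request: return a random variant from the pool.
function serveSecurityJs() {
  return pool[Math.floor(Math.random() * pool.length)];
}
```

Because every page load may receive a different variant, reverse-engineering one copy tells an attacker little about the next response.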


Q5. As a front-end engineer, how do you handle front-end dynamic code protection?


Code security here generally refers to the security of front-end JavaScript code. Broadly, a piece of JavaScript code is considered safe if it can only run in a normal browser, cannot produce results when executed in a non-browser runtime environment, and cannot be translated into equivalent code in another programming language. Front-end code protection mainly takes two forms: obfuscation based on syntax-tree transformation, and virtual-machine protection, which builds a private execution environment; Google's approach belongs to the latter. The effectiveness of obfuscation varies with the obfuscator: code produced by basic obfuscation is still easy to reverse, whereas virtual-machine protection works well. Its principle is to build a virtual machine on top of the JS execution environment and translate all of the original business-logic JS into bytecode that only that virtual machine can interpret, which yields good complexity.


Q6. How do you weigh page performance?


Front-end page performance has always been a topic front-end engineers cannot avoid. A general and effective optimization is to set caching sensibly for the resource files in the page. Typically, for a well-modularized project built with a mature bundler, the cache policy for the entry HTML is set to Cache-Control: no-cache, while resource files such as JS/CSS/images get a longer cache time. However, if the JS file responsible for data protection contains dynamically generated logic, it can no longer be cached; otherwise, once the cache duration is mismanaged, all kinds of data-decryption failures will occur. For front-end security, the data-protection logic is therefore separated from the rest of the project's JavaScript and either compiled inline into the HTML page or compiled into a standalone JS file with its own Cache-Control: no-cache response header. That JS can communicate with the other JS via global variables, postMessage, and so on.


Q7. Hand-write a common obfuscation method.


function foo(x) {
  x = String.fromCharCode.apply(null, x.split('').map(i => i.charCodeAt(0) + 23));
  return btoa(x);
}
function bar(y) {
  y = String.fromCharCode.apply(null, y.split('').reverse().map(i => i.charCodeAt(0) + 13));
  return btoa(y);
}


Q8. How do two browser tabs communicate?


Common approaches are as follows. Broadcast mode: BroadcastChannel / Service Worker / LocalStorage + StorageEvent. Shared-storage mode: SharedWorker / IndexedDB / cookie. Direct-reference mode: window.open + window.opener. Server-based: WebSocket / Comet / SSE, etc.

For non-same-origin pages, communication can be bridged by embedding a same-origin iframe in each page as a "bridge", converting the problem into same-origin communication.
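A minimal sketch of the broadcast mode, assuming a runtime that provides the BroadcastChannel API (modern browsers, or Node 18+ for experimenting); the channel name is an example, and the two channel instances below stand in for two tabs:

```javascript
// Each tab opens a channel with the same name; here two instances simulate two tabs.
const tabA = new BroadcastChannel('app-bus');
const tabB = new BroadcastChannel('app-bus');

const received = [];
tabB.onmessage = (e) => {
  received.push(e.data); // every *other* tab on the channel gets the message
  tabA.close();          // close channels when done so the runtime can exit
  tabB.close();
};

// Posting from "tab A" notifies all other tabs, but not the sender itself.
tabA.postMessage({ type: 'login', user: 'alice' });
```

Compared with LocalStorage + StorageEvent, BroadcastChannel carries structured-cloneable objects directly and does not pollute persistent storage.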


Q9. What are processes and threads? Why does a browser sometimes have one page get stuck and eventually crash, taking all the other pages down with it?


Chrome currently has a multi-process browser architecture, and its default strategy is one rendering process per tab. However, if a new page is opened from a page that belongs to the same site as the current page, it reuses the parent page's rendering process. This default policy is officially called process-per-site-instance. Simply put, pages belonging to the same site are assigned to one rendering process, so if one of those pages crashes, the other pages of the same site crash with it, because they share that rendering process.


Q10. What is XSS, and how do you prevent XSS attacks?


The definition of XSS is omitted here. Preventive measures: 1. Use a template engine. 2. Avoid inline events. 3. Use CSP, input-length limits, and interface-level security measures to raise the difficulty of an attack and reduce its consequences. 4. Use XSS attack strings and automated scanning tools to find potential XSS vulnerabilities.


Q11. What do you know about CSRF, and if you were responsible for specifying an escape library for web security, what rules would you need to consider?


CSRF defense needs to be considered from at least three angles. Automatic defense policies: origin detection (verifying the Origin and Referer headers). Active defense measures: token validation, double-submit cookies, and the SameSite cookie attribute. Also ensure pages are idempotent, and that back-end GET interfaces never perform user state-changing operations. For an escape library, you need to consider at least: HTML attributes, HTML text content, HTML comments, jump links (URLs), inline JavaScript strings, inline CSS stylesheets, and so on.
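For the HTML-text-content case, a minimal entity-escape helper of the kind such a library would provide might look like this (a sketch only; a real escape library handles each of the contexts listed above with its own rules):

```javascript
// Escape the five characters that are dangerous in HTML text content.
function escapeHtml(str) {
  return str.replace(/[&<>"']/g, (ch) => ({
    '&': '&amp;',
    '<': '&lt;',
    '>': '&gt;',
    '"': '&quot;',
    "'": '&#39;'
  }[ch]));
}
```

Crucially, this table is only safe for text-node context; attribute values, URLs, and inline scripts each need their own escaping rules, which is exactly why specifying an escape library means enumerating contexts.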


Q12. What are the browser's macrotasks and microtasks, and what is their execution order?


Microtasks and macrotasks are bound together: each macrotask creates its own microtask queue as it executes. The execution time of the microtasks therefore extends the duration of their macrotask. For example, if 100 microtasks are generated during a macrotask and each takes 10 milliseconds, those microtasks add 1000 milliseconds to the macrotask's execution time. So when writing code, it is important to keep microtasks short. If, inside a macrotask, you schedule both a macrotask and a microtask as callbacks, the microtask always executes before the macrotask.

Microtasks include process.nextTick, Promise callbacks, Object.observe (deprecated), and MutationObserver. Macrotasks include script execution, setTimeout, setInterval, setImmediate, I/O, and UI rendering. The order per turn of the event loop is: 1. Execute the synchronous code, which belongs to the current macrotask. 2. When the call stack empties, execute all queued microtasks. 3. Render the UI if necessary. 4. Start the next round of the event loop, executing the asynchronous code of the next macrotask.
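The ordering above can be demonstrated in a few lines (runnable in Node or a browser; the labels are illustrative):

```javascript
const order = [];

setTimeout(() => order.push('macrotask: setTimeout'), 0);       // queued as a macrotask
Promise.resolve().then(() => order.push('microtask: promise')); // queued as a microtask
order.push('sync'); // runs first, as part of the current macrotask

// After the sync code ends, all microtasks drain before the next macrotask runs:
setTimeout(() => console.log(order.join(' -> ')), 10);
// logs: sync -> microtask: promise -> macrotask: setTimeout
```

Even with a 0 ms delay, the setTimeout callback must wait for the microtask queue to empty, which is exactly the "microtask before macrotask" rule.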


Q13. How does V8 garbage collection work?


V8 implements precise GC, and its GC algorithm uses a generational garbage collection mechanism. The Scavenge algorithm is used to collect objects in the new generation. The new-generation memory space is divided into two halves, a From space and a To space; at any moment one of the two is in use and the other is free. Newly allocated objects are placed in the From space, and when the From space fills up, a new-generation GC starts. The algorithm checks the From space for surviving objects, copies them into the To space, and destroys the dead objects. When copying completes, the From space and To space are swapped, and the GC is finished.


Q14. In V8's old-generation GC, under what circumstances does the mark-sweep algorithm start first?


Mark-sweep starts first when a space has no free chunks left to allocate from, when the objects in the space exceed a certain limit, or when the space cannot guarantee room for objects promoted from the new generation. The marking process is the same for the new and old generations; afterwards, the new generation moves the surviving data into the free half, while the old generation adds the dead objects to a free list.


Q15. JSC is register-based and V8 is stack-based — can you tell me the difference between the two?


First, distinguish the two terms: the parser parses source text, while the interpreter executes it. Compilers and interpreters exist because machines cannot directly understand the code we write, so it must be "translated" into machine language the machine can read before the program executes. Languages are commonly divided into compiled and interpreted languages according to their execution flow. JavaScriptCore has been "register-based" since the SquirrelFish version, while V8 fits neither the description "stack-based" nor "register-based."

Many sources say that Python, Ruby, and JavaScript are "interpreted languages" implemented through an interpreter. This is misleading: a language typically defines its abstract semantics and does not impose an implementation. For example, C is generally considered a "compiled language," but interpreters for C do exist, such as Ch; similarly, C++ has interpreter implementations, such as Cint. What is commonly called an "interpreted language" is simply a language whose primary implementations are interpreters, which does not mean it cannot be compiled. Scheme, for example, often considered an "interpreted language," has several compiler implementations. One of the first to support much of the R6RS specification was Ikarus, which compiles Scheme on x86; instead of generating some kind of virtual-machine bytecode, it emits x86 machine code directly. In fact, many interpreters are internally implemented as "compiler + virtual machine": the compiler converts the source code into an AST or bytecode, and the virtual machine performs the actual execution. An "interpreted language" doesn't mean no compilation happens, only that the user never has to invoke a compiler explicitly to obtain executable code.

V8 is a JavaScript engine that compiles JavaScript directly to machine code, without an intermediate bytecode representation. It has a concept of virtual registers internally, but that is just a normal part of an ordinary native compiler, and I don't think it should be described as either "stack-based" or "register-based." Internally, V8 also uses an "evaluation stack" (or "expression stack") to simplify code generation, performs "abstract interpretation" during compilation, and uses so-called "virtual stack frames" to record the state of local variables and the evaluation stack. But when code is actually generated, peephole optimization eliminates redundant push/pop pairs and turns many evaluation-stack operations into register operations to improve code quality, so the resulting code does not look like stack-based code.

As for the interviewer's question about which is faster, a stack-based or a register-based architecture: if you look at actual processors today, most are register-based, and register-based architectures generally outperform stack-based ones.

In the case of a VM, the evaluation stack or registers of the source architecture may be modeled in the actual machine's memory, so the performance characteristics differ slightly from real hardware. Register-based architectures are also considered faster for VMs because, although zero-address (stack) instructions are more compact, they require more load/store instructions to complete an operation, which means more instruction dispatches and memory accesses. Memory access is a major bottleneck for execution speed; two- and three-address instructions each occupy more space, but in general complete operations with fewer instructions, so there are fewer instruction dispatches and memory accesses.


Q16. How do you detect JavaScript memory leaks in your day-to-day work?


1. Generally, you first sense that a long-running page has become sluggish and suspect a memory leak. Use tools such as DynaTrace (for IE) or the DevTools Profiles panel to collect data over a period of time and observe object usage, then determine whether there is a memory leak. 2. To avoid leaks in daily work: make sure unused temporary variables are set to null, and, now that ES6 is widespread, using fewer closures also helps.
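A classic leak pattern worth recognizing in profiles is a forgotten timer keeping a closure (and everything it captures) alive. A minimal sketch, with example sizes:

```javascript
// The interval's callback closes over hugeData, so hugeData can never be
// collected while the interval is alive — a leak in a long-running page.
let hugeData = new Array(1e6).fill('leak');
const timer = setInterval(() => {
  void hugeData.length; // keeps the closure (and hugeData) reachable
}, 1000);

// The fix: clear the timer and drop the reference when it is no longer needed.
clearInterval(timer);
hugeData = null;
```

In a heap snapshot this shows up as retained arrays whose retaining path goes through a timer callback; clearing the timer breaks the path.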


Q17. How do you understand that JS is single-threaded?


All JS code executes on a single thread. Unlike languages such as Objective-C or Java, whose execution environments let you spawn multiple threads to handle time-consuming tasks and keep the main thread unblocked, JS code itself has no ability to run tasks on multiple threads. So why does JS still appear asynchronous and "multithreaded"? A powerful event-driven mechanism — the event loop, with the host environment doing work in the background — is what gives single-threaded JS its asynchronous capabilities.


Q18. Do you think the security sandbox can prevent XSS or CSRF attacks? Why is that?


The wall that separates the rendering process from the operating system is the security sandbox we are talking about. A security sandbox cannot prevent XSS or CSRF attacks. Its purpose is to isolate the rendering process from the operating system so that the renderer has no direct access to it, whereas XSS and CSRF mainly exploit network resources to obtain user information, which has nothing to do with operating-system access.


Q19. How does instanceof determine the type of an object? Write an instanceof method


Instanceof can correctly determine the type of the object, because the internal mechanism is to determine whether the prototype of the type can be found in the prototype chain of the object.

function myInstanceof(left, right) {
    // Get the prototype of the constructor
    let prototype = right.prototype
    // Walk up the object's prototype chain
    left = left.__proto__
    while (true) {
        if (left === null)
            return false
        if (prototype === left)
            return true
        left = left.__proto__
    }
}


Q20. Talk about Vue's data hijacking, and hand-write a simple data-hijacking example.


In simple terms, data hijacking means using Object.defineProperty() to hijack the setter and getter operations of object properties. Vue 2 uses Object.defineProperty to hijack getter and setter access on the data object, which lets the bound template modules update dynamically when data is accessed or assigned. Vue 3 adopts Proxy instead: Proxy can monitor array changes effortlessly, without the various hacks — it even detects length changes automatically.

let onWatch = (obj, setBind, getLogger) => {
    let handler = {
        get(target, property, receiver) {
            getLogger(target, property)
            return Reflect.get(target, property, receiver)
        },
        set(target, property, value, receiver) {
            setBind(value)
            return Reflect.set(target, property, value)
        }
    }
    return new Proxy(obj, handler)
}
let obj = { a: 1 }
let value
let p = onWatch(obj, (v) => {
    value = v
}, (target, property) => {
    console.log(`Get '${property}' = ${target[property]}`)
})


Q21. What is a Service Worker and have you ever used it?


A Service Worker essentially acts as a proxy server between a web application and the browser, and can also act as a proxy between the browser and the network when the network is available. Among other things, it enables an effective offline experience by intercepting network requests and taking appropriate action based on whether the network is available and whether updated resources reside on the server. It also allows access to push notifications and the Background Sync API. Service Workers can be used to cache files and improve first-screen load speed.


Q22. How do you understand Redux's asynchronous-flow middleware — what is it?


The core concepts of Redux are clear: a single source of truth, immutable state, and pure functions that produce new state. The main flow is: old store -> user-triggered action (from the view) -> reducer -> new state -> view. Redux has a global repository, the store, which holds the state of the entire application; the only way to modify the store is for the user to trigger an action and dispatch it. The dispatch function internally calls the reducer, which returns a brand-new state that updates our store (replacing the old one). When the store updates, the view re-renders.
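That dispatch loop can be sketched with a hand-rolled minimal store — not the real Redux source, just the idea it implements:

```javascript
// A minimal store: state is replaced (never mutated) by the pure reducer.
function createStore(reducer, initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    dispatch(action) {
      state = reducer(state, action);   // reducer returns a brand-new state
      listeners.forEach((l) => l());    // notify the view layer
    },
    subscribe: (l) => listeners.push(l)
  };
}

// An example pure reducer.
const counter = (state = 0, action) =>
  action.type === 'INCREMENT' ? state + 1 : state;
```

Usage mirrors the flow in the text: `dispatch({ type: 'INCREMENT' })` runs the reducer and then notifies subscribers, which is where a view would re-render.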

While Redux itself can only handle synchronous flows, the author of Redux (Dan Abramov, @dan_abramov) left asynchronous handling to middleware of the developer's choice, such as redux-thunk and redux-saga.


Q23. What is the role of the redux-thunk middleware, and what should we pay attention to in everyday use?


The redux-thunk middleware works by checking whether an action is a function (one that often returns a promise internally). If it is, it is called immediately; inside that function the action can run an asynchronous flow and then continue processing the resulting data via dispatch(action). If the action is not a function, it is passed to the next middleware via next(action), or on to the reducer. The core idea of redux-thunk is to extend the action so that it can be a function instead of a plain object. redux-thunk makes asynchronous handling easy and works with async/await to sequence actions. However, with redux-thunk actions become more complex and later maintainability decreases. Also, when multiple people collaborate and your action calls someone else's action, you must actively update yours whenever theirs changes.

// The simplified core
function createThunkMiddleware(extraArgument) {
  return ({ dispatch, getState }) => next => action => {
    if (typeof action === 'function') {
      return action(dispatch, getState, extraArgument);
    }
    return next(action);
  };
}
const thunk = createThunkMiddleware();
thunk.withExtraArgument = createThunkMiddleware;

export default thunk;


Q24. What is the redux-saga middleware used for?


Redux-saga is another middleware for managing the asynchronous operations of Redux applications. It can be used instead of redux-thunk: by creating sagas, all the asynchronous logic is collected in one place for centralized handling. In contrast with other async-flow middleware, redux-saga puts the asynchronous handling elsewhere and leaves actions untouched — actions stay plain and synchronous. Its asynchronous control flow is also powerful; for example, races are handled via takeLatest(). Redux-saga is a library for managing application side effects (such as fetching data asynchronously or accessing the browser cache) with the goal of making side effects easier to manage, more efficient to execute, easier to test, and easier to handle on failure. You can imagine a saga as a separate thread in your application that is solely responsible for side effects. redux-saga is a Redux middleware, which means this "thread" can be started, paused, and cancelled from the main application with normal Redux actions; it has access to the full Redux state and can itself dispatch Redux actions.


Q25. How does Redux update data?


If we want to update the value of a node in the state tree, we first generate a completely new object for that node. All objects on the path to it are replaced with updated copies, while the other nodes are left unchanged, working from top to bottom, layer by layer. The state data cannot be modified directly: to modify it, you must first copy it, whether shallowly or deeply, and then apply the change to the copy.


Q26. Why does React Redux need immutable data?


For performance optimization. When the store changes, we need to notify all components that should update. Every change is triggered by an action, which produces a new state on top of the old one; the new state and the old state are two completely different objects. So when the new state is not the same object as the old state, we know the store has changed. We don't need to compare values at all — comparing the two references tells us whether anything changed, which achieves a performance optimization.

This also shows that everything in a Redux store is immutable data — every node is immutable. So when a component is bound to a node, we can simply check whether that node's reference is the same before and after to know whether the current store has changed, and then decide whether to update the component, instead of traversing and comparing every value; comparing references is enough. It also makes debugging and tracing easier.
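The reference-equality argument above is easy to see in plain JavaScript (the state shape here is an example):

```javascript
const oldState = {
  todos: [{ text: 'write tests' }],
  ui: { theme: 'dark' } // a branch we will NOT touch
};

// Immutable update: copy only the changed path, reuse every untouched branch.
const newState = {
  ...oldState,
  todos: [...oldState.todos, { text: 'ship it' }]
};

const storeChanged = newState !== oldState;    // true  — cheap O(1) check
const uiChanged = newState.ui !== oldState.ui; // false — same reference, skip re-render
```

A component bound to `ui` sees an identical reference and can bail out of re-rendering without ever inspecting the values inside.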


Q27. After the user enters a URL in the browser and presses Enter, how does the browser render the page?


(1) Parse the HTML and build the DOM tree; (2) parse the CSS and build the CSSOM tree; (3) merge the DOM and CSSOM into a render tree; (4) lay out the render tree, computing the position of each node; (5) paint by invoking the GPU, composite the layers, and, through the layout and rendering interfaces of the specific WebKit ports, render the tree and output it to the screen as the final web page presented to the user.


Q28. What are debouncing and throttling? Write a simple debounce function.


Debouncing: merge events that fire very frequently (such as keystrokes) into a single execution — after a burst of continuous operations ends, the callback runs once; it is implemented with clearTimeout and setTimeout. Throttling: execute at a steady rate, at most once every X milliseconds; it is used to improve performance for high-frequency events that should run only once per interval during a continuous operation. In short, debouncing focuses on continuous triggering over a period and executes only once at the end, while throttling focuses on executing only once per interval.

_.debounce = function(func, wait, immediate) {
  var timeout, args, context, timestamp, result;

  var later = function() {
    var last = _.now() - timestamp;
    // The wrapped function was called again less than `wait` ms ago
    if (last < wait && last > 0) {
      timeout = setTimeout(later, wait - last);
    } else {
      timeout = null;
      // If immediate === true, the leading edge has already fired
      if (!immediate) {
        result = func.apply(context, args);
        if (!timeout) context = args = null;
      }
    }
  };

  return function() {
    context = this;
    args = arguments;
    timestamp = _.now();
    var callNow = immediate && !timeout;
    // If no timer exists yet, start one
    if (!timeout) timeout = setTimeout(later, wait);
    if (callNow) {
      result = func.apply(context, args);
      context = args = null;
    }

    return result;
  };
};
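The debounce above covers one half of the question; for comparison, a minimal timestamp-based throttle might look like this (a sketch, not the underscore implementation):

```javascript
// Run func at most once every `wait` milliseconds; extra calls are dropped.
function throttle(func, wait) {
  let previous = 0;
  return function (...args) {
    const now = Date.now();
    if (now - previous >= wait) {
      previous = now;
      return func.apply(this, args);
    }
  };
}
```

Tying it back to the definitions: a debounced scroll handler fires once after scrolling stops, while this throttled version fires steadily while scrolling continues.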


Q29. Implement add so that add(1)(2)(3) = 6 and add(1, 2, 3)(4) = 10.


Currying is the technique of transforming a function that takes multiple arguments into a function that takes a single argument (the first argument of the original function) and returns a new function that takes the remaining arguments and returns the result.

function add(){
    var args = [].slice.call(arguments);
    var fn = function(){
        var newArgs = args.concat([].slice.call(arguments));
        return add.apply(null,newArgs);
    } 
    fn.toString = function() {
        return args.reduce(function(a, b) {
            return a + b;
        })
    }
    return fn;
}
// add can accept any number of parameters
// add(1)(2, 3) // 6
// add(1)(2)(3)(4)(5) // 15


Q30. What are the differences between CommonJS modules and ES6 modules?


The former supports dynamic paths, i.e. require(`${path}/xx.js`); the latter's import statements are static.

The former is a synchronous import: since, for the server, the files are local, synchronous loading barely blocks the main thread. The latter is an asynchronous import: for the browser, files must be downloaded, and a synchronous import would badly affect rendering.

The former exports a copy of the value: even if the exported value later changes, the imported value does not, so to pick up a new value you must import again. The latter uses live bindings: the import and the export point to the same memory address, so the imported value changes along with the exported one.
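The copy-versus-live-binding point can be simulated in a single file. The IIFE below stands in for a CommonJS module whose export object was built at require time — an illustration only, not real module loading:

```javascript
// Simulated CommonJS module: `count` is a primitive copied onto the export
// object when the module body runs, just like `module.exports = { count }`.
const counterModule = (function () {
  let count = 0;
  return {
    count,                         // copied value — frozen at "require" time
    increment() { count++; },      // mutates the module's internal variable
    getCount() { return count; }   // live read of the internal variable
  };
})();

counterModule.increment();
// counterModule.count is still 0; only getCount() sees the new value.
```

With ES6 modules, by contrast, an imported `count` binding would itself observe the increment, because import and export share one binding rather than a copied value.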

The latter is often compiled down to require/exports before execution (e.g. by Babel).


Q31. What do you know about the MVVM pattern?


MVVM consists of the following three elements

View: interface;

Model: data Model;

ViewModel: serves as a bridge between View and Model;

In the jQuery era, refreshing the UI meant first finding the corresponding DOM node and then updating it, which strongly coupled data and business logic to the page. In MVVM, the UI is data-driven: when the data changes, the corresponding UI refreshes, and when the UI changes, the corresponding data changes too. Business code therefore only needs to care about the flow of data, without touching the page directly. The ViewModel only cares about data and business processing, not about how the View handles the data. In this way the View and the Model are decoupled — a change on one side does not necessarily require a change on the other — and reusable logic can be placed in a ViewModel shared by multiple Views. At the core of MVVM is two-way data binding, e.g. Angular's dirty checking and Vue's data hijacking.


Q32. When you were responsible for developing H5 campaign marketing pages, how did you design and manage them?


When developing an H5 marketing page, you can use a game engine such as Egret, plain Canvas, or frame animations, tile maps, and so on. When a page contains complex scenes, each scene needs the basic lifecycle functions: enter scene, exit scene, re-enter scene, clean up scene, and update scene. Most games basically work this way, so you need to hand-write a scene manager to make scenes reusable.

The more conservative approach is to keep responsibilities separated: UI code goes to a UI manager, scenes go to a scene manager, and a switching class manages scene transitions. The UI layer also needs an LRU-style strategy: judging by usage frequency, if a UI's resources have not been used for, say, 15 minutes, we can consider them infrequently used and release their atlases from memory. All this needs is a timer that records the time each panel is opened, and the traversal only covers closed panels, since open panels neither need cleanup nor need checking. Most UI memory is taken up by atlases, so evicting the atlases of low-frequency UIs saves a great deal of memory and reduces overall memory pressure; in large games, UI resources account for the largest share of the game's size.

The H5 game engines we usually use release a UI automatically as soon as it is closed by default. This increases CPU load (and device heat): memory is reclaimed aggressively, but everything must be rebuilt on reopen, so the user experience suffers. Therefore, the UI manager we build here does not manage UI release at all; callers just pass a panel name to the manager's open and close methods.

As for UI release, you can design a mechanism that automatically frees the resources of any UI that has not been opened again within 15 minutes, except those of the current scene (roughly mimicking a GC). The 15-minute counter tracks the usage of closed panels: every time a panel is opened, its count increases. There is no need to record separately how many times each panel was opened before versus now; just increment on open and keep a running total per panel. At each 15-minute check, a panel at or above the frequency you set is considered frequently used; below that frequency, it is a low-frequency panel UI and can be cleaned up after the 15 minutes. One caveat: watch out for resources shared across scenes. For example, if sceneMenu uses a resource that belongs to sceneBoot but that panel's UI has been cleaned up, then when the player switches back to sceneBoot there is no UI, and it has to be reloaded.
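The cleanup rule just described can be sketched as a small pure function. This is a hypothetical helper — the panel shape and threshold are example values, not the author's actual code:

```javascript
// panels: [{ name, isOpen, openCount }] — openCount accumulates over the
// 15-minute window. Closed panels below the usage threshold get released.
function panelsToRelease(panels, minUses) {
  return panels
    .filter((p) => !p.isOpen && p.openCount < minUses) // open panels are never checked
    .map((p) => p.name);
}
```

A timer would call this every 15 minutes, release the atlases of the returned panels, and reset the counters for the next window.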


Q33. What is the MVP pattern, and what is the relationship between MVVM and MVP?


MVP here does not mean "most valuable player" as in LOL. In MVP, the View (an Activity, for example) is responsible for drawing the UI and interacting with the user. The View does not interact with the Model directly; it reaches the Model indirectly through the Presenter, and the Presenter talks to the View through an interface. In MVVM, the Presenter is renamed ViewModel, and the pattern is basically the same as MVP. The key difference is data binding: changes to the View are automatically reflected in the ViewModel and vice versa, so the developer does not have to wire up event handling and View updates by hand; the framework does it.
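As a minimal, framework-free sketch of the MVP wiring (hypothetical names, not any particular platform's API): the Presenter reaches the Model on the View's behalf and talks back through a small View interface, so the View can be swapped for a mock in tests:

```javascript
// Pretend data source standing in for the Model
var model = {
  load: function () { return { name: 'ada' }; }
};

function Presenter(view) {
  this.view = view; // anything implementing showUser(user)
}
Presenter.prototype.onViewReady = function () {
  var user = model.load();  // the Presenter mediates all Model access
  this.view.showUser(user); // and pushes results through the View interface
};

// A mock View, as an Activity/page would implement it
var shown = null;
var view = { showUser: function (user) { shown = user.name; } };
new Presenter(view).onViewReady();
// shown === 'ada'
```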


Q34. Whether in mini programs or in Vue, you often run into template parsing. Can you write a simple htmlParse parser?


// Convert an HTML template string into an AST object
function parse(template) {
  var currentParent;   // the current parent node
  var root;            // the root of the AST
  var stack = [];      // all parsed elements
  var startStack = []; // start-tag stack
  var endStack = [];   // end-tag stack

  parseHTML(template, {
    // tag name, attrs, whether this is an end tag, text start/end positions, node type
    start: function start(tagName, attrs, unary, start, end, type, text) {
      var element = {           // the object we want
        tag: tagName,
        attrsList: attrs,
        parent: currentParent,  // record the parent object
        type: type,
        children: []
      };
      if (!root) {
        root = element;
      }
      if (currentParent && !unary) { // has a parent and is not an end tag
        currentParent.children.push(element);
        element.parent = currentParent;
      }
      if (!unary) {                  // not an end tag
        if (type === 1) {
          currentParent = element;   // the new start tag becomes the current parent
          startStack.push(element);
        }
        stack.push(element);         // push onto the overall stack
      } else {
        endStack.push(element);
        // an end tag closes the most recently opened start tag, so pop it and
        // revert the current parent to that tag's parent
        var opened = startStack.pop();
        currentParent = opened && opened.parent;
      }
    },
    end: function end() {},
    chars: function chars() {}
  });
  console.log(root, "root");
  return root;
}

// Regular expressions for parsing tags and attributes
var singleAttrIdentifier = /([^\s"'<>\/=]+)/;
var singleAttrAssign = /(?:=)/;
var singleAttrValues = [
  // attr value, double quotes
  /"([^"]*)"+/.source,
  // attr value, single quotes
  /'([^']*)'+/.source,
  // attr value, no quotes
  /([^\s"'=<>`]+)/.source
];
var attribute = new RegExp(
  '^\\s*' + singleAttrIdentifier.source +
  '(?:\\s*(' + singleAttrAssign.source + ')' +
  '\\s*(?:' + singleAttrValues.join('|') + '))?'
);
// could use https://www.w3.org/TR/1999/REC-xml-names-19990114/#NT-QName
// but for Vue templates we can enforce a simple charset
var ncname = '[a-zA-Z_][\\w\\-\\.]*';
var qnameCapture = '((?:' + ncname + '\\:)?' + ncname + ')';
var startTagOpen = new RegExp('^<' + qnameCapture);
var startTagClose = /^\s*(\/?)>/;
var endTag = new RegExp('^<\\/' + qnameCapture + '[^>]*>');
var doctype = /^<!DOCTYPE [^>]+>/i;
var comment = /^<!--/;
var conditionalComment = /^<!\[/;
// being lazy, I took the regexes above from Vue and will study them later;
// the two below are simplified ones I wrote, differing a little from Vue's originals
var varText = new RegExp('{{' + ncname + '}}');
var space = /^\s/;
var checkLine = /^[\r\n]/;

/*
 * type 1: common tags
 * type 2: code ({{ }} expressions)
 * type 3: plain text
 */
function parseHTML(html, options) {
  var stack = [];
  var index = 0;
  var last; // used for comparison: if last === html and nothing matched, the while loop is done
  var isUnaryTag = false;

    while (html) {
      last = html;
      var textEnd = html.indexOf('<');
      if (textEnd === 0) {
        var endTagMatch = html.match(endTag); // match an end tag
        if (endTagMatch) {
          console.log(endTagMatch, "endTagMatch");
          isUnaryTag = true;
          var start = index;
          // consume the match and update index, preparing for the next round
          advance(endTagMatch[0].length);
          options.start(null, null, isUnaryTag, start, index, 1);
          continue;
        }
        var startMatch = parseStartTag();
        if (startMatch) {
          parseStartHandler(startMatch);
          console.log(stack, "startMatch");
          continue;
        }
      }
      if (html === last) {
        console.log(html, "html");
        break;
      }
    }

    function advance(n) {
      index += n;
      html = html.substring(n);
    }

    // Handle the start tag: produce a match object containing the tag name and raw attrs
    function parseStartTag() {
      var start = html.match(startTagOpen);
      if (start) {
        var match = {
          tagName: start[1], // tag name (e.g. div)
          attrs: [],         // attributes
          start: index       // cursor index (initially 0)
        };
        advance(start[0].length);
        var end, attr;
        while (!(end = html.match(startTagClose)) && (attr = html.match(attribute))) {
          // found an attribute
          advance(attr[0].length);
          match.attrs.push(attr);
        }
        if (end) {
          advance(end[0].length);
          match.end = index; // index is defined in parseHTML and bumped in advance
          return match;      // the match object holds tagName, attrs, start and end positions
        }
      }
    }

    // Process the match a second time to build the object pushed onto the stack
    function parseStartHandler(match) {
      var _attrs = new Array(match.attrs.length);
      for (var i = 0, len = _attrs.length; i < len; i++) {
        var args = match.attrs[i];
        var value = args[3] || args[4] || args[5] || '';
        _attrs[i] = {
          name: args[1],
          value: value
        };
      }
      stack.push({
        tag: match.tagName,
        type: 1,
        lowerCasedTag: match.tagName.toLowerCase(),
        attrs: _attrs
      });
      // the match for this start tag is done
      options.start(match.tagName, _attrs, false, match.start, match.end, 1);
    }
  }
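As a quick sanity check, the attribute regex above can be assembled standalone and run against a raw attribute string:

```javascript
// The attribute regex, reassembled on its own: group 1 is the attribute name,
// groups 3/4/5 are the double-quoted, single-quoted, and unquoted values.
var singleAttrIdentifier = /([^\s"'<>\/=]+)/;
var singleAttrAssign = /(?:=)/;
var singleAttrValues = [
  /"([^"]*)"+/.source,
  /'([^']*)'+/.source,
  /([^\s"'=<>`]+)/.source
];
var attribute = new RegExp(
  '^\\s*' + singleAttrIdentifier.source +
  '(?:\\s*(' + singleAttrAssign.source + ')' +
  '\\s*(?:' + singleAttrValues.join('|') + '))?'
);
var m = ' id="app">'.match(attribute);
// m[1] === 'id', m[3] === 'app'
```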


Q35. What is virtual DOM and what is the difference between virtual DOM and real DOM?


The virtual DOM mirrors operations on the DOM, but abstracts the DOM tree into a data object: instead of manipulating the DOM directly, we manipulate this object and use it to drive rendering of the real DOM tree. On every view render, a patch step works out the optimal set of changes, so we modify the real DOM as little as possible. The virtual DOM also simplifies DOM manipulation and makes the relationship between data and DOM more direct and intuitive.

Modifying the virtual DOM does not itself trigger layout or repaint;

Frequent modifications hit the virtual DOM first; a diff algorithm then works out which parts of the real DOM actually need to change and applies them in one batch, so layout and repaint happen once in the real DOM and the cost of re-laying-out and repainting many DOM nodes is reduced.

Frequent layout and repaint of the real DOM is quite inefficient;

The virtual DOM effectively reduces large-area repaints and re-layouts of real DOM nodes, because diffing against the previous tree lets us render only the changed parts, at the cost of the virtual DOM's own computation. jQuery is a selector-oriented library, while Vue and React are MV* frameworks with a clear layered architecture, in which the virtual DOM is one module of the framework: the core and most important one.
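To make the idea concrete, here is a naive sketch (not Vue's or React's actual implementation) in which a vnode is plain data and a diff produces a patch list instead of touching the DOM:

```javascript
// A vnode is just data
function h(tag, props, children) {
  return { tag: tag, props: props || {}, children: children || [] };
}

// Naive recursive diff: collect patches instead of mutating the real DOM
function diff(oldNode, newNode, patches) {
  patches = patches || [];
  if (!oldNode) {
    patches.push({ type: 'CREATE', node: newNode });
  } else if (!newNode) {
    patches.push({ type: 'REMOVE', node: oldNode });
  } else if (oldNode.tag !== newNode.tag) {
    // different tags are assumed to mean different subtrees: replace wholesale
    patches.push({ type: 'REPLACE', node: newNode });
  } else {
    if (JSON.stringify(oldNode.props) !== JSON.stringify(newNode.props)) {
      patches.push({ type: 'PROPS', node: newNode });
    }
    var len = Math.max(oldNode.children.length, newNode.children.length);
    for (var i = 0; i < len; i++) {
      diff(oldNode.children[i], newNode.children[i], patches);
    }
  }
  return patches;
}

var prev = h('ul', null, [h('li', { id: 'a' }), h('li', { id: 'b' })]);
var next = h('ul', null, [h('li', { id: 'a' }), h('li', { id: 'c' })]);
var patches = diff(prev, next);
// one PROPS patch, for the li whose id changed
```

A real patch step would then apply this list to the real DOM in one pass, which is where the "modify as little as possible" benefit comes from.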


Q36. The standard tree diff algorithm is O(n^3). What did the FB team do to reduce the time complexity to linear?


① Two components of the same type produce similar DOM structures, while components of different types produce different DOM structures;

② For a group of child nodes at the same level, each can be distinguished by a unique key. These two assumptions are the basis of the algorithmic optimization behind React's entire interface render; practice has also proved them reasonable and accurate, which guarantees the performance of overall interface construction.


Q37. How do you avoid reflow and repaint?


1. Reduce reflow by avoiding frequent style changes; 2. Avoid changing styles one by one; merge them under a class name instead. 3. Common optimizations are as follows:

Use translate instead of top

Use visibility instead of display: none, since the former only triggers a repaint while the latter triggers a reflow (layout change)

Take the DOM offline before changing it: for example, give it display: none (one reflow), apply the hundred changes, then display it again

Do not read a DOM node's layout properties inside a loop, for example as loop variables

Do not use table layout; a small change may cause the entire table to be re-laid out

Choose the animation speed carefully: the faster the animation, the more reflows per second; requestAnimationFrame is also an option

CSS selectors match from right to left, so avoid overly deep DOM nesting

Promote a frequently animated element onto its own layer so that its reflow does not affect other elements; for video tags, for example, the browser does this automatically

Avoid reading properties that trigger reflow/repaint too often; if you really need a value several times, cache it in a variable

Use absolute positioning for elements with complex animations to take them out of the document flow; otherwise they cause frequent reflow of the parent and subsequent elements
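The "cache the layout read" advice in practice: reading offsetHeight inside a write loop forces a reflow per iteration, while reading once up front does not. A DOM-free sketch (in a real page, `box` and `items` would be DOM elements):

```javascript
// Read the layout property once, then do only style writes in the loop.
function resizeAll(items, box) {
  var height = box.offsetHeight;              // one layout read, done up front
  items.forEach(function (item) {
    item.style.height = (height + 10) + 'px'; // style writes only, no reads
  });
}

// stand-ins for DOM nodes, so the sketch runs anywhere
var box = { offsetHeight: 90 };
var items = [{ style: {} }, { style: {} }];
resizeAll(items, box);
// every item ends up with height '100px'
```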


Q38. You like building wheels so much; how would you extend and design a jQuery-like library?


First, jQuery is selector-oriented.

class jQuery {
	constructor(selector) {
		const result = document.querySelectorAll(selector)
		const length = result.length
		for (let i = 0; i < length; i++) {
			this[i] = result[i]
		}
		this.length = length
		this.selector = selector
	}
	get(index) {
		return this[index]
	}
	each(fn) {
		for (let i = 0; i < this.length; i++) {
			const elem = this[i]
			fn(elem)
		}
	}
	on(type, fn) {
		return this.each(elem => {
			elem.addEventListener(type, fn, false)
		})
	}
	style(data) {
		// apply a map of CSS properties to every matched element
		return this.each(elem => Object.assign(elem.style, data))
	}
}
const $p = new jQuery('p')
$p.get(1)
$p.each((elem) => console.log(elem.nodeName))
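On the "how to extend it" half of the question, jQuery exposes its prototype as `jQuery.fn` so that plugins can hang new methods on it. A DOM-free sketch of the same mechanism (hypothetical `MiniQuery`, so it runs anywhere):

```javascript
// A minimal stand-in for the wrapper class above, without any DOM dependency
class MiniQuery {
  constructor(items) {
    items.forEach((item, i) => { this[i] = item })
    this.length = items.length
  }
  each(fn) {
    for (let i = 0; i < this.length; i++) fn(this[i])
    return this
  }
}
// "jQuery.fn"-style extension point: the prototype itself
MiniQuery.fn = MiniQuery.prototype
// a user-defined plugin, added without touching the class definition
MiniQuery.fn.pluck = function (key) {
  const out = []
  this.each(item => out.push(item[key]))
  return out
}
const $items = new MiniQuery([{ id: 'a' }, { id: 'b' }])
// $items.pluck('id') → ['a', 'b']
```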


Q39: React Context: why was the old childContextTypes API deprecated in favor of createContext?


One reason for the deprecation is that components that do not need the data would still receive it and be re-rendered.


Q40. In a WebApp or hybrid development, what can you do to optimize performance and experience?


① WebView initialization is slow: request data at the same time as initializing, so the back end and the network do not sit idle.

② Back-end processing is slow: have the server output in chunks, so static resources load on the front end while the back end is still computing.

③ Script execution is slow: run scripts last so they do not block page parsing.

④ Meanwhile, sensible preloading and pre-caching shrink the loading-speed bottleneck further.

⑤ WebView initialization is slow: keep a pre-initialized WebView ready for use at all times.

⑥ DNS lookups and connections are slow: try to reuse the domain names and connections the client already uses.

⑦ Script execution is slow: split the framework code out and execute it before the page is requested.


Q41. What are the controlled and uncontrolled components in React?


One of the core ideas of React is the autonomous component that maintains its own internal state. But when we introduce native HTML form elements (input, select, textarea, etc.), should we host all the form data in the React component, or leave it in the DOM element? The answer to that question is what divides controlled components from uncontrolled ones.

Controlled components are those controlled by React, with all form data stored uniformly. In such a component, for example, the value of a username field is not stored in the DOM element but in the component state; any time we need to change it, we call setState.
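A framework-free sketch of the controlled pattern (hypothetical helper, not React's API): the value lives only in state, and every change goes through `setState`, which re-renders from state:

```javascript
// The input's value exists only in `state`; the markup is always derived from it.
function createControlledInput() {
  var state = { username: '' };
  var rendered = '';
  function render() {
    // in React this would be <input value={state.username} onChange={...} />
    rendered = '<input value="' + state.username + '">';
  }
  function setState(patch) {
    Object.assign(state, patch);
    render(); // every state change re-derives the view
  }
  render();
  return {
    type: function (text) { setState({ username: text }); }, // the only way the value changes
    html: function () { return rendered; },
    value: function () { return state.username; }
  };
}
var input = createControlledInput();
input.type('ada');
// input.value() === 'ada'; the rendered string is re-derived from state
```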

An uncontrolled component stores form data in the DOM itself, not in the React component; we can use refs to manipulate the DOM elements.

In actual development, though, we tend to advise against uncontrolled components: in real situations we often need to toggle form validation, selectively enable buttons on click, enforce input formats, and so on, and there the React way of managing data declaratively serves us better. The original reason for introducing React, or any MV* framework, was to free us from heavy direct manipulation of the DOM.


Q42, What is the role of key in React?


Keys are auxiliary markers that React uses to keep track of which list elements have been modified, added, or removed.

During development, we need to ensure that an element's key is unique among its siblings. In the diff algorithm, React uses the key to judge whether an element was newly created or merely moved, which reduces unnecessary re-rendering. React also relies on keys to associate elements with their local state, so we cannot ignore the importance of keys when rendering lists.
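A simplified sketch of what a key gives the same-level comparison (not React's actual implementation): a key-indexed map answers "which old child matches this new child?" in O(1), so moved items are reused rather than recreated:

```javascript
// Compare one level of keyed children and decide, per new child,
// whether an existing node can be reused or a new one must be created.
function reconcileChildren(oldChildren, newChildren) {
  var oldByKey = {};
  oldChildren.forEach(function (child) { oldByKey[child.key] = child; });
  return newChildren.map(function (child) {
    var prev = oldByKey[child.key];
    return prev
      ? { key: child.key, op: 'MOVE_OR_UPDATE' } // reuse the existing node
      : { key: child.key, op: 'CREATE' };        // genuinely new node
  });
}
var ops = reconcileChildren(
  [{ key: 'a' }, { key: 'b' }],
  [{ key: 'b' }, { key: 'c' }]
);
// → b is reused, c is created
```

Without keys (or with array indices as keys), a shifted list looks like every position changed, and nodes plus their local state get needlessly destroyed and rebuilt.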


Q43. I see you have a Flutter hybrid application. How does communication between Flutter and native code work?


Taking a network request as an example: define a MethodChannel object on the Dart side and implement a MethodChannel with the same name on the Java side. After registering it, the Flutter page can call the corresponding Java implementation through the channel's invokeMethod call.


Q44. In mini programs and apps, how do you deal with WebViews being hijacked or injected into by carriers?


Use CSP to block non-whitelisted resources in the page; use HTTPS; have the app convert the request into a socket request and proxy the WebView's traffic; restrict the domains an embedded WebView is allowed to open and only permit whitelisted ones; or give the user a strong, obvious prompt before opening an external link.


Q45. Here is a business scenario from mobile advertising: make a little figure walk from top to bottom along an S-curve, or along the route the user's finger traced. How would you do it, and how do you ensure it does not wander sideways?


I could not answer this one.


Q46. Another scenario: on a Taobao app advertising page there is an artistic rendering of the character "fu", and the user has to trace it by hand; only when the similarity is high enough (80%) does the red envelope open. How would you implement this on the front end? Why compute a coverage rate, why compute designated path points, what is the difference, and how is it usually done?


The expected answer is about the coverage rate, not the implementation.


Q47. How would you optimize an ECharts chart with a lot of data, or a large number of other charts?


Tile the map (load it as tiles).


Q48. If you were to write a hybrid app, how would you make the technical selection? Why Flutter or uni-app, what were the reasons, and what was the basis for your choice?


I answered Flutter, for the reasons given below.


Q49. Is the Virtual DOM really faster than the native DOM?


Not necessarily; it is a trade-off between performance and maintainability. The point of a framework is to hide the underlying DOM operations and let you describe your intent in a more declarative way, making your code easier to maintain. No framework can make DOM operations faster than carefully hand-optimized ones, because the framework's DOM layer has to handle any operation the upper-layer API might generate, so its implementation must be general. What a framework guarantees is decent performance without manual optimization.


Q50. Have you ever used SameSite cookies?


Beyond the usual Path, Domain, Expires, HttpOnly and Secure attributes, the SameSite attribute protects the cookie against CSRF. It offers three levels of control: we can leave the attribute unspecified, or use Strict or Lax to restrict the cookie to same-site requests. 1. SameSite=Strict means the cookie is only sent and used within the first-party site; 2. SameSite=Lax also sends the cookie on top-level GET navigations; 3. SameSite=None means the cookie can be sent in third-party contexts (modern browsers then also require Secure).
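For example, a hypothetical session cookie restricted to same-site requests could be issued like this:

```http
Set-Cookie: session=abc123; Path=/; Secure; HttpOnly; SameSite=Lax
```

Here Lax lets the cookie ride along on top-level GET navigations while withholding it from cross-site POSTs and subresource requests.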

About 80% of the interview questions above came from Ant Financial.

The front-end direction

This chapter mainly expresses my own shallow views on where the front end is heading. Last year I watched the D2 forum videos and very much agreed with Uncle Wolf: at this stage the framework wars are long over, and what matters now is improving efficiency. I have worked only a short time and have not touched many frameworks and libraries; below is a simple summary of my view of cross-end development. It is also my personal answer to Q48 above.

Looking at the existing cross-end solutions, they can be divided into three categories: Web containers, pan-Web containers, and self-drawing engine frameworks.

Web containers, that is, cross-platform solutions based on the browser, keep getting better, the pipeline keeps getting shorter, and the gap with native performance keeps narrowing. Game engines like Egret, Cocos and Laya, written in TypeScript and cross-platform by design, easily run HTML5 games in browsers on iOS and Android with a nearly consistent experience across them. Egret, for example, also provides an efficient JS-C binding compilation mechanism for packaging games into native form, but most HTML5 game engines (such as Egret and Laya) still fall under the Web-container umbrella. Web-container frameworks have one obvious, fatal (where experience and performance matter) weakness: WebView rendering efficiency and JavaScript execution performance are poor. Add the customizations of various Android versions and device vendors, and a consistent experience across all devices is hard to guarantee.

Pan-Web-container frameworks such as React Native and Weex adopt a front-end-friendly UI layer on top with native rendering underneath. Although the UI is built with the same HTML+JS-like logic, it ultimately generates the corresponding custom native controls, taking full advantage of native controls' higher rendering efficiency relative to WebView; H5 and native complement each other for a better user experience, which makes this a good solution. But as system versions and APIs change, developers may still have to handle platform differences, and some features are only available on some platforms, which compromises the framework's cross-platform nature.

The self-drawing engine framework here refers specifically to Flutter, which takes over cross-end work and rendering from the bottom up with its own engine. In terms of usage, writing styles is a bit taxing, and the deep widget nesting takes getting used to (the root cause being my own inexperience).

The winter of 2020

On this topic I have some feelings of my own. A well-known figure online said that 2019 was the worst year of the past decade and the best year of the decade to come. Indeed, talk of an Internet winter has circulated since 2019, with layoffs beginning and news of big-company employees' interests being hurt. Judging from my interviews across 2019 and 2020, the market in 2020 is not as good as before, but it is nowhere near as bad as the winter rumored online.

In my view, the current job market is not so much cold as more rational: investors no longer hand out money for a PPT, and interviewers no longer hire people who have merely memorized answer banks. Against a tough job market, this reminds us to focus on cultivating real ability rather than rote tricks, and on how our knowledge can create value for the company and the product. One interviewer asked me, "Why do you think a company hires you? It hires you to solve problems." That demands not only enough hard skills but soft skills as well: not just getting things done, but managing up and down too.

In most interviews, the interviewer or the company leads and you follow their pace. But I think an interview is really a mutual conversation: the company is choosing you, and you are also choosing the company, colleagues and leaders you will work with. Although I failed the Ant Financial interviews, that is simply my interviewing style: I say what I know, I admit what I do not, and I never try to bluff my way through (which may look a little silly to some people). I am who I am; at most I will discuss with the interviewer and try to explain what I do know. So in these interviews I treated the process as exploration and asked the interviewers questions too, not only in the "do you have any questions for me" at the end but during the interview itself. A good interviewer will discuss things with you and work out a reasonable plan or the correct answer together, and the whole process is pleasant. The Alibaba interviewer, for example, opened my GitHub, asked me about my projects one by one, and encouraged me to keep working on them.

In this winter of interviews, my education and my ability were both limiting factors, which is the direct reason I did not get into BAT or TMD. But the biggest gain was talking with more than ten interviewers, which helped me see myself afresh: my strengths, my weaknesses, my future plans, and how others approach people and problems. My sincere thanks to those ten-plus interviewers.

In the end, whether I went to a big company, a small company, or an outsourcing company, I will not say; I was just an ordinary novice front end. As for some students' question of whether to apply to a large company or a small one, my attitude is: when your strength allows it, weigh everything against the purpose that "work is for a better life", because life is a gamble that cannot be restarted.

Education

Beggars may not envy millionaires, but they do envy beggars with higher incomes. Without a higher vision, you struggle inside your current circle; with a higher vision, you care less about the people and things immediately around you.
Is the degree important? My answer is: important. I have suffered for my academic background more than once or twice, rejected by the HR of more than one company. One HR even said bluntly that I would not be able to work alongside those 985 colleagues, and grilled me on how I would handle urgent projects. All I can say is: what you owed the gaokao, you slowly repay yourself. In Internet circles the core is our own technology, and a few people do fight their way to the top against the odds regardless of education; if you are such a person, nothing more needs to be said. But I think the vast majority of programmers are ordinary people (double-non undergraduates like me even more so). Do not tell yourself a degree only matters for the first three years; a degree follows you for life. Those who could beat you in the gaokao can, as long as they are willing, still beat you after entering society. You work hard, but others are not sleeping either. So upgrade your degree, or at least try to.

Finally

I am just an ordinary, average junior front end, and I hope my interview questions and experience are of some help to everyone looking for a job. It takes a lot of patience and effort to live a beautiful life without complaining or making excuses, and that goes double for programmers. Keep coding.