I’ve been preparing for interviews for a while now, and I’ve noticed that I keep getting asked about how browsers work: fundamentals, caching, rendering. In order to win the battle with the interviewer 🤨, I put together this summary.


What is cross-domain? Why do browsers enforce the same-origin policy? How many ways are there to solve cross-domain problems? Do you understand preflight requests?

Cross-domain restrictions come from the browser’s same-origin policy, which is a browser security mechanism: if the protocol, domain name, or port differs between two URLs, the request is cross-origin and the browser restricts it. The same-origin policy helps defend against CSRF attacks. Put simply, a CSRF attack exploits a user’s login state to initiate a malicious request. Cross-domain solutions: JSONP / CORS / proxy server.
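To make the three conditions concrete, here is a minimal sketch using the standard URL API (`isSameOrigin` is a hypothetical helper, not a built-in):

```js
function isSameOrigin(a, b) {
  const ua = new URL(a);
  const ub = new URL(b);
  // Same-origin only if protocol, domain name, and port all match.
  return ua.protocol === ub.protocol && ua.hostname === ub.hostname && ua.port === ub.port;
}

console.log(isSameOrigin('http://example.com/a', 'http://example.com/b'));  // true
console.log(isSameOrigin('http://example.com', 'https://example.com'));     // false (protocol)
console.log(isSameOrigin('http://example.com', 'http://api.example.com'));  // false (domain)
console.log(isSameOrigin('http://example.com', 'http://example.com:8080')); // false (port)
```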

1. JSONP

The principle behind JSONP is simple: it exploits the fact that `<script>` tags are not subject to the same-origin policy. The page loads a cross-domain script whose response calls a global callback with the data:

```html
<script src="http://domain/api?param1=a&param2=b&callback=jsonp"></script>
<script>
  function jsonp(data) {
    console.log(data)
  }
</script>
```
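In practice the script tag is usually created dynamically. A sketch of wrapping that in a Promise (`jsonpRequest` is a hypothetical helper, not a library API):

```js
function jsonpRequest(url, params = {}) {
  return new Promise((resolve, reject) => {
    // Use a unique callback name so parallel requests don't clash.
    const cbName = 'jsonp_' + Date.now() + '_' + Math.floor(Math.random() * 1e6);
    const script = document.createElement('script');
    window[cbName] = (data) => {
      resolve(data);
      delete window[cbName];
      script.remove();
    };
    script.onerror = () => {
      reject(new Error('JSONP request failed'));
      delete window[cbName];
      script.remove();
    };
    const query = new URLSearchParams({ ...params, callback: cbName }).toString();
    script.src = url + '?' + query;
    document.body.appendChild(script);
  });
}

// jsonpRequest('http://domain/api', { param1: 'a', param2: 'b' }).then(console.log);
```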

2. CORS

CORS requires support from both the browser and the backend. The browser carries out the CORS communication automatically, so the key to CORS lies in the backend: as long as the backend implements it, cross-domain requests work. To enable CORS, the server sets the `Access-Control-Allow-Origin` header, which declares which origins may access the resource; if it is set to the wildcard `*`, any website can access it. Although CORS is mostly backend work, the front end should know that when solving cross-domain problems this way, requests are divided into simple and complex requests. For a complex request, the browser first sends a preflight request using the OPTIONS method to learn whether the server allows the cross-domain request.

3. Proxy

A proxy forwards requests from the front-end domain to the backend domain, hiding the real server. This can be done with webpack’s dev server or Nginx.
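A minimal sketch of the backend side, using Node’s built-in http module; the allowed origin, methods, and port are example values:

```js
const http = require('http');

http.createServer((req, res) => {
  // Echo a specific origin rather than '*' if credentials are involved.
  res.setHeader('Access-Control-Allow-Origin', 'http://example.com');

  if (req.method === 'OPTIONS') {
    // Preflight request: tell the browser which methods/headers are allowed.
    res.setHeader('Access-Control-Allow-Methods', 'GET, POST, PUT, DELETE');
    res.setHeader('Access-Control-Allow-Headers', 'Content-Type');
    res.setHeader('Access-Control-Max-Age', '86400'); // cache the preflight for a day
    res.writeHead(204);
    res.end();
    return;
  }

  res.setHeader('Content-Type', 'application/json');
  res.end(JSON.stringify({ ok: true }));
}).listen(8080);
```

And a sketch of the proxy approach with webpack-dev-server (the target URL is a placeholder, and the exact config shape varies across webpack-dev-server versions):

```js
// webpack.config.js — requests to /api on the dev origin are forwarded
// to the backend, so the browser never makes a cross-origin request.
module.exports = {
  devServer: {
    proxy: {
      '/api': {
        target: 'http://backend.example.com',
        changeOrigin: true, // rewrite the Host header to match the target
      },
    },
  },
};
```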


How many ways are there to implement storage? What are their advantages and disadvantages? What is a Service Worker?

Cookies, localStorage, sessionStorage, and IndexedDB can all be used for storage.

| Feature | cookie | localStorage | sessionStorage | IndexedDB |
| --- | --- | --- | --- | --- |
| Data life cycle | Generally generated by the server; an expiration time can be set | Stays unless explicitly cleared | Cleared when the page is closed | Stays unless explicitly cleared |
| Data storage size | 4K | 5M | 5M | Unlimited |
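A quick sketch of how the first three are used; note that all of them store strings only, so objects must be serialized:

```js
document.cookie = 'token=abc123; max-age=3600'; // sent to the server on every request
localStorage.setItem('theme', 'dark'); // persists until explicitly cleared
sessionStorage.setItem('draft', JSON.stringify({ a: 1 })); // gone when the tab closes

console.log(localStorage.getItem('theme')); // 'dark'
console.log(JSON.parse(sessionStorage.getItem('draft'))); // { a: 1 }
```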

 

You need to pay attention to security when using cookies:

| Attribute | Role |
| --- | --- |
| value | If the value holds the user’s login state, it should be encrypted; the user id must not appear in plain text |
| http-only | The cookie cannot be accessed through JS, which mitigates XSS attacks |
| secure | The cookie is only sent with HTTPS requests |
| same-site | The browser will not send the cookie in cross-site requests, which mitigates CSRF attacks |
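A minimal sketch of setting a cookie with those attributes from a Node http server; the cookie name and value are placeholders:

```js
const http = require('http');

http.createServer((req, res) => {
  // HttpOnly: not readable from JS; Secure: HTTPS only;
  // SameSite=Strict: not sent on cross-site requests.
  res.setHeader('Set-Cookie',
    'session=opaque-encrypted-value; HttpOnly; Secure; SameSite=Strict; Path=/; Max-Age=3600');
  res.end('ok');
}).listen(8080);
```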
 
 
A Service Worker is a separate thread that runs in the background of the browser and is generally used to implement caching. To use a Service Worker, the transport protocol must be HTTPS: since the Service Worker intercepts requests, HTTPS is required to ensure security. Implementing the caching function takes three steps (sketched in code after the list):
  • You need to register the Service Worker first.
  • After listening for the install event, you can cache the required files.
  • The next time the user visits, the worker intercepts the request and checks whether a cached copy exists; if so, it reads the cached file directly, otherwise it requests the data from the network.
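A minimal sketch of the three steps, assuming the worker script lives at `/sw.js`; the cache name and file list are placeholder values. First, registration in the page:

```js
// Step 1: register the Service Worker (feature-detect first).
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js');
}
```

Then the worker itself:

```js
// sw.js — the worker. CACHE and the file list are placeholders.
const CACHE = 'v1';

self.addEventListener('install', (event) => {
  // Step 2: pre-cache the required files during install.
  event.waitUntil(
    caches.open(CACHE).then((cache) => cache.addAll(['/', '/index.html', '/main.js']))
  );
});

self.addEventListener('fetch', (event) => {
  // Step 3: intercept requests; serve from cache when possible, else go to the network.
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  );
});
```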

How are events triggered? Do you know what event delegation is?

 

Event triggering has three phases:

  1. Capture phase: the event propagates from window down to the event target, firing listeners registered for the capture phase along the way
  2. Target phase: listeners registered on the event target itself fire
  3. Bubbling phase: the event propagates from the target back up to window, firing listeners registered for the bubbling phase

In general, if we only want the event to fire on the target, we can use stopPropagation to prevent further propagation of the event.

Event delegation: if the child nodes of a node are generated dynamically, the events should be registered on the parent node instead. Compared with registering events directly on each target node, delegation saves memory and removes the need to unregister events for the child nodes.
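A minimal sketch of event delegation; the `#list` id is a placeholder:

```js
// One listener on the parent handles clicks on any <li>,
// including children added later.
const list = document.querySelector('#list');

list.addEventListener('click', (event) => {
  // event.target is the element that was actually clicked.
  if (event.target.tagName === 'LI') {
    console.log('clicked item:', event.target.textContent);
  }
}); // a third argument `true` here would register for the capture phase instead

// Dynamically created children need no listeners of their own:
const li = document.createElement('li');
li.textContent = 'new item';
list.appendChild(li);
```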


Browser caching mechanism

A data request can be divided into three steps: initiating the network request, backend processing, and the browser receiving the response. Browser caching helps us optimize performance in the first and third steps: for example, using the cache directly without making a request at all, or making the request but, when the backend’s copy is identical to the front end’s, skipping the response body and thereby reducing the response data. In order of priority: Service Worker > Memory Cache > Disk Cache > Push Cache > network request.

Cache policies: strong caching and negotiated caching, both implemented by setting HTTP headers. Anyway, no one’s asked yet.

Applying cache policies in actual scenarios: for resources that change frequently, use `Cache-Control: no-cache` to make the browser ask the server each time, with `ETag` or `Last-Modified` to verify whether the resource is still valid. This does not save requests, but it can significantly reduce the size of the response data. For code files: builds are now done with bundlers, so we can hash the file name and generate a new name only when the code changes. Based on this, we can set the cache validity period of code files to one year, `Cache-Control: max-age=31536000`, so the latest code file is downloaded only when the filename referenced in the HTML changes; otherwise the cache is always used.
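A sketch of both policies with Node’s built-in http module; the ETag value and the hashed-filename check are simplified placeholders:

```js
const http = require('http');

http.createServer((req, res) => {
  if (req.url === '/index.html') {
    // Frequently changing resource: force revalidation on every request.
    res.setHeader('Cache-Control', 'no-cache');
    res.setHeader('ETag', '"abc123"'); // in reality recomputed per version
    if (req.headers['if-none-match'] === '"abc123"') {
      res.writeHead(304); // unchanged: empty body, the browser reuses its copy
      res.end();
      return;
    }
  } else if (/\.[0-9a-f]{8}\.js$/.test(req.url)) {
    // Hashed filename: contents never change under this name, cache for a year.
    res.setHeader('Cache-Control', 'max-age=31536000');
  }
  res.end('...file contents...');
}).listen(8080);
```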

So far I have not met an interviewer who asked about this part; performance-optimization questions still revolve around webpack.


How browsers render

Different browsers use different rendering engines; the mainstream one is WebKit.

When we open a web page, the browser requests the corresponding HTML file. Although we usually write our code as JS, CSS, and HTML files, i.e. as strings, computer hardware does not understand strings, so what is actually transmitted over the network is byte data of 0s and 1s. When the browser receives these bytes, it converts them back into the string, which is the code we wrote.

After the data is converted to a string, the browser first turns the string into tokens through lexical analysis; this process is called tokenization.

So what is a token? Simply put, a token is still a string: the smallest unit of code. Tokenization breaks the code into these pieces and labels them, so that the meaning of each smallest unit can be understood.

When tokenization is complete, the tokens are converted into nodes, and the nodes are then built into a DOM tree based on the relationships between them.

That is the whole process by which the browser receives the HTML file from the network and converts it: byte data => string => token => node => DOM.

While parsing the HTML file, the browser also downloads and parses CSS and JS files. Parsing a CSS file is very similar to parsing an HTML file and finally produces a CSSOM tree. Once the DOM tree and CSSOM tree are generated, the browser combines the two into a render tree, performs layout (also called reflow) according to the render tree, and then calls the GPU to paint, composite the layers, and display the page on screen.


In addition, there are some scattered knowledge points that I have already memorized and will not expand on here, including:

  • Repaint and Reflow
  • Drawing animations with the requestAnimationFrame API (sketched below)
  • The reason for poor DOM performance
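As a quick illustration of the requestAnimationFrame point, a minimal sketch; `#box` is a placeholder element, and using `transform` keeps each update to a repaint rather than a reflow:

```js
const box = document.querySelector('#box');
let start;

function step(timestamp) {
  // The browser invokes this once per frame with a high-resolution timestamp.
  if (start === undefined) start = timestamp;
  const elapsed = timestamp - start;
  // Move 0.1px per millisecond, capped at 200px.
  box.style.transform = `translateX(${Math.min(elapsed * 0.1, 200)}px)`;
  if (elapsed < 2000) requestAnimationFrame(step);
}

requestAnimationFrame(step);
```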