For half a year I haven't published anything on my personal blog. Maybe it's because I was criticized a lot, or maybe I was just tired; I used to write articles as a way of learning by sharing. But without further ado, let me share what I've learned about the HTTP knowledge a front-end developer needs.

I won't go into much detail on the HTTP message format. As front-end developers, what we really need to understand is how the front end and back end agree on requests and responses, how request and response headers relate to each other and what each field means, what to look at when static resources load and where the performance optimization points are, how to troubleshoot the request errors we run into every day, and, more importantly, how to answer the interviewer calmly in an interview.

Everything below is purely my personal understanding; there are bound to be mistakes and misconceptions, so please point them out kindly in the comments rather than flaming me.

Simple cross-domain solution

Cross-origin requests are a cliché topic. When an interviewer used to ask me how to solve cross-domain problems, I'd say I set up a proxy with webpack, or asked the back-end guy to configure an nginx reverse proxy for my local environment, and that was that. But in some special situations the back-end guy is away, or you join a new company and have to maintain a very old project with no build tooling, and the new back-end engineer is only so-so. At that point, understanding the fundamentals of cross-domain requests is what lets you solve the fundamental problems.

Front-end coder vs. Java back-end coder

The back end says: front-end comrade, let's start with a GET request to an interface. Here's the address: http://www.pilishou.com/getname/list

The front end gets to work...

fetch('http://www.pilishou.com/getname/list', {
    method: 'GET'
})

Having written a request like this and sent it to the back-end guy's server, the browser throws an error: Failed to load http://www.pilishou.com: No ‘Access-Control-Allow-Origin’ header is present on the requested resource. Origin ‘null’ is therefore not allowed access. If an opaque response serves your needs, set the request’s mode to ‘no-cors’ to fetch the resource with CORS disabled.

The junior front end says: big brother, what kind of interface is this? The request just throws an error. What on earth is going on over there?!

The senior front end says: big brother, do me a favor, you forgot to set the cross-origin headers over there.

The junior back end says: big brother, come on, the interface clearly works. You hit an error and call me? It runs perfectly fine in Postman on my side.

The senior back end says: hold on, brother, I forgot to set the cross-origin headers. Give me a minute.

Principle explanation:

When the page makes a request to a different origin, the browser attaches an Origin header to the request. Even without any special setup the browser still sends the request and the server still returns a response, but because of its security policy the browser blocks the response and reports the header error above. At this point the back end needs to add ‘Access-Control-Allow-Origin’: ‘*’ to the response headers, which tells the browser: I allow this cross-origin request, don't raise an error, and hand the data back to the caller. If only a specific site should be allowed, set ‘Access-Control-Allow-Origin’ to that specific domain instead of ‘*’.
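
As a minimal sketch of the fix, assuming the back end is a plain Node.js http server (the port and response body are just placeholders):

const http = require('http')

http.createServer((req, res) => {
  // Allow any origin to read this response; in real projects you would
  // usually echo back a whitelisted origin instead of '*'.
  res.setHeader('Access-Control-Allow-Origin', '*')
  res.end(JSON.stringify({ name: 'pilishou' }))
}).listen(8887)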

Complex cross-domain solutions

At this point the front end hums a viral Douyin song to himself: I know I'm not the same as you!

The back end says: buddy, there's another interface. It follows RESTful conventions, so use the PUT method: http://www.pilishou.com/getname/update

The front end gets to work...

fetch('http://www.pilishou.com/getname/list', {
    method: 'PUT'
})

After writing this request, the browser reports another error: Failed to load http://www.pilishou.com: Method PUT is not allowed by Access-Control-Allow-Methods in preflight response

The junior front end says: big brother, what's wrong with your interface this time? GET and POST work, why doesn't PUT? It must be on your end, I didn't touch anything else.

The senior front end says: do me a favor and add the allowed methods to the cross-origin response headers.

The junior back end says: big brother, come on, you don't even know how to call the interface and you blame the error on me. It still works fine in Postman on my side.

The senior back end says: hold on, brother, I'll add the allowed cross-origin methods. Give me a minute.

Principle explanation:

A request only counts as a simple cross-origin request when:

  1. The request method is one of HEAD, GET, POST.
  2. The HTTP headers do not go beyond the following fields: Accept, Accept-Language, Content-Language, Last-Event-ID, Content-Type, and Content-Type is limited to three values: application/x-www-form-urlencoded, multipart/form-data, text/plain.

If a request stays within these limits, the back end only needs to allow the Origin. If the request method is outside those three, the back end also needs to add ‘Access-Control-Allow-Methods’: ‘PUT’. Again for security reasons, the browser will not let a cross-origin request go through with a method that the server has not explicitly allowed.

In the same way, other parts of a complex request also need the back end to explicitly allow them cross-origin. The cases that usually come up:

  • Add custom headers
fetch('http://127.0.0.1:8887', {
    method: 'PUT',
    headers: {
      'x-header-f': '1234'
    }
})

Error message: Failed to load http://www.pilishou.com: Request header field x-header-f is not allowed by Access-Control-Allow-Headers in preflight response.

The fix is for the server to add a response header that allows those custom headers across origins: ‘Access-Control-Allow-Headers’: ‘x-header-f’.

  • Using a Content-Type outside the three allowed values
fetch('http://127.0.0.1:8887', {
    method: 'PUT',
    headers: {
      'x-header-f': '1234',
      'content-type': 'json'
    }
})

Error message: Failed to load http://www.pilishou.com: Request header field content-type is not allowed by Access-Control-Allow-Headers in preflight response.

The fix is again for the server to list it in the allowed headers: ‘Access-Control-Allow-Headers’: ‘Content-Type’.

Complex cross-domain requests and the preflight request

For a non-same-origin, non-simple request, the browser first sends an OPTIONS request, the so-called preflight. It is a probing request sent to the server: only if the server turns out to have the corresponding methods and headers allowed does the browser send the real request, so two requests in total go to the back end before you get the data you want. The server also returns data for the OPTIONS request, but the browser keeps it away from the page. If the corresponding cross-origin settings are not detected, the errors above are reported.

Reducing the number of preflight checks

When debugging locally you can see that every non-simple request is preceded by a preflight, and each preflight costs time and resources. It works like real-name verification: once you have passed it, you don't need to be verified again for a while. The same principle applies here: if the current origin passed the check the first time, it doesn't need to be checked again within a certain window. That window is controlled by returning ‘Access-Control-Max-Age’: ‘860000’; within that time, sending the request again goes straight to the real request instead of probing first with an OPTIONS preflight.
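
Putting the pieces together, here is a rough sketch of how the back end might answer the preflight, again assuming a plain Node.js http server and using the header values from the examples above:

const http = require('http')

http.createServer((req, res) => {
  res.setHeader('Access-Control-Allow-Origin', '*')

  if (req.method === 'OPTIONS') {
    // Answer the preflight: which methods and headers the real request may use,
    // and for how many seconds the browser may reuse this answer.
    res.setHeader('Access-Control-Allow-Methods', 'GET, POST, PUT')
    res.setHeader('Access-Control-Allow-Headers', 'X-Header-F, Content-Type')
    res.setHeader('Access-Control-Max-Age', '860000')
    res.end()
    return
  }

  // The real PUT request lands here once the preflight has passed.
  res.end('update ok')
}).listen(8887)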

Usage scenarios and performance optimization of Cache-Control

Cache-Control marks the static resources returned by the server with caching instructions.

Cache-Control has several modes; these are the ones a front-end engineer typically needs to know:

  1. max-age=10000 (the unit is seconds; set it according to your needs)
  2. no-cache (every request is validated against the server using ETag / Last-Modified)
  3. no-store (every request pulls a fresh resource from the server)
  4. private (private; not cached by proxies)
  5. public (public; proxy servers may cache it too, and if the local copy has expired a still-valid proxy cache can be used instead)

max-age

When a resource is loaded, the browser automatically stores it in memory, but that built-in expiration is controlled by the browser's own mechanisms. With nginx serving static resources, every refresh makes the browser ask the server whether the resource has expired. When the cache lifetime of a resource is known in advance, setting max-age lets the browser load the same resource file again simply by reusing it from memory or disk, without asking the server.

To get this behavior, the server sets ‘Cache-Control’: ‘max-age=<time in seconds>’ on the returned resource. When the page is refreshed again within the set time (without clearing the cache), the resource is taken straight from the cache instead of being downloaded again.
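
A minimal Node.js sketch, assuming a static file called app.js and a one-year lifetime (both are arbitrary choices):

const http = require('http')
const fs = require('fs')

http.createServer((req, res) => {
  if (req.url === '/app.js') {
    // Within this window the browser reuses its memory/disk copy of app.js
    // without asking the server again.
    res.setHeader('Cache-Control', 'max-age=31536000')
    res.end(fs.readFileSync('./app.js'))
  }
}).listen(8888)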

no-cache

no-cache literally reads as "do not cache", which is confusing. What it really means is that every request for the static resource must first send a freshness check to the server. That check usually compares ETag and Last-Modified (more on this later). If the validation shows the resource has not expired, the server returns a 304 status code, telling the browser its cached copy can be reused.

no-store

no-store: every resource request pulls the latest resource from the server. Even if max-age and no-store are set at the same time, no-store has the highest priority; max-age does not take effect and the latest resource is fetched from the server.

private vs public

In some cases a request does not go straight to the origin server but passes through proxy servers such as a CDN or nginx. With public, those proxies are allowed to cache as well; for example s-maxage only takes effect on proxy caches. If the local max-age has expired but the proxy cache has not, the proxy can tell the browser that the locally expired copy is still fine to use. With private this does not apply to intermediate proxies: validation goes from the browser straight to the origin server.
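
For example, a response header along these lines (the numbers are arbitrary) lets the browser cache for 60 seconds while a CDN or nginx in front may keep serving its copy for 600:

const http = require('http')

http.createServer((req, res) => {
  // Browsers cache for 60 seconds; shared caches (CDN, nginx) for 600.
  res.setHeader('Cache-Control', 'public, max-age=60, s-maxage=600')
  res.end('hello')
}).listen(8888)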

Cache validation: Last-Modified and ETag

Last-Modified

When nginx serves a static resource it returns a Last-Modified time. When the browser requests the file again, it sends that time back to the server in the If-Modified-Since (or If-Unmodified-Since) request header, telling the server when its copy of the file was last modified. But Last-Modified is only accurate to the second, which in some cases is not precise enough.

ETag

Each piece of data has its own unique signature, and once the data changes a new signature is generated; the most typical approach is to hash the content. When the browser requests the resource again it carries If-Match or If-None-Match, and the server compares its current signature with the one the browser sent. This makes up for Last-Modified only being accurate to the second and not precise enough in some cases.

Last-Modified and ETag work together with no-cache

When Cache-Control is no-cache, the browser still caches the resource but asks the server to validate it on every request. If the server returns a 304 status code, the browser's cache is reused; otherwise the resource data is sent down again.
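
A rough sketch of that validation on a Node.js server, hashing the file content as the ETag (the file name is a placeholder):

const http = require('http')
const fs = require('fs')
const crypto = require('crypto')

http.createServer((req, res) => {
  const body = fs.readFileSync('./app.js')
  // The ETag is just a hash of the current content.
  const etag = crypto.createHash('md5').update(body).digest('hex')

  res.setHeader('Cache-Control', 'no-cache') // cache, but validate every time
  res.setHeader('ETag', etag)

  if (req.headers['if-none-match'] === etag) {
    // Signature unchanged: tell the browser to reuse its cached copy.
    res.statusCode = 304
    res.end()
  } else {
    res.end(body)
  }
}).listen(8888)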

Cookie policy mechanism

A cookie is something like an identity card between the server and the user. Once the back end sets a cookie in the response header, it shows up in the response and is also stored under Application → Cookies in the browser. Every subsequent request then carries the cookie information for the current domain in its request header.

Setting a key-value pair

‘Set-Cookie’: ‘id=1’,

Setting an expiration time

In general, if no expiration time is set, a cookie becomes invalid when the browser is closed. We can set a cookie's lifetime through Max-Age or Expires.
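
For example, a sketch on a Node.js server keeping cookies for one day (the names and values are arbitrary):

const http = require('http')

http.createServer((req, res) => {
  // The same one-day lifetime, expressed once as Max-Age (in seconds)
  // and once as an absolute Expires date.
  res.setHeader('Set-Cookie', [
    'id=1; Max-Age=86400',
    'name=pilishou; Expires=' + new Date(Date.now() + 86400 * 1000).toUTCString()
  ])
  res.end('cookie set')
}).listen(8888)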

Making a cookie unreadable from scripts

If HttpOnly is not set, the cookie can be read through document.cookie. In some situations, for security reasons, you can set HttpOnly so that the cookie can no longer be read via document.cookie.

Secure cookie under HTTPS

If Secure is set, the cookie is only written into Application → Cookies when the site is served over HTTPS. Even though Set-Cookie appears in the response, the browser refuses to store it once it notices the service is not HTTPS.
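
Both flags go on the same Set-Cookie header; a sketch (served over plain HTTP a browser would refuse to store it because of Secure, which is exactly the behavior described above):

const http = require('http')

http.createServer((req, res) => {
  // HttpOnly: not readable via document.cookie.
  // Secure: only stored and sent when the site runs over HTTPS.
  res.setHeader('Set-Cookie', 'token=abc123; Max-Age=86400; HttpOnly; Secure')
  res.end('cookie set')
}).listen(8888)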

Passing cookies between subdomains

Here’s an example:

All of the company's internal systems go through one login system, an SSO if you like. Say login lives under the subdomain sso.pilishou.com while your own development environment is localhost:9999. After a successful login the cookie is set under the sso.pilishou.com domain, so requests from your localhost will not carry that cookie in the request header. You can work around this through the hosts file by mapping 127.0.0.1 to web.pilishou.com.

The remaining problem is that sso and web are both subdomains, and a cookie scoped to sso cannot be read under web. The solution is to set the cookie's Domain attribute to the parent domain pilishou.com.

Under the web subdomain you can then get the cookie that was set after the successful SSO request; if HttpOnly is not set, you can try document.cookie to read the cookie information you want. In the fetch request we also need to set credentials: 'include', which means cookies are allowed on cross-origin requests; with that in place you will find the cookie carried in the request header.

After all these settings on the front end, the back end also needs to cooperate during joint debugging: the back-end engineer has to add ‘Access-Control-Allow-Credentials’: ‘true’ to the response header to allow cross-origin cookies.

When cookies are sent across origins, the response header Access-Control-Allow-Origin is not allowed to be ‘*’; only a specific origin may be set. So the back-end engineer also has to cooperate with the front end and change the ‘*’ to the exact origin you are using, in this case web.pilishou.com.
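
A minimal sketch of both sides, assuming the hostnames above and a Node.js back end (the /user/info endpoint and the token value are made up):

// Front end, opened as http://web.pilishou.com (mapped to 127.0.0.1 in hosts):
fetch('http://sso.pilishou.com/user/info', {
  credentials: 'include' // carry cookies on this cross-origin request
})

And on the sso.pilishou.com side:

const http = require('http')

http.createServer((req, res) => {
  // The origin must be explicit; '*' is rejected when credentials are involved.
  res.setHeader('Access-Control-Allow-Origin', 'http://web.pilishou.com')
  res.setHeader('Access-Control-Allow-Credentials', 'true')
  // Scope the cookie to the parent domain so web.pilishou.com can use it too.
  res.setHeader('Set-Cookie', 'token=abc123; Domain=pilishou.com')
  res.end(JSON.stringify({ name: 'pilishou' }))
}).listen(80)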

That's about all there is to say about cookies.

HTTP persistent connections and various architectural approaches to performance optimization

In the old days before bundling tools, a big project would have a pile of JS files and a pile of CSS files, which caused all sorts of problems: messy resource imports, slow loading, and sometimes a page that had already rendered but did not respond to clicks. And every time we request a resource over HTTP, it starts with a TCP connection created by a three-way handshake.

Since every browser has its own policy, I'll only talk about Chrome here. Open the developer tools, click Network, and right-click the column headers to show the Connection ID column. Chrome can open up to six concurrent connections at a time, and those six connections block subsequent resource requests. If the first six resource files are large, the requests behind them are blocked and queue up waiting. On an unstable network the HTML and CSS may already be loaded and rendered while the JS is still waiting in the queue; if the connection suddenly degrades, the user's clicks at that moment do nothing, because the JS simply hasn't loaded yet.

To verify this, open a site with lots of resources and throttle the network to 2G mode. You'll see that at first there are only six connections, and not all at once, because each TCP connection has to be created through a three-way handshake, which also takes time. Once the six connections exist, the rest proceeds serially: only when a connection finishes its current request is it handed to the next request in the queue for reuse, so no new TCP connection needs to be created. When the TCP connection is closed is negotiated between the browser and the server, and a close can also be configured to happen after the connection has been idle for a certain time. If you check the Connection ID column, only six IDs appear and the rest are reused; resources pulled from other sites get new Connection IDs of their own.

Solution:

So for today's SPA pages we merge resources: CSS and JS are bundled together. A Vue project usually ends up with four files: vendor.js, app.js, manifest.js and app.css.

This lets the browser make full use of the TCP connections and download a project's main files in parallel in one go, which helps both load speed and the "page renders but doesn't respond" problem. Let me briefly explain once more why it is split into exactly these four files.

  1. vendor.js is generally the node_modules dependencies, which rarely change, so the browser can cache it for a long time.
  2. app.js is our own business code, which changes with every iteration, so users only need to pull the new app.js; it can also be split further and updated per module.
  3. manifest.js is the runtime file; whether app.js or vendor.js changes, the manifest changes with it, so it gets its own separate update.
  4. app.css is a trade-off: if one small part changes the whole file is pulled again, but merging saves requests; whether it is worth it is something each project weighs for itself.

In fact, file updates don't rely on any cache setting at all: every JS or CSS file name carries a hash produced by the build tool. Once a file changes, a new hash is generated; when the browser loads the resource it can't find a cached file matching the new name, and so it requests it from the server again.
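
For example, a webpack output configuration along these lines (a sketch; the exact placeholders depend on your webpack version):

// webpack.config.js (fragment)
module.exports = {
  output: {
    // The hash changes only when a file's content changes,
    // so unchanged files keep their long-lived cache entries.
    filename: '[name].[contenthash].js',
    chunkFilename: '[name].[contenthash].js'
  }
}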

Choosing between reuse across multiple projects and reuse within a single one

The scheme above follows from the browser's connection limits and download speed, but every scheme fits a different scenario and architecture. For back-office admin projects, a company's tooling is usually unified: the projects are built from one scaffold or a handful of them, the base files are largely identical, and upgrades are rolled out project by project. For internal systems the better approach is often to give up some first-load performance and let multiple projects reuse each other's caches.

A typical Vue project integrates vue.js, vuex.js, the router and some common internal JS files into the project architecture.

For example

A company's internal projects usually have three environments, plus your local debugging makes four. If all of those files go into vendor.js, then every new release, or every switch of environment, downloads them again: the four environments can't share a single reusable copy because the domain names differ and browser caches are not shared across domains. Yet these are exactly the files that never change across projects and environments; loaded once, they could be reused from cache by any environment of any project.

  1. We can serve the files mentioned above from a CDN.
  2. Or place them in a public directory under one shared domain name.

From Memory cache and from Disk cache

  1. from memory cache: a cache read from memory
  2. from disk cache: a cache read from disk

After a resource is fetched, the browser caches it on disk and in memory; CSS files tend to be cached on disk, while HTML, JS, images and so on are cached in both memory and disk. When the page is refreshed, unless a particular resource's response header carries Cache-Control: no-cache or no-store, resources are pulled straight from the cache: the Size column shows "from memory cache" for some and "from disk cache" for the CSS files. With no-cache, if the validation says the resource has not expired, a 304 is returned and the cache is still read; the only difference is one extra validation round trip to the origin server.

The <meta http-equiv="Cache-Control" content="no-cache"> setting

By now the front-end and back-end comrades have finished joint debugging and deployed to the test environment, and they ask the test comrade to test.

The tester says: there's a typo on your page, fix it and redeploy and I'll test again. The tester closes the browser and scrolls Douyin for a while.

The front end gets to work... an operation fierce as a tiger.

The front end says: OK, go test, it's deployed. He closes his browser too, waits quietly for the tester, and scrolls Douyin for a bit himself.

The tester gets to work... opens the browser, types in the address, presses Enter...

The tester says: did you actually change anything? Nothing looks different.

The front end now opens the browser too, types in the address, takes one look and... WTF, what's going on? He starts doubting life... he did change it, yet it didn't take effect. So, fierce as a tiger again, he opens the file to check and deploys yet another package.

Problem summary:

The root cause, after some analysis, is the cache. The browser automatically caches the HTML page, but on a normal refresh, if nginx is serving the static resource, the browser goes back to the server to verify whether the resource has changed. If nothing changed, a 304 comes back and the cache is used.

When the browser process is closed, the resources cached in memory are cleared, and when the browser is opened again they come from the disk cache. If <meta http-equiv="Cache-Control" content="no-cache"> is not set, the first load of the HTML page after reopening the browser is read from the browser's disk cache, which means the old resource is used. That is the root of the problem. Adding the tag makes the browser validate the resource against the origin server first every time, so reopening the browser no longer shows resources that haven't been updated.

Redirect pitfalls

A redirect response carries a Location field with the new address. For example, if it returns /list, we are being asked to redirect to the /list page; the response status code can be 302 or 301.

301 is suitable for permanent redirection

The common scenario for 301 is domain forwarding. For example, when we visit http://www.baidu.com we are sent to https://www.baidu.com: after the request goes out, a 301 status code comes back together with a Location header holding the new address, and the browser then visits that address.

302 is used for temporary redirects

The difference between 302 and 301: with 302, visiting again still pulls from the server first, which then redirects. With 301, if the response has been cached, the browser reads the redirect target straight from the cached response header; if the origin server later changes the redirect target, users will only pick up the new target after clearing their cache. So 301 needs to be used with care.
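
A minimal Node.js sketch of both kinds (the paths are made up):

const http = require('http')

http.createServer((req, res) => {
  if (req.url === '/old-list') {
    // Permanent: the browser may cache this answer and skip asking again.
    res.writeHead(301, { Location: '/list' })
    res.end()
  } else if (req.url === '/promo') {
    // Temporary: the browser asks the server again next time.
    res.writeHead(302, { Location: '/list' })
    res.end()
  } else {
    res.end('list page')
  }
}).listen(8888)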

Content Security Policy

It makes our website more secure by:

  1. Limiting where resources may be loaded from
  2. Flagging resource fetches that overstep those limits

You can set default-src to specify, globally, which resources are allowed and for which range of resource types; more specific directives cover individual types:

  1. connect-src: the resources we connect to
  2. style-src: the resources requested by styles
  3. script-src: the resources requested by scripts
  ... and so on.

All of this is configured by returning the ‘Content-Security-Policy’ header in the response.

Some XSS attacks inject code through inline script, which can be disabled. Setting ‘Content-Security-Policy’: ‘default-src http: https:’ disables inline script; running one then produces the error: Refused to execute inline script because it violates the following Content Security Policy directive: "default-src http: https:". Either the 'unsafe-inline' keyword, a hash ('sha256-9aPvm9lN9y9aIzoIEagmHYsp/hUxgDFXV185413g/Zc='), or a nonce ('nonce-…') is required to enable inline execution. Note also that 'script-src' was not explicitly set, so 'default-src' is used as a fallback.

Disallowing external resources:

You can set ‘Content-Security-Policy’: "default-src 'self'". If an external resource is then referenced you get: Refused to load the script 'http://static.ymm56.com/common-lib/jquery/3.1.1/jquery.min.js' because it violates the following Content Security Policy directive: "default-src 'self'". Note that 'script-src' was not explicitly set, so 'default-src' is used as a fallback.

If you need to allow a specific external address, add that address to default-src (or the relevant *-src directive).
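
A minimal sketch of sending such a header from Node.js (the CDN host is just the one from the error above):

const http = require('http')

http.createServer((req, res) => {
  // Only same-origin resources plus scripts from the named CDN are allowed;
  // inline scripts stay blocked because 'unsafe-inline' is not listed.
  res.setHeader(
    'Content-Security-Policy',
    "default-src 'self'; script-src 'self' http://static.ymm56.com"
  )
  res.end('<html><body>csp demo</body></html>')
}).listen(8888)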

Everything else can be configured by consulting the Content-Security-Policy documentation.