
Preface

Before reading this article, set aside wrapper methods and libraries like $.ajax() and axios, go back to the original XMLHttpRequest, and then consider the newly designed Fetch API. You only need two simple prerequisites: XMLHttpRequest (hereafter referred to as XHR) and the Fetch API.

First, let’s define two concepts. Both XHR and the Fetch API are APIs that browsers expose to upper-level users (similar to how operating systems expose system APIs to applications such as browsers). These two sets of exposed APIs give upper-level consumers partial ability to manipulate HTTP messages. In other words, both are built on top of the HTTP protocol, and we can think of each as a partially functional HTTP client.

XHR was introduced early: it originated in work on Outlook Web Access for Microsoft Exchange Server 2000, shipped in Internet Explorer 5 in 1999, and eventually reached all major browsers. In other words, XHR is 21 years old (a technology that survives this long counts as a classic). XHR became widely known with the advent of the AJAX architecture (the term AJAX, now better known as a Web application architecture, first appeared in Jesse James Garrett’s 2005 article “Ajax: A New Approach to Web Applications”), and AJAX was used in all sorts of well-known applications (such as Google’s Gmail). Regardless of which promoted which, XHR addressed the big pain points of the day: it reduced network round trips, improved the user experience, and enhanced JavaScript’s ability to manipulate HTTP in the browser.

In technical terms, XHR worked well enough for asynchronous client-server communication at the time. Imagine the period: the Web architecture was growing rapidly, but web applications were still relatively simple and mostly about presenting information, so XHR’s designers didn’t have to think hard about architecture (to be fair, there were no hard requirements then, and overdesign is also a problem).

From a modern software engineering point of view, the design of XHR is quite confusing: it mixes the request, the response, and the event listeners into one object, and requires instantiation before a request can be sent (we’ll demonstrate this in code shortly). The problem is that configuration and invocation end up neither organized nor maintainable in real use (note that this was not a problem when web applications were simpler). In architectural terms, XHR violates the principle of separation of concerns (SoC), which expects a system to be designed so that its elements are separated from each other and each keeps a responsibility as single as possible (the layered TCP/IP protocol suite and the classic MVC architecture are textbook examples of SoC).

Let’s look at the original use of XHR:

let xhr = new XMLHttpRequest();
xhr.open('GET', url);
xhr.responseType = 'json';
xhr.onload = function() {
  console.log(xhr.response);
}
xhr.onerror = function() {
  console.log('error');
}

xhr.send();

It is clear from the above code that the HTTP request, the HTTP response, and the event listeners all live in the same XHR instance. The code lacks semantic organization and can sink into callback hell. Without the various wrapper libraries (which, being so well packaged, are also one reason the Fetch API has been so hard to roll out), writing XHR by hand would definitely be a pain.
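To make that concrete, here is a minimal sketch of what wrapper libraries essentially do: hide the XHR state machine behind a single promise-returning function (the name xhrGetJSON is my own invention, not from any library):

```javascript
// Hypothetical helper: wrap the XHR dance in a promise (XMLHttpRequest is a browser API).
function xhrGetJSON(url) {
  return new Promise((resolve, reject) => {
    const xhr = new XMLHttpRequest();
    xhr.open('GET', url);
    xhr.responseType = 'json';
    xhr.onload = () => resolve(xhr.response);
    xhr.onerror = () => reject(new Error('network error'));
    xhr.send();
  });
}

// Usage (browser only): xhrGetJSON('/api/data').then((data) => console.log(data));
```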

In the words of Jake Archibald, a developer advocate on the Google Chrome team and a personal favorite technologist of mine:

XHR was an ugly baby and time has not been kind to it. It's 16 now. In a few years it'll be old enough to drink, and it's enough of a pain in the arse when it's sober

For the past two decades, the Web has been the center of the software world. (Consider how poorly designed JavaScript is and yet how important it became.) As the Web developed, its user base grew, and with it the range of requirements (from plain text transmission to all kinds of multimedia); various technical solutions and standards were continually introduced into the Web ecosystem (through bodies such as the well-known WHATWG and Ecma International). XHR was clearly due for a change, and such changes tend to come in two broad directions:

  • Build on the legacy, but remain constrained by it (often called technical debt; compare how Vue 3 was originally designed to stay compatible with Vue 2’s design). This path produced the XMLHttpRequest Level 2 additions to XHR (providing, for example, the ability to manipulate binary data), and the XHR standard still receives occasional updates.
  • Redesign a new set of APIs (specifically, for this article, the Fetch API). The advantage is freedom from historical constraints; the problem is getting the developer community to accept the new thing (especially when the old solution still meets most needs).

Besides user convenience, standard writers also need to consider future trends and conflicts with existing proposals. Here is the timeline of the various related products (I call these designs products in order to think about them in product terms):

  • The first commit of the Fetch API appeared in 2011, and it was officially standardized and implemented by browser vendors around 2015.
  • jQuery 1.5 (which introduced the redesigned $.ajax()) came out in 2011.
  • The first version of Axios appeared in 2014.

If you think about JavaScript itself, the biggest thing happening in the JavaScript world at the time was the standardization of ES2015 (which meant Promise officially became part of the standard, although the concept of promises was already well known in the programming world), and each of these products designed its API around promises. (This is yet more evidence that asynchronous callbacks do not fit the linear, sequential way programmers think.)

Let’s take a look at the main considerations in the design of the Fetch API:

  • It uses the Promise structure, which is friendlier for upper-layer programming (but is also the root of the abort discussion later).
  • The Fetch API can be used as follows:
fetch(url)
  .then((r) => r.json())
  .then((data) => console.log(data))
  .catch((e) => console.log('error'));

The code is organized much more cleanly than the original XHR, and cleaner still with async/await syntax.
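For comparison, a sketch of the same request expressed with async/await (url here is a placeholder):

```javascript
// Same logic as the then-chain, flattened with async/await.
async function getData(url) {
  try {
    const r = await fetch(url);
    const data = await r.json();
    console.log(data);
  } catch (e) {
    console.log('error');
  }
}
```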

The whole design is much more low-level, which means a more flexible separation of concerns in practice: Request, Response, and Headers are separate objects. That also means more flexibility in using these APIs (such as combining Request with service workers, as in the code below):

self.addEventListener('fetch', function (event) {
  if (event.request.url === new URL('/', location).href) {
    event.respondWith(
      new Response('<h1>Hello!</h1>', {
        headers: { 'Content-Type': 'text/html' }
      })
    );
  }
});

In the above code, event.request is a Request instance. This means a response can be produced directly on the client side instead of having the browser hit the network, which allows a flexibility in combination with caches that XHR cannot achieve.
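This separation also shows up outside service workers: Request, Response, and Headers are ordinary constructible objects that can be built and inspected before (or instead of) touching the network. A small sketch (the URL is a placeholder):

```javascript
// Build a request up front; fetch(req) would send it, and a service worker
// could instead answer it locally with a hand-made Response.
const req = new Request('https://example.com/data', {
  method: 'GET',
  headers: { 'Accept': 'application/json' },
});
```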

However, the emergence of a new technical solution is bound to cause discussion in the industry, and even controversy. The Fetch API, despite its very advanced design philosophy, still generated a fair amount of controversy (especially where certain XHR features are difficult to implement with it), which I divide into two categories:

  • The first category is misconceptions; I’ll briefly describe a few of them.
  • The second category is missing features that make certain user requirements very inconvenient to implement.

**The first misconception (from a JavaScript community member) is that “we should not add high-level features to XHR, but should provide lower-level primitives.”** Obviously, this falls into the trap of assuming that a well-designed, concise API must be high level. If you take a closer look at the spec, you’ll see that XHR is currently defined in terms of fetch, as the XHR specification’s description of send() shows: “Fetch req. Handle the tasks queued on the networking task source per below.” That is, the Fetch API is actually the lower-level one, which gives more flexibility to upper-level developers (meaning average front-end developers).

The second misconception (and one often complained about by the community) is that the specification needs to provide something more complete, rather than a half-finished product.

To be honest, I think this is a double standard on developers’ part. When we are the developers, what we want for our own products is iterative development; when we are the consumers of third-party technologies or standards, our response becomes “how dare you present me with something so incomplete.” Isn’t that a double standard? Iterative publishing means the spec gets feedback from actual use, which drives improvements and can guide future designs. (If you’ve seen the history of CSS feature releases, you’ll understand what this means; specifically, the misuse of CSS vendor prefixes.)

A third misconception is that the Fetch API should reject on certain HTTP error codes (such as 400, 500, and so on) but does not. Here I side with the specification: because the Fetch API is a lower-level API, any status code, error or not, indicates that the HTTP client has received a response from the server, while a network error genuinely represents an exception.
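In practice this means HTTP-level errors must be surfaced explicitly via response.ok or response.status; a common sketch (checkStatus is my own name, not a built-in):

```javascript
// fetch only rejects on network failure, so turn HTTP errors into exceptions ourselves.
function checkStatus(response) {
  if (!response.ok) { // ok is true only for status codes 200-299
    throw new Error(`HTTP error: ${response.status}`);
  }
  return response;
}

// Usage: fetch(url).then(checkStatus).then((r) => r.json());
```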

The first type of argument tends to disappear after explanation; the real trouble is the second type: missing features that would make it convenient to implement what the old scheme could do (note that this refers to convenience of implementation, not various hack implementations). Specifically, the Fetch API lacks the following (an incomplete list, but enough for what this article discusses):

  • Request aborting is missing. Two specific requirements fall under this: how to interrupt a request, and how to interrupt a request that has run too long. I list timeout interrupts separately because XMLHttpRequest provides a dedicated timeout attribute for them.
  • Progress events: no easy way to obtain transfer progress. Displaying download progress is a very friendly feature when fetching a large file asynchronously, and XHR provides the progress event to implement it, as shown below:
const xhr = new XMLHttpRequest();
xhr.open('GET', '/foo');
xhr.addEventListener('progress', (event) => {
    const { lengthComputable, loaded, total } = event;
    if (lengthComputable) {
        console.log(`Downloaded ${loaded} of ${total} (${(loaded / total * 100).toFixed(2)}%)`);
    } else {
        console.log(`Downloaded ${loaded}`);
    }
});
xhr.send();

In XHR, abort() is called directly on the XHR instance to abort a request, and the abort event can be listened to in order to respond accordingly. For timeout interrupts, the XHR instance provides the timeout attribute, together with the timeout event (we won’t walk through the code here; just Google it).
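For completeness, a minimal sketch of how the two XHR mechanisms fit together (browser-only; the function name and the '/slow' URL are placeholders of my own):

```javascript
// Hypothetical helper: start a GET with a timeout, return a manual-abort handle.
function xhrWithTimeout(url, ms) {
  const xhr = new XMLHttpRequest();
  xhr.open('GET', url);
  xhr.timeout = ms;                               // fires the timeout event and aborts automatically
  xhr.ontimeout = () => console.log('timed out');
  xhr.onabort = () => console.log('aborted');
  xhr.onload = () => console.log(xhr.response);
  xhr.send();
  return () => xhr.abort();                       // caller can abort manually
}

// Usage (browser only): const cancel = xhrWithTimeout('/slow', 5000);
```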

The Fetch API did not provide this functionality at first. (You might pause here and ask why such a simple-seeming API triggered such heated debate for fetch.) The discussion lasted a total of two years, making it the most hotly debated feature of the Fetch API. Here are the stages of the discussion (I won’t recount the details of the debate, only the tradeoffs of the schemes; see the links for details):

  • The original discussion, whatwg/fetch issue #27; the purpose of this issue was to provide upper-level developers with a convenient way to terminate a fetch.
  • Cancelable promises were rejected for security reasons; details are in tc39/proposal-cancelable-promises#4. (As an aside, the ensuing controversy between Domenic, the proposer of cancelable promises, and the strong objectors led to the proposal being withdrawn.)
  • whatwg/fetch issue #447 continued the discussion in a separate thread (which shows the intensity of the debate).
  • The result is the AbortController standard in the WHATWG DOM spec (dom-abort-controller).

In the initial discussion, there were two main options:

1. Use the cancelable promise scheme (proposed by Jake Archibald), that is, wrap a dedicated CancellablePromise, as follows:

var p = fetch(url).then(r => r.json());
p.abort(); // cancels either the stream, or the request, or neither (depending on which is in progress)

var p2 = fetch(url).then(response => {
  return new CancellablePromise(resolve => {
    drainStream(
      response.body.pipeThrough(new StreamingDOMDecoder())
    ).then(resolve);
  }, { onCancel: _ => response.body.cancel() });
});

First, a scheme like this has to settle the naming question: should the operation be called abort (consistent with XHR), terminate, or cancel, and which name avoids ambiguity? It is worth noting that a new API design must take its semantics into account.

Naming aside, you can see that this approach is particularly friendly to upper-level developers, and that it fits the original XHR usage pattern.

But this scheme was strongly opposed on the other side, mainly by getify (Kyle Simpson, author of You Don’t Know JS, a series I think is excellent, though he is prone to pushing his own views). His objections went to deeper design questions; let’s look at his main arguments:

1. The cancelable promise design deviates from the design of promises themselves. If we put abort into the creation context of a promise, the promise loses its trustworthiness. This discussion has gone on since promises were first introduced into ES6: whether a promise settles is determined by resolve and reject fixed at creation time, which is what gives promises their immutability.

2. “Abort != promise cancelation; abort == async cancel.” That is, canceling the promise would mean terminating the fetch, which amounts to back-pressure. Would this cause problems for some designs built on async/await? When designing a new API, the specification also needs to consider whether the broader scheme is affected, since every participant in the ecosystem is different (for example, is there an impact on the Streams API?).

3. He argues that this scheme lets any single holder of a promise reference cancel the whole promise: action at a distance (an anti-pattern generally discouraged in modern programming because of the uncontrollability it introduces; details are on the wiki).

At this point, the parties had each laid out their design considerations, but no one could convince anyone else, until the first scheme was rejected because of its possible security problems. The need for abort remained, and while the controller + signal scheme may look less elegant, its shortcomings mattered less once the first scheme turned out to have major problems. The second option, controller + signal, was finally adopted (see link 3 above for more details).

By now, you can see how the simple feature introduction above relates to deeper design issues.

As I have been saying in Why the Minimum Delay of setTimeout is 4ms, “For the same demand, different participants have different thinking angles, which will lead to different schemes, and thus produce many different tradeoffs. How to think, how to weigh, is an architect’s skill.”

Although the discussion dragged on, we finally got a solution (still unacceptable to many, but sometimes something is better than nothing), as shown in the code below:

const controller = new AbortController();
const signal = controller.signal;

setTimeout(() => controller.abort(), 5000);

fetch(url, { signal }).then(response => {
  return response.text();
}).then(text => {
  console.log(text);
});

The above code combines a controller with setTimeout to implement a timeout interrupt.
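The same pattern is easy to package as a reusable helper (fetchWithTimeout is my own name, not a standard API; clearing the timer prevents it firing after a fast response):

```javascript
// Hypothetical helper built on AbortController; the returned promise rejects
// with an AbortError (err.name === 'AbortError') when the deadline passes first.
function fetchWithTimeout(url, ms, options = {}) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), ms);
  return fetch(url, { ...options, signal: controller.signal })
    .finally(() => clearTimeout(timer));
}
```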

Unfortunately, however, **progress events** have not seen much progress. The first few proposals were rejected, and the matter remains stalled as of this writing (July 2020). I think the main reason is that the specification editors have limited bandwidth, and this feature is not an especially high priority.

However, as upper-layer application authors, we can still combine the Streams API (by the way, a great feature that gives front-end developers more ability to manipulate the underlying network data) with service workers to meet these requirements. I won’t elaborate the implementation in detail here, but instead point to Fetch Progress Indicators, which I personally think is very good.
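To give a flavor of the approach, here is a minimal sketch (readWithProgress is my own name) that consumes a Response body chunk by chunk via the Streams API and reports progress along the way:

```javascript
// Read a Response's body stream manually so each chunk can update a progress UI.
async function readWithProgress(response, onProgress) {
  const total = Number(response.headers.get('Content-Length')) || 0; // 0 if unknown
  const reader = response.body.getReader();
  const chunks = [];
  let loaded = 0;
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    chunks.push(value);
    loaded += value.length;
    onProgress(loaded, total);
  }
  return new Blob(chunks); // the reassembled body
}

// Usage: fetch(url).then((r) => readWithProgress(r, (loaded, total) => { /* update UI */ }));
```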

Conclusion

In the body of this article, we described XHR’s design issues and the Fetch API’s design philosophy in detail, and analyzed the advantages and disadvantages of each. I should still mention that this is not the whole Fetch API; one article cannot cover every detail.

Indeed, for upper-level developers, understanding these design concepts and the schemes and tradeoffs behind them is very helpful for broadening our vision and for understanding the computer science concepts and good practices involved in the design. In practice, however, we still need to decide based on real-world business requirements, which tend to be case-specific.

In other words, whether to use the old XHR or the new Fetch API (this may sound like a platitude, but it is exactly the point) should be decided by the actual business situation.

Let me give you a few trade-offs:

  • Compatibility is extremely important to you. Plenty of apps still need to support older browsers (in a recent Twitter poll on still needing to be compatible with IE11, around 25% of respondents’ support work still involved IE11). If you want to use the Fetch API in older browsers, you need a polyfill (implemented on top of XHR).
  • Whether you want to use the Streams API and service workers together in your application. If these two features are particularly important to you, I recommend the Fetch API.

To answer the last question: besides compatibility, what inhibits the Fetch API from replacing XHR?

Because the various excellent wrapper libraries (built on XHR) already meet most of the upper layer’s functional needs without major drawbacks, why take the risk of switching to the new Fetch API? Doesn’t this show that you have quietly been making technology tradeoffs all along?

Despite the problems, the future of the Fetch API is still bright, as npm download counts for the polyfill easily illustrate:

  • whatwg-fetch has 8,744,612 weekly downloads
  • axios has 10,041,206 weekly downloads

It’s not that far apart, is it?