Deno's description includes the phrase "Aims to be browser compatible." So what exactly does "browser compatible" mean here?

Let me tell you a little bit about my understanding.

First of all, compatibility here definitely does not mean that Deno runs inside the browser: Deno is a runtime that sits at the same level as the browser itself.

Another common misunderstanding is that "browser compatible" means Deno will support something like the UMD module pattern familiar from Node.js. It is not about syntax-level compatibility either, such as ES3 or ES5 support, so don't assume you can make the same code run on Node.js and Deno just by passing it through Babel.

In Deno’s Roadmap, the authors write:

Deno does not aim to be API compatible with Node in any respect. Deno will export a single flat namespace “deno” under which all core functions are defined. We leave it up to users to wrap Deno’s namespace to provide some compatibility with Node.
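What "wrapping Deno's namespace" might look like is left open by the roadmap. As a rough, purely illustrative sketch (assuming today's global `Deno` namespace; `readFileCompat` is a made-up name, not part of any official compatibility layer), a Node-style wrapper could adapt the promise-based API to an error-first callback:

```ts
// Adapting the promise-based Deno.readFile to Node's error-first
// callback convention. Illustrative only.
type ReadFileCallback = (err: Error | null, data?: Uint8Array) => void;

export function readFileCompat(path: string, callback: ReadFileCallback): void {
  // Deno.readFile returns a Promise<Uint8Array>; forward the result or
  // the error in Node's (err, data) order.
  Deno.readFile(path)
    .then((data) => callback(null, data))
    .catch((err) => callback(err));
}
```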

By compatibility, I mean compatibility with browser APIs and the browser ecosystem. (Waiting to get punched in the face)

There was an issue, "Discussion: struct the Browser-compatible APIs" (#82), that discussed exactly this and listed some of the browser APIs to be made compatible:

  • High level
    • Console ✓
    • URL ✓
    • File/FileList/FileReader/Blob
    • XMLHttpRequest/Fetch
    • WebSocket
    • URLSearchParams
  • Middle level
    • AudioContext/AudioBuffer
    • Canvas

WebGL and even GPU support were also discussed. We can roughly guess that one of Deno's goals is to let code written for the browser run directly on Deno.
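As a minimal sketch of that idea (assuming a Deno version in which fetch is implemented and network access is granted, e.g. `deno run --allow-net`), the very same code could run unchanged in a browser module and in Deno:

```ts
// Browser-style code that relies only on globals shared with Deno:
// fetch, URL, and console. The endpoint is just an illustrative example.
const url = new URL("/repos/denoland/deno", "https://api.github.com");

const response = await fetch(url);
const repo = await response.json();

console.log(repo.full_name, repo.description);
```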

My point remains: Deno is not the next generation of Node.js. (Waiting to get punched in the face again)

Deno is a "secure TypeScript runtime on V8": a secure TypeScript runtime built on V8.

Browsers can be thought of as secure JavaScript runtimes, where all JavaScript code runs in a sandbox. Although the browser is installed on your computer, the JavaScript it runs can come from anywhere in the world; in other words, the JavaScript running in the browser is untrusted code. Ensuring that this code cannot damage your system is a problem the browser's security mechanisms have to solve. Node.js does not have this problem: like any other web server, Node.js runs trusted code.

A good example is the security flaw found in V8's escape analysis: Chrome took emergency action and disabled escape analysis in the next release, while Node.js was largely unaffected, because it runs trusted code.

In this respect, Deno is positioned much like a browser.

  • A mistake by the V8 team slowed down the entire Internet

From this we can already see one difference between being compatible with the server-side ecosystem and being compatible with the browser-side ecosystem:

  • Browsers run untrusted code
  • Servers run trusted code
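Deno makes this "untrusted by default" stance concrete with its permission model. The sketch below assumes current Deno behavior: without an explicit grant (e.g. `deno run --allow-read app.ts`), depending on flags Deno either prompts for the permission or rejects the call.

```ts
// Deno code gets no file, network, or environment access unless the
// user grants it explicitly.
try {
  // Without read permission, this call is rejected with a permission
  // error instead of silently touching the disk.
  const contents = await Deno.readTextFile("/etc/hosts");
  console.log("read", contents.length, "characters");
} catch (err) {
  console.error("blocked by the permission sandbox:", err);
}
```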

Let's look at the server-side ecosystem from a different angle.

Another misconception is that Node.js is already fully compatible with many of the APIs listed above: Console, URL, XMLHttpRequest/Fetch, and so on. It is true that many of these APIs do have implementations in Node.js.

When Node.js was first developed, APIs like File, URL, and Buffer did not exist on the browser side. Because Node.js was positioned as a server-side platform, it took its cues from other web servers and server-side programming languages instead: its File System module, for example, implements a series of POSIX-compatible functions.

It's also worth noting the convergence between Node.js and browsers. For example, the URL API that Node.js 7 added follows the WHATWG URL standard, the same standard implemented in all major browsers today.
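As a small illustration, the same WHATWG URL code runs verbatim in modern browsers, in Deno, and in Node.js (where URL has been a global since Node 10, and was available via require('url').URL from Node 7):

```ts
// The WHATWG URL API, shared by browsers, Deno, and Node.js.
const url = new URL("https://example.com/search?page=2&lang=en");

console.log(url.hostname);                  // "example.com"
console.log(url.searchParams.get("page"));  // "2"

url.searchParams.set("lang", "zh");
console.log(url.toString());
// "https://example.com/search?page=2&lang=zh"
```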

WHATWG stands for the Web Hypertext Application Technology Working Group, an organization with quite a history: when the W3C decided to abandon HTML and bet its future on XHTML 2.0, the WHATWG stuck with HTML and drew up plans for its next generation. Eventually the WHATWG persuaded the W3C to release HTML5 together with it.

The Web is evolving quickly today, and we have these organizations' efforts to thank for it.

Now a question arises: if Node.js, browsers, and Deno all implement these standards, will they converge into the same thing? Will Deno replace Node.js as the next generation of Node.js? Or will Deno end up as just another platform like Node.js, only harder to use, and eventually be marginalized or abandoned?

No. Performance and security pull against each other, and the File/FileReader/Blob APIs are inherently designed for the browser's sandboxed environment (many WHATWG standards are designed for browsers). Node.js does not implement these APIs and isn't going to, because what a server-side environment needs is a file system; so instead of embracing WHATWG here, Node.js opted for POSIX.

While we're at it, let's dig a little deeper and see why Node.js doesn't use Blob and FileReader to read files.

On the browser side, files usually come from the network, referenced by URLs, or from a form field where the user picks a file. Either way, the browser reads the file and loads its contents into an in-memory buffer, at which point JavaScript can manipulate it through Blob. Node.js does not implement these APIs; instead it builds the Stream module on top of the file system. Server-side programming languages such as Java and PHP provide streams as well.
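A rough sketch of the contrast (illustrative only; the browser half assumes a File coming from an `<input type="file">`, and each half runs in its own environment):

```ts
import { createReadStream } from "node:fs";

// Browser side: the file is already an in-memory Blob/File, and
// FileReader reads out of that buffer.
function readInBrowser(file: File): void {
  const reader = new FileReader();
  reader.onload = () => console.log("length:", (reader.result as string).length);
  reader.readAsText(file); // operates on data the browser has buffered
}

// Node.js side: instead of Blob/FileReader, the fs module exposes the
// file system directly and Stream delivers the data chunk by chunk.
function readInNode(path: string): void {
  const stream = createReadStream(path);
  stream.on("data", (chunk) => console.log("chunk of", chunk.length, "bytes"));
  stream.on("end", () => console.log("done"));
}
```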

If we treat Node.js as a web server and compare it side by side with Nginx: for a static file server written in JavaScript, Nginx can easily beat Node.js by something like ten times in performance.

We can analyze the difference at a lower level. A few concepts are involved:

  • User space
  • Kernel space
  • Process context
  • Interrupt context
  • DMA
  • Zero Copy

For security, the operating system does not allow user code to operate hardware directly. To protect the kernel, memory is divided into two parts: kernel space and user space. User code runs in user space, and when it needs low-level functionality such as reading a file, it enters the kernel through a system call.

When a user process enters kernel space from user space through a system call, the system needs to save the user process's context and restore it when control returns from kernel space to user space.

For a static file server, the process roughly looks like this:

  • The read system call is made, and the file is copied from disk into a kernel buffer
  • read returns, and the data is copied from the kernel buffer into the user buffer
  • The write system call copies the data from the user buffer into the kernel's socket buffer
  • The data is copied from the socket buffer to the protocol engine (the network card)

You can see that the file was copied four times during the entire process.
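To make this concrete, here is a deliberately naive Node.js static handler (a sketch, not production code; the file path and port are made up) in which every request walks the four-copy path described above:

```ts
import { createServer } from "node:http";
import { readFile } from "node:fs";

createServer((_req, res) => {
  // read(): disk -> kernel buffer -> user-space buffer held as `data`
  readFile("./index.html", (err, data) => {
    if (err) {
      res.statusCode = 404;
      res.end("not found");
      return;
    }
    // write(): user-space buffer -> kernel socket buffer -> protocol engine
    res.end(data);
  });
}).listen(8000);
```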

Nginx uses sendfile to achieve zero copy.

The process becomes:

  • The sendfile system call is made, and the file is copied from disk into a kernel buffer
  • The data is copied from that kernel buffer to the kernel's socket buffer
  • The data is copied from the socket buffer to the protocol engine

You can see that only three copies happen in this process, and the data never passes through user space, so there is no copying between user space and kernel space and no need to save and restore the process context for that data path. There is still room for optimization, because one buffer-to-buffer copy still happens inside the kernel: since Linux kernel 2.4, DMA can pass the data from the kernel buffer directly to the protocol engine.

Although it is called zero copy, data is still copied from disk into memory, which is unavoidable from the operating system's point of view. Zero copy means there is no redundant copying: the data does not have to be duplicated between buffers inside the kernel by the CPU, and with DMA the transfer requires no CPU involvement at all.

In Nginx, only two copies are needed, whereas Node.js needs four. There are ways to get zero copy in Node.js as well, for example through operating-system facilities or C++ addons.

On the other hand, Node.js's performance cost is not just the four copies and the saving and restoring of process context: data also has to cross the boundary between C++ and JavaScript over and over again.

Going back to the browser: it has no need for this kind of facility, because JavaScript in the browser doesn't just run in user space, it runs inside a sandbox on top of that.

So instead of guessing what "browser compatible" means here, let's compare the differences between browsers and servers:

  • Browsers run untrusted code, servers run trusted code
  • Browsers follow W3C/WHATWG standards, servers follow POSIX
  • Browsers care about performance at the API layer, servers care more about performance at the operating-system layer
  • Browser capabilities are restricted, server capabilities are not
