For the other translations in this series, see the "How JS Works" column by small white 1991 on Juejin (juejin.cn). Recommendation index for this article: 2. It is popular-science in nature, a light read that will still level up your understanding.

This is Chapter 6 of the "How JS Works" series.

This time we'll take a look at how WebAssembly works and, in particular, why it can beat JavaScript on these fronts: load time, execution speed, garbage collection, memory usage, access to platform APIs, debugging, multithreading, and portability. The way web applications are built is on the verge of a revolution; it's still early days, but think about how different building for the web is going to be!

What’s WebAssembly

WebAssembly (WASM) is a fast, low-level binary bytecode format for the Web.

WASM lets you write code in languages other than JavaScript (such as C, C++, and Rust) and compile it to WebAssembly ahead of time. The result is a web application that loads and runs very fast.

Loading time

To run JavaScript, the browser has to load all of the .js files as plain text and parse them.

WebAssembly loads faster because the browser only has to fetch WASM files that have already been compiled. WASM is an assembly-like, low-level language shipped in a very compact binary format.
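
As a rough sketch of what that looks like from the JavaScript side, a module can even be compiled while it is still downloading. The file name module.wasm and the exported add function below are placeholders, not something from the article:

```js
// Sketch: fetch a precompiled binary and compile it as it streams in.
// "module.wasm" and the exported "add" function are hypothetical.
async function loadWasm() {
  const { instance } = await WebAssembly.instantiateStreaming(
    fetch('module.wasm'), // the compact binary produced ahead of time
    {}                    // no imports needed for this tiny module
  );
  console.log(instance.exports.add(2, 3)); // call an exported function
}

loadWasm();
```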

Execution

Running WASM is only about 20% slower than executing native code, which is remarkable. It is a format that gets compiled into a sandboxed environment and runs under a lot of constraints that guarantee it has no security holes, or at least make them expensive to exploit. That slight slowdown compared to pure native code is the price, and it will only shrink in the future.

Even better, this is not specific to one engine: all major browser engines now support WebAssembly and show very similar execution times.
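
If you want to guard against an environment that lacks support, a simple feature check is enough; this is a common pattern sketched for illustration, not something from the article:

```js
// Sketch: feature-detect WebAssembly before relying on it.
if (typeof WebAssembly === 'object' &&
    typeof WebAssembly.instantiate === 'function') {
  console.log('WebAssembly is supported here');
} else {
  console.log('Falling back to a plain JavaScript code path');
}
```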

Take a look at what happens inside V8:

On the left, we have some JavaScript source containing JS functions. The first step is to parse all of that source text into an AST, an in-memory representation of your program's logic. Once the AST has been generated, V8 converts it directly into machine code: it walks the tree, emits machine code, and you end up with a compiled version of the function. There is no attempt to speed anything up at this stage.
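
To make that concrete, here is a tiny function together with a rough sketch of the AST a parser would build for it. The node names follow the common ESTree shape used by most JS tooling, not anything V8-specific:

```js
// Sketch: roughly what the parser produces for a tiny function.
function square(n) { return n * n; }

// FunctionDeclaration
//   id: Identifier "square"
//   params: [ Identifier "n" ]
//   body: BlockStatement
//     ReturnStatement
//       argument: BinaryExpression "*"
//         left:  Identifier "n"
//         right: Identifier "n"
```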

What happens next in the V8 pipeline?

Now TurboFan, V8's optimizing compiler, enters the picture. While your JS application is running inside V8, the engine watches for functions that are running slowly or are bottlenecks, flags them as hot spots, and hands them to TurboFan in the background. This optimizing JIT produces much faster code for those functions, but the profiling and optimization themselves eat up a lot of CPU.

That solves the performance problem, but all of this monitoring and optimizing consumes CPU resources, and on mobile devices it means higher battery consumption. WASM doesn't need any of it; it slots into the workflow like this:

WASM has already been optimized at compile time, so no parsing and no transformation is needed at runtime. You have an optimized binary that is fed straight into the back end, which generates machine code. All of the optimization was already done by the compiler that produced the binary.

This makes WASM execution more efficient, because many steps in the pipeline can simply be skipped.

The memory model

When a C++ program is compiled to WebAssembly, its memory is one contiguous block with no 'holes'. One WASM feature that improves security is that the execution stack is separate from linear memory. In a native C++ program, the heap and the stack share one address space: you can grab a pointer and poke around in stack memory, changing the value of a variable you were never supposed to touch.

This creates a vulnerability for malware.

WebAssembly works completely differently. The execution stack is kept separate from the WebAssembly program itself, so there is no way to reach in and change things like local variables. Functions also use integer offsets instead of pointers: function references live in an indirect function table, and calls are computed jumps into the module's own functions. Built this way, you can load multiple WASM modules side by side, offset all the indexes, and everything still works.
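
Seen from JavaScript, that linear memory is just a contiguous, growable buffer, and function references live in a table addressed by integer index. The sizes in this sketch are made up for illustration:

```js
// Sketch: a module's linear memory is one contiguous, resizable buffer.
const memory = new WebAssembly.Memory({ initial: 1 }); // 1 page = 64 KiB
const bytes = new Uint8Array(memory.buffer);
bytes[0] = 42; // reads and writes stay inside this buffer; the execution
               // stack is not reachable from here

// Function references live in a table and are invoked by integer index,
// never through raw pointers.
const table = new WebAssembly.Table({ initial: 2, element: 'anyfunc' });
console.log(table.length); // 2 slots, filled in by the module itself
```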

For more details on memory models and management, check out chapter 4.

GC

As you know, memory management in JS is handled by the garbage collector. The story with WebAssembly is a little different: the languages it targets use manual memory management. You can ship your own GC inside a WASM module, but that is a complicated job, and for now WebAssembly is designed around C++ and Rust. Because WASM is so low-level, languages that sit only one step above assembly are the easiest to compile to it. C can use plain malloc, C++ can use smart pointers, and Rust takes a completely different approach (ownership transfer). These languages don't use a GC, so they don't need a complex runtime to keep track of memory. WebAssembly is a natural fit for them.
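
For example, with a module produced by a toolchain such as Emscripten, allocation and deallocation stay explicit even when driven from JS. The _malloc and _free export names below follow Emscripten's convention and, like the sum function and the memory export, are assumptions for the sake of the sketch:

```js
// Sketch: manual memory management as seen from JS. The _malloc/_free
// exports, the sum function and the memory export are assumed to exist.
function sumBytes(instance, data) {
  const { _malloc, _free, sum, memory } = instance.exports;

  const ptr = _malloc(data.length);             // allocate inside the module
  new Uint8Array(memory.buffer).set(data, ptr); // copy input into linear memory
  const result = sum(ptr, data.length);         // hypothetical exported function
  _free(ptr);                                   // no GC: we release it ourselves
  return result;
}
```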

That said, these languages were not designed to handle the things JS does all day, such as DOM manipulation; it makes little sense to write an entire HTML application in C++, because C++ was never designed for that. For the most part, engineers write C++ and Rust that targets WebGL or highly optimized libraries (heavy math, for example). In the future, however, WebAssembly will also support languages that use a GC.

Platform API

JavaScript programs can directly access whatever platform-specific interfaces the runtime that executes the JS exposes. For example, if you run JS in the browser, you get a whole collection of Web APIs that let your application control browser and device functionality: the DOM, CSSOM, WebGL, IndexedDB, the Web Audio API, and so on.

WebAssembly modules, by contrast, have no direct access to any platform API; everything is mediated by JS. If a WASM module wants to use a platform-specific API, it has to call it through JavaScript.

For example, if you want to call console.log from C++, you have to go through JS, and that detour naturally has a performance cost. This won't stay the norm: the spec is expected to expose platform APIs to WASM directly, so that you can ship applications without the JS glue.
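
Today that mediation looks roughly like this: JavaScript hands the function to the module through its import object. The env.log import name, the demo.wasm file, and the run export are placeholders:

```js
// Sketch: expose console.log to a WASM module through its import object.
// "env.log", "demo.wasm" and the "run" export are hypothetical names.
const imports = {
  env: {
    log: (value) => console.log('from wasm:', value), // JS does the real work
  },
};

WebAssembly.instantiateStreaming(fetch('demo.wasm'), imports)
  .then(({ instance }) => {
    instance.exports.run(); // the module calls env.log internally
  });
```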

Source maps

When your JS code is minified, you still need a way to debug it properly. That's where source maps come in.

A source map is a way to map a combined/minified file back to its unbuilt state. When you build for production, you generate a source map alongside the minified bundle; it holds all the information about the original files. When you ask where a given line and column of the generated JS came from, the source map is consulted and returns the original location.
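
In practice, the generated bundle usually ends with a comment that points the devtools at the map file; the file name here is a placeholder:

```js
// Sketch: the last line of a minified bundle tells the devtools where its
// source map lives. "bundle.min.js.map" is a hypothetical file name.
function t(n){return n*n}console.log(t(4));
//# sourceMappingURL=bundle.min.js.map
```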

WebAssembly does not support source maps yet, because the specification for them hasn't been written, but it will (probably soon).

The idea is that when you set a breakpoint in your C++ code, you will see C++ rather than WebAssembly. At least, that's the goal.

Multithreading

JS is single-threaded. You can use Web Workers, but only in fairly specific scenarios: basically, any CPU-intensive computation that would block the main UI thread is a good candidate for a Web Worker. Keep in mind, though, that Web Workers have no access to the DOM. WebAssembly does not support multithreading yet either, but it is coming. WASM threads will be very close to native threads (C++-style threads). Having real threads creates a lot of new opportunities; of course, it also opens the door to abuse.
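
As a sketch of that Web Worker pattern (the heavy-compute.js script and the message shape are hypothetical):

```js
// Sketch: offload a CPU-heavy computation so it doesn't block the UI thread.
// "heavy-compute.js" is a hypothetical worker script.
const worker = new Worker('heavy-compute.js');

worker.postMessage({ numbers: [1, 2, 3, 4, 5] }); // hand the work to the worker
worker.onmessage = (event) => {
  console.log('result from worker:', event.data);  // main thread stays responsive
};

// Inside heavy-compute.js the worker would do something like:
//   self.onmessage = (event) => {
//     const total = event.data.numbers.reduce((a, b) => a + b, 0);
//     self.postMessage(total);
//   };
```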

Portability

JS can now run anywhere, from the browser to the server, and even on embedded systems.

WebAssembly is designed to be safe and portable, just like JS. It will run in any environment that supports WASM (every modern browser, for example).

WebAssembly has the same portability goal that Java pursued in its early days with applets.

In what scenarios is WebAssembly better than JS?

The first release of WebAssembly focuses on CPU-heavy scenarios. The most mainstream case is games, with their tons of pixel and texture calculations. You can write a program in C++/Rust against the OpenGL bindings you already know, compile it to WASM, and run it in the browser.

Another (high-performance) scenario for WebAssembly is implementing libraries that do a lot of CPU-intensive work, image manipulation for example.

As mentioned earlier, WASM can also reduce battery drain on mobile devices, because most of the processing steps have already been done ahead of time, at compile time.

In the future you will be able to consume WASM binaries as libraries even if you never wrote or compiled that code yourself; there are already projects on npm starting to use this technology.

For DOM manipulation and heavy use of platform APIs, it makes more sense to stick with JavaScript, since it adds no extra performance overhead there and has native access to those interfaces.