How JavaScript works: a comparison with WebAssembly + why in certain cases it's better to use it over JavaScript

Originally written by Alexander Zlatkov

This is the sixth installment in a series devoted to exploring JavaScript and its building blocks. Along the way of identifying and describing the core elements, we also share some rules of thumb we used when building SessionStack, a lightweight JavaScript application that has to be robust and highly performant in order to help users see and reproduce their web application's defects in real time.

  1. How JavaScript works: An overview of the engine, runtime, call stack
  2. How JavaScript works: 5 Tips for Optimizing Code in V8
  3. How JavaScript works: Memory management + how to handle four common memory leaks
  4. How JavaScript works: Event loops and the rise of asynchronous programming + 5 tips for better async/await coding
  5. How JavaScript works: Take a deep dive into WebSockets and HTTP/2 with SSE, and make the right choice between the two

This time we’ll take a look at how WebAssembly works, and more importantly how it differs from JavaScript in terms of performance: load time, execution speed, garbage collection, memory usage, platform API calls, debugging, multithreading, and portability.

The way we build Web applications is on the verge of a revolution — it’s still early days, but the way we think about Web applications is changing.

First, let’s look at the capabilities of WebAssembly

WebAssembly (also known as WASM) is an efficient, low-level bytecode for the web.

WASM lets you write programs in languages other than JavaScript (such as C, C++, Rust, or others) and then compile them (ahead of time) into WebAssembly.

The result is web applications that load and execute very fast.

Loading time

In order to load JavaScript, the browser has to fetch all of the textual .js files.

WebAssembly loads faster in the browser because only the already-compiled WASM files have to be transferred over the network. And WASM is a very compact, low-level binary format that resembles assembly language.
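
To make the load-time difference concrete, here is a minimal sketch (assuming a browser with WebAssembly support) of how JavaScript fetches and instantiates a pre-compiled binary. The file name module.wasm and the exported add function are placeholders for illustration, not something from this article.

```
// Minimal sketch: load a pre-compiled WASM binary in the browser.
async function loadWasm() {
  // instantiateStreaming compiles the module while its bytes are still
  // downloading, so there is no large text file to parse first.
  const { instance } = await WebAssembly.instantiateStreaming(fetch('module.wasm'));
  return instance.exports;
}

loadWasm().then((exports) => {
  console.log(exports.add(2, 3)); // call a function exported by the module
});
```

An equivalent JavaScript bundle would have to be downloaded as text, parsed, and compiled before any of it could run.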

Execution

Today, WASM runs only about 20% slower than native code. In any case, this is an impressive result: it is a format that is compiled into a sandboxed environment and runs under a lot of constraints that guarantee it has no security vulnerabilities, or makes them very hard to exploit. The slowdown compared to true native code is small. What's more, it will only get faster in the future.

Even better, it’s browser-independent — WebAssembly support is now added to all major engines, and execution times are similar.

To understand how fast WebAssembly executes compared to JavaScript, you should first read our article on JavaScript engines.

Let's take a look at what happens in V8:

V8’s approach: Deferred compilation

On the left, we have some JavaScript source code containing JavaScript functions. It first has to be parsed, which turns all the strings into lexical tokens and produces an abstract syntax tree (AST). The AST is an in-memory representation of the program's logic. Once that representation has been generated, V8 goes straight to machine code: the process is basically to walk the syntax tree, emit machine code, and end up with compiled functions. There is no real attempt to speed this up at this stage.

Now, let’s look at the next phase of the V8 pipeline:

V8 pipeline design.

This time we have TurboFan, one of V8's optimizing compilers. While your JavaScript application runs, a lot of code goes through V8. TurboFan watches that code for parts that run slowly, for bottlenecks and hot spots, and optimizes them. It pushes such code to the back end, an optimizing JIT that creates faster versions of the functions that consume most of the CPU time.

That solves the problem above, but the catch is that the process of analyzing the code and deciding what to optimize also consumes CPU, which in turn means higher battery consumption, especially on mobile devices.

Well, WASM doesn’t need all of this — it gets inserted into the workflow, as shown below:

V8 pipeline design + WASM.

WASM, by contrast, was already optimized at compile time. Most importantly, parsing is no longer needed: you have an optimized binary that can be plugged directly into the back end that generates machine code. All of the optimization has already been done in the compiler's front end.

This makes executing WASM more efficient because many steps in the process can be easily skipped.

The memory model

WebAssembly trusted and untrusted states.

For example, the memory of a C++ program compiled to WebAssembly is a contiguous block with no "holes" in it. One of the WASM features that helps improve security is keeping the execution stack separate from linear memory. In a native C++ program you have a heap, you allocate from the bottom of it, and the stack grows down from its top, which makes it possible to obtain a pointer into stack memory and tamper with variables you are not supposed to touch.

This is a flaw that many malware exploit.

WebAssembly takes a completely different model: the execution stack is kept separate from the WebAssembly program itself, so there is no way for the program to corrupt things like stack variables. Also, functions are referenced by integer offsets rather than pointers: calls go through an indirect function table, and those computed indices jump to functions inside the module. This design lets you load multiple WASM modules side by side and shift all the indexes without them interfering with each other.
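
As a rough illustration of linear memory and integer offsets (a sketch of the concept, not of how any particular engine implements it), here is what the JavaScript side sees; the WASM module that would import this memory is left out:

```
// Linear memory is a plain, contiguous buffer addressed by integer offsets.
const memory = new WebAssembly.Memory({ initial: 1 }); // 1 page = 64 KiB

// JavaScript and the WASM module view the same bytes through typed arrays.
const view = new Uint8Array(memory.buffer);
view[0] = 42;          // write at offset 0
console.log(view[0]);  // 42

// Growing memory adds whole pages; a WASM access outside the current size
// traps instead of reaching the engine's own stack or objects.
memory.grow(1);
```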

For more information about memory models and management in JavaScript, check out our very detailed article on this topic.

Garbage collection

You already know that JavaScript memory management is handled by a garbage collector.

The case with WebAssembly is a little different: it supports languages that manage memory manually. You can ship a GC with your WASM module, but it's a complex task.

Currently, WebAssembly is designed around the C++ and Rust use cases. Because WASM is very low-level, only programming languages that sit just one step above assembly language are easy to compile to it. C can use plain malloc, C++ can use smart pointers, and Rust takes a completely different approach (a whole separate topic). These languages don't use a GC, so they don't need all the complex runtime machinery to keep track of memory. WebAssembly is a natural fit for them.
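
As a hedged sketch of what manual memory management looks like at the JavaScript/WASM boundary, assume a module (built with a toolchain such as Emscripten) that exports its allocator as malloc and free; the export names are an assumption, not a universal convention:

```
// Assumed exports: memory, malloc, free. Names depend on the toolchain.
function copyStringIntoModule(exports, text) {
  const { memory, malloc, free } = exports;
  const bytes = new TextEncoder().encode(text);

  const ptr = malloc(bytes.length);               // integer offset into linear memory
  new Uint8Array(memory.buffer).set(bytes, ptr);  // no garbage collector involved

  // ... call some exported function that reads the bytes at `ptr` ...

  free(ptr);                                      // the caller releases it explicitly
}
```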

Also, these languages were not designed to call into complex JavaScript territory such as manipulating the DOM. It doesn't make sense to write an entire HTML application in C++, because C++ wasn't designed for that. In most cases, when engineers write C++ or Rust, they target WebGL or heavily optimized libraries (for example, heavy math).

However, in the future WebAssembly will also support garbage-collected languages.

Platform API calls

Depending on which runtime you use to execute your JavaScript, your JavaScript application can directly access specific platform APIs. For example, if you're running JavaScript in the browser, the web application can call a number of Web APIs to control the browser's and the device's functionality and to access the DOM, CSSOM, WebGL, IndexedDB, the Web Audio API, and so on.

A WebAssembly module, however, cannot directly call any platform API. Everything is brokered by JavaScript: if you want to call some platform-specific API from inside a WebAssembly module, you have to go through JavaScript.

For example, if you want to use console.log, you have to call it through JavaScript rather than directly from your C++ code, and these JavaScript calls come at a cost.
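
Here is a minimal sketch of that brokering. The import names ("env" and "log") and the run export are assumptions that would have to match whatever the module was compiled to expect:

```
// The module cannot reach console.log by itself; JavaScript hands it in.
const importObject = {
  env: {
    log: (value) => console.log('from wasm:', value),
  },
};

WebAssembly.instantiateStreaming(fetch('module.wasm'), importObject)
  .then(({ instance }) => {
    // Every call the module makes to its imported "log" crosses the
    // JS/WASM boundary, and that crossing is where the cost lies.
    instance.exports.run(); // hypothetical export
  });
```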

This won't be the case forever. The spec will provide WASM with interfaces to the platform APIs in the future, and you'll be able to ship applications without any JavaScript.

Source mapping

When you minify your JavaScript code, you still need a way to debug it properly. That's where source maps come in handy.

Basically, a source map is a way of mapping a combined/minified file back to its un-built state. When you build for production, along with minifying and combining your JavaScript files, you generate a source map that holds information about the original files. When you query a certain line and column number in the generated JavaScript, you can look up its original location in the source map.
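
For illustration, a minified bundle usually ends with a comment pointing at its map, and the map itself is a small JSON document using the Source Map v3 fields. The file and identifier names below are placeholders:

```
// Last line of the generated app.min.js:
//# sourceMappingURL=app.min.js.map

// The map, shown here as an object literal:
const exampleSourceMap = {
  version: 3,                 // source map spec version
  file: 'app.min.js',         // the generated file this map describes
  sources: ['app.js'],        // the original files it was built from
  names: ['renderTimeline'],  // original identifiers referenced by the mappings
  mappings: 'AAAA,SAASA...',  // Base64 VLQ segments: generated position -> original position
};
```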

WebAssembly doesn't support source maps at the moment because there is no spec for them yet, but there will be eventually (probably soon).

When you set breakpoints in C++ code, you will see C++ code instead of WebAssembly. At least that’s the goal.

Multithreading

JavaScript runs on a single thread. There are many ways to take advantage of the event loop and of asynchronous programming, as described in our article on the subject.

JavaScript also has Web Workers, but they have a very specific use case: basically, any CPU-intensive computation that would block the main UI thread can be offloaded to a Web Worker. Web Workers cannot access the DOM, however.
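
A minimal sketch of that pattern follows; the file names and the summation workload are made up for illustration:

```
// main.js: hand a CPU-heavy job to a worker so the UI thread stays responsive.
const worker = new Worker('worker.js');
worker.postMessage({ n: 50_000_000 });
worker.onmessage = (event) => {
  console.log('result from worker:', event.data);
};

// worker.js: pure computation and messaging; there is no DOM access in here.
self.onmessage = (event) => {
  let sum = 0;
  for (let i = 0; i < event.data.n; i++) {
    sum += i;
  }
  self.postMessage(sum);
};
```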

WebAssembly doesn't support multithreading at the moment, but this is probably coming. WASM will get close to native threads (C++-style threads, for example). Having "real" threads will create many new opportunities in the browser. And, of course, it will also open the door to more possibilities for abuse.

Portability

Today, JavaScript can run almost anywhere, from browsers to server-side and even embedded systems.

WebAssembly is designed to be secure and portable, just like JavaScript. It will run in every environment that supports WASM (for example, every major browser).

WebAssembly carries the same portability goals that Java first tried to achieve with applets.

Where is WebAssembly better than JavaScript?

In its first release, WebAssembly focuses on CPU-intensive computation (crunching numbers, for example). The most mainstream use case that comes to mind is games, with their heavy amount of pixel manipulation. You can write your application in C++/Rust using the OpenGL bindings you are used to, compile it to WASM, and it will run in the browser.

Check this out (it runs in Firefox): s3.amazonaws.com/mozilla-gam… . It uses Unreal Engine.

Another scenario where using WebAssembly makes sense (performance-wise) is implementing a library that is CPU-intensive, for example an image processing library, as in the sketch below.
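
As a hedged sketch of that case, pixel data is handed to a compiled module while JavaScript keeps doing the orchestration. The image-lib.wasm file and its memory, alloc, and grayscale exports are assumptions for illustration only:

```
async function grayscaleFrame(canvas) {
  // Assumed module exports: memory, alloc(size), grayscale(ptr, length).
  const { instance } = await WebAssembly.instantiateStreaming(fetch('image-lib.wasm'));
  const { memory, alloc, grayscale } = instance.exports;

  const ctx = canvas.getContext('2d');
  const frame = ctx.getImageData(0, 0, canvas.width, canvas.height);

  // Copy the pixels into linear memory, let the compiled C++/Rust code
  // crunch them, then copy the result back and draw it.
  const ptr = alloc(frame.data.length);
  new Uint8ClampedArray(memory.buffer).set(frame.data, ptr);
  grayscale(ptr, frame.data.length);
  frame.data.set(new Uint8ClampedArray(memory.buffer, ptr, frame.data.length));
  ctx.putImageData(frame, 0, 0);
}
```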

As mentioned earlier, because most of the processing steps are done ahead of time at compile time, WASM can reduce battery consumption on mobile devices (depending on the engine).

In the future, you'll be able to consume WASM binaries even if you don't write the code that compiles to them yourself. Projects that have started using this approach can be found on NPM.

For DOM manipulation and heavy use of the platform APIs, JavaScript is of course the better choice, because it adds no extra overhead and has native access to the APIs.

At SessionStack, we push the boundaries of JavaScript performance in order to write highly optimized and efficient code. Our solution has to deliver blazing-fast performance because we cannot slow down the customer's application. Once you integrate SessionStack into a production web application or website, it starts recording everything: all DOM changes, user interactions, JavaScript exceptions, stack traces, failed network requests, and debug data. All of this takes place in your production environment without affecting the experience or the performance of your product, which forces us to optimize our code heavily and make it as asynchronous as possible.

And it's not just the library: when you replay a user session in SessionStack, we have to render everything that happened in the user's browser at the time the problem occurred, and we have to reconstruct the entire state so you can jump back and forth in the session timeline. To make this possible, we make heavy use of the asynchronous capabilities that JavaScript offers, for lack of a better alternative.

With WebAssembly, we can hand over some of the heaviest processing and rendering to a language better suited to doing the job, while leaving data collection and DOM manipulation to JavaScript.

If you'd like to give SessionStack a try, you can get started here for free. The free plan offers 1,000 sessions per month.

Resources:

  • https://www.youtube.com/watch?v=6v4E6oksar0
  • https://www.youtube.com/watch?v=6Y3W94_8scw
