


The ideal is plump; the reality is skinny.


In the world of technology, there is no truer saying.


I recently read a couple of articles with fantastic, even ludicrous, seemingly well-sourced ideas that, upon close investigation, turned out to be backed only by anecdotes from conference talks.


Not all of these views are wrong; they simply lack evidence and justification. They may turn out to be right one day, but not today.


With the New Year just around the corner, it’s a good time to take stock, so let’s dissect five common myths that took hold over the past year.


Myth 1: The Disappearing Framework




I like that.


It’s the best marketing term front-end has seen since the Virtual DOM, and it has helped Svelte gain traction quickly. But it’s an exaggeration. There is always a runtime. The runtime can be small, but something has to apply the changes. And every library ultimately ships as plain JS, HTML, and CSS anyway. The real hero here is tree shaking: the process of statically analyzing import statements to eliminate dead code. Simply put, code that is never imported doesn’t need to be included in the final bundle. With a compile step, all the library has to do is look at which identifiers a template uses and emit the matching import statements; tree shaking does the rest. The process isn’t hard to describe. The compiler encounters a specific helper such as #each and decides to include the list-mapping code. Don’t take my word for it: look for the import statements in compiled Svelte output. They’re right there.
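To make that concrete, here is a minimal sketch of how tree shaking plays out (the module and helper names are made up, not Svelte’s actual internals): a bundler such as Rollup or webpack keeps only what is imported.

```ts
// helpers.ts: a hypothetical runtime module with two helpers.
export function mapList<T>(items: T[], render: (item: T) => Node): Node[] {
  return items.map(render); // kept: the module below imports it
}

export function bindText(node: Text, value: string): void {
  node.data = value; // never imported, so tree shaking drops it
}

// app.ts: what compiled output might look like for a template that
// used an #each-style helper. The compiler saw the helper in the
// template and emitted exactly one import statement.
import { mapList } from "./helpers";

const nodes = mapList(["a", "b"], (item) => document.createTextNode(item));
document.body.append(...nodes);
```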


Compiling reduces the need for runtime binding code because the template can be hard-coded, effectively unrolling the loop, as sketched below. However, once common patterns appear often enough, the shared abstraction actually becomes more size-efficient, and before the developer knows it the runtime has paid for itself. Any sufficiently simple library used together with tree shaking yields the same result, just perhaps not as smoothly. Reactive libraries are usually smaller here, but some Virtual DOM libraries can produce equivalent or even smaller bundles. Maybe the focus should be on making something more valuable disappear.
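As an illustration of that unrolling, a compiler can turn a static template straight into imperative DOM calls instead of interpreting a description at runtime. A sketch, not any particular compiler’s actual output:

```ts
// Hypothetical compiled output for: <p class="greeting">Hello</p>
// No diffing or template interpretation at runtime, just direct calls.
function createGreeting(): HTMLParagraphElement {
  const p = document.createElement("p");
  p.className = "greeting";
  p.textContent = "Hello";
  return p;
}

document.body.appendChild(createGreeting());
```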


Myth 2: Scheduling means better performance


This one isn’t new, but it keeps resurfacing.


React’s Concurrent Mode is just the latest source of flashy animated demos that are almost meaningless. Why do most of them make no sense?


Perceived performance is capped by the frame rate. Almost any demo can saturate that cap with plain requestAnimationFrame; anything beyond it is unnecessary work. So often the only way to show a difference is to invent something ridiculous. But that means testing libraries only in extreme cases, where they are constrained in the worst possible way. There is some truth to it: graceful degradation matters for low-power devices in resource-constrained situations. But how do libraries break free? By finding ways to reduce the work. At that point it is no longer an exercise in improving performance but in reducing the workload to provide a better user experience.


It’s a very noble goal, but it has some interesting side effects. Change detection and scheduling have a cost, and that cost obviously has to stay below the DOM work it saves. The heavier a library’s rendering, the earlier it needs to intervene, so it becomes a self-fulfilling prophecy. And any library can use scheduling techniques such as requestAnimationFrame.
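For example, batching writes into a single requestAnimationFrame tick is a scheduling technique any library can adopt. A minimal sketch (queueWrite is a made-up helper):

```ts
// Collect DOM writes and flush them all once per frame.
const pending: Array<() => void> = [];
let scheduled = false;

function queueWrite(write: () => void): void {
  pending.push(write);
  if (!scheduled) {
    scheduled = true;
    requestAnimationFrame(() => {
      scheduled = false;
      for (const write of pending.splice(0)) write();
    });
  }
}

// Ten calls, one frame's worth of DOM work.
for (let i = 0; i < 10; i++) {
  queueWrite(() => (document.title = `update ${i}`));
}
```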


So while blocking the main thread hurts interactivity, it is hard to generalize where the sweet spot between raw performance and blocking lies. You only notice the difference once you push past what the hardware can support, so what is the most appropriate way to degrade? I tested the visual responsiveness of different scheduling approaches with the Sierpinski Triangle demo, and the results were quite inconsistent; there was clearly no single best option. Only when using Suspense to toggle tabs with lazily loaded state were the visuals consistent.


The common techniques used today are based on the concepts of deadlines and queues. A little-known feature of requestAnimationFrame and requestIdleCallback is that the information passed to the callback tells you how much time is available. Work can be sliced to fit that budget, which allows for smarter scheduling. Unfortunately, there is no silver bullet, and sometimes simply being faster is better.
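Here is a minimal sketch of deadline-based slicing with requestIdleCallback (the work queue is hypothetical): the IdleDeadline argument’s timeRemaining() reports how much idle time is left before we should yield.

```ts
// Process queued work only while the browser reports idle time left.
const work: Array<() => void> = [];

function processIdle(deadline: IdleDeadline): void {
  while (deadline.timeRemaining() > 0 && work.length > 0) {
    work.shift()!(); // run one unit of work, then re-check the budget
  }
  if (work.length > 0) {
    requestIdleCallback(processIdle); // yield and resume later
  }
}

requestIdleCallback(processIdle);
```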


I hope this debunks some common myths about the front end in 2019. I’m not trying to be a naysayer; I recognize that much of this comes from looking toward the future. And while I personally believe these emerging technologies have a lot of potential, I’d caution against getting too far ahead of ourselves. Enthusiasm that isn’t grounded in concrete reality is how we ended up, back in 2014, with the unfounded assumption that the Virtual DOM is faster than the actual DOM.

Yes… it isn’t much faster.


Myth 3: WebAssembly is faster for Web UIs


One constant never changes: never underestimate JavaScript. But in my opinion, the thing not to underestimate is the cost of the DOM. The DOM is very expensive. By now most people know that touching the DOM is costly because it triggers reflows and repaints. Beyond that, even reading properties that affect layout can force a premature reflow. And the other DOM operations carry significant costs too: tree traversal of almost any kind is nearly as expensive as creating DOM nodes. Every operation that touches the DOM comes with overhead.
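The classic illustration is layout thrashing: interleaving layout reads and writes forces the browser to recompute layout on every iteration, while batching the reads before the writes avoids the repeated reflows. A sketch using standard DOM APIs (the .box selector is arbitrary):

```ts
const boxes = Array.from(document.querySelectorAll<HTMLElement>(".box"));

// Slow: each offsetHeight read forces a reflow after the previous write.
for (const box of boxes) {
  box.style.height = `${box.offsetHeight + 10}px`;
}

// Faster: read everything first, then write everything.
const heights = boxes.map((box) => box.offsetHeight);
boxes.forEach((box, i) => {
  box.style.height = `${heights[i] + 10}px`;
});
```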


Unfortunately, for other technologies, the cost is even higher.


Web Workers can be a lifesaver, but they have no access to the DOM, so despite their many performance benefits they cannot meaningfully drive a Web UI. WASM has a similar limitation. WASM is much faster than JavaScript as long as you stay inside WASM, but it slows down the more you need to cross over into JavaScript APIs.
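To see why the boundary is expensive, recall that WASM only sees numbers and its own linear memory; anything like a string must be copied in on every call. A hand-written sketch (ptr stands in for an offset a real module would allocate):

```ts
// WASM modules share data with JS through linear memory, not references.
const memory = new WebAssembly.Memory({ initial: 1 }); // one 64 KiB page

function passStringToWasm(s: string, ptr: number): number {
  const bytes = new TextEncoder().encode(s);     // copy #1: encode to bytes
  new Uint8Array(memory.buffer).set(bytes, ptr); // copy #2: into WASM memory
  return bytes.length; // the module would read the bytes at ptr
}

passStringToWasm("hello", 0);
```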


And currently, going through JavaScript is the only way for WASM to access the DOM. That will eventually change with the introduction of direct Web bindings; work has been under way for years, and when the specification finally lands we may see some big benefits. As of today, though, not only is ordinary plain JS faster at DOM rendering, some high-level data-driven libraries outperform even the fastest low-level WASM implementations (see https://github.com/krausest/js-framework-benchmark), and higher-level WASM implementations incur greater costs still. So the best-performing WASM UI library remains, at best, a long-term prospect.


Myth 4: Web Components can replace frameworks or libraries


Web Components are a set of technologies for making HTML, CSS, and JavaScript modular and reusable using nothing but HTML elements. They add capabilities to the DOM that weren’t there before, including templates, CSS encapsulation, element lifecycle hooks (including attribute observation), and child-element projection. On the surface, these are the same capabilities a UI framework provides, the kind that would eventually lead to components. So, must the two be equivalent?
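For reference, here is a minimal custom element using just those native pieces: shadow DOM for CSS encapsulation, observedAttributes for attribute observation, lifecycle callbacks, and a slot for child projection. The greeting-card tag name is made up:

```ts
// A self-contained custom element using the native Web Component APIs.
class GreetingCard extends HTMLElement {
  static observedAttributes = ["name"]; // attribute monitoring

  connectedCallback(): void {
    // Shadow DOM encapsulates styles; <slot> projects children.
    this.attachShadow({ mode: "open" }).innerHTML = `
      <style>p { color: teal; }</style>
      <p>Hello, <span id="who"></span>!</p>
      <slot></slot>`;
    this.render();
  }

  attributeChangedCallback(): void {
    this.render(); // may fire before connect; render() guards for that
  }

  private render(): void {
    const who = this.shadowRoot?.getElementById("who");
    if (who) who.textContent = this.getAttribute("name") ?? "world";
  }
}

customElements.define("greeting-card", GreetingCard);
```

Using it is plain HTML: `<greeting-card name="Ada">Any children land in the slot.</greeting-card>`.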


Not necessarily. One is a native primitive for solving common problems; the other is a set of opinionated features that make developing applications more efficient. The confusion is understandable. Sometimes I’m not even sure the people writing the proposals know exactly where the line is. Some of the proposals read like specifications for the next framework.


But having followed this space for the past six years, and watching how slowly consensus forms among vendors, it’s clear that the more basic the feature set, the more likely it is to move forward. There is still a lot of debate about what role Web Components should play. For example, customized built-in elements (which extend the capabilities of existing elements) are not supported everywhere. That alone convinces some people that Web Components are not appropriate for design systems.


So what are they good for?




They seem well suited to micro frontends and widget packaging: elements that bring their own accessibility, localization, and form handling, designed as self-contained units. To be clear, this is not the same thing as a React component. Web Components can be modular, but they say nothing about propagating changes or rendering efficiently. Their boundaries are hard boundaries, encapsulation that isolates the entire lifecycle, which means everyone can put their own UI library inside them, side by side. In fact, almost all Web Component libraries are libraries of exactly this nature, with their own frameworks and APIs to learn. You could use the plain DOM APIs instead, but then you could just as well operate without Web Components. Don’t be fooled: components built this way still ship with the libraries you already know, whether you export custom elements with Angular Elements, Svelte, or Vue, or reach for Polymer, Stencil, heresy, or LitElement.


Nothing fundamental has changed. Using these libraries does not suddenly make you a champion of the open web. Arguably the most framework-free approach is something like SkateJS, which doesn’t really ship a framework at all and can sit on top of any existing one; the library only homogenizes the API surface and lets users bring their own renderer. But a framework/library is still being used.


Myth 5: The Virtual DOM is pure overhead


I can’t claim this is entirely false, but it is misleading. Not everything beyond plain JS is pure overhead, and compiling isn’t only about optimizing code. As Myth 2 noted, there is always a runtime: no matter how you attack the problem, something has to manage updates and apply changes to the DOM. Broadly speaking, all modern data-driven UI libraries work in one of three ways (listed at the end of this section), each with its own strengths and weaknesses, and a solid implementation of each can be built. In fact, libraries exist for each approach, and popular benchmarks show the size and performance characteristics of each (see https://github.com/krausest/js-framework-benchmark).


The Virtual DOM, like the other two, is a fully viable approach, and it remains the most popular, even powering libraries that produce Web Components. But most benchmarks are set up so that a library does all of its work in a single component, while a real project would be modularized into many components. Component boundaries have costs too, and they are rarely discussed; the Virtual DOM generally scales across many components better than the other approaches do. As a result, perceptions of real-world performance can differ from what the benchmarks show.


In my opinion, the reason to avoid (or choose) the Virtual DOM is neither performance nor size. Rather, it comes down to developer-experience preferences: composition patterns, mutability versus immutability, code structure, and so on. And just because a technique makes something harder doesn’t make it impossible. React Fiber and Hooks show a Virtual DOM library behaving much like a reactive one. Of course, KnockoutJS had those capabilities back in 2010, but that doesn’t diminish the potential.


The three approaches are the Virtual DOM, DOM reconciliation, and fine-grained reactivity.


Virtual DOM libraries build a virtual tree, diff it against the previous iteration, and patch the DOM with the differences. They lean on immutability and reference equality as shortcut optimizations, although immutability also means a lot of cloning and memory allocation. Examples: React, Vue, Inferno.
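A toy sketch of the idea, nowhere near a production diff: build a lightweight tree, compare it to the previous one, and touch the DOM only where something changed.

```ts
// A toy virtual node: just a tag and a text payload.
interface VNode { tag: string; text: string; }

function patch(parent: HTMLElement, next: VNode, prev?: VNode, el?: HTMLElement): HTMLElement {
  if (!prev || !el || prev.tag !== next.tag) {
    // Structure changed (or first render): rebuild this node.
    const fresh = document.createElement(next.tag);
    fresh.textContent = next.text;
    if (el) el.replaceWith(fresh);
    else parent.appendChild(fresh);
    return fresh;
  }
  if (prev.text !== next.text) el.textContent = next.text; // minimal patch
  return el;
}

let tree: VNode = { tag: "p", text: "hello" };
let node = patch(document.body, tree);
node = patch(document.body, { tag: "p", text: "hi" }, tree, node); // text-only update
```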


DOM reconciliation libraries store bound values when they create DOM nodes. On each update, the previous value is compared and the DOM is updated in place. This is similar to the Virtual DOM approach, except that node creation happens only once and comparisons happen only at the leaves. Because of mutability, every leaf always has to be re-checked, so there are fewer shortcut optimizations available for deeply nested structures. Examples: Angular, Polymer, lit-html.
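A sketch of that leaf-level binding (the helper names are invented): the static structure is created once, and each binding remembers its last value so updates only touch changed leaves.

```ts
// One binding per dynamic "hole" in an otherwise static template.
interface Binding { node: Text; last: string; }

function createTemplate(parent: HTMLElement): Binding {
  const p = document.createElement("p"); // static structure, built once
  const hole = document.createTextNode("");
  p.append("Count: ", hole);
  parent.appendChild(p);
  return { node: hole, last: "" };
}

function update(binding: Binding, value: string): void {
  if (binding.last !== value) { // compare at the leaf only
    binding.node.data = value;
    binding.last = value;
  }
}

const binding = createTemplate(document.body);
update(binding, "1");
update(binding, "1"); // no DOM write: value unchanged
```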


Reactive libraries build a reactive graph as DOM nodes are created. Each binding context becomes an event subscription, so when data updates, only the handlers subscribed to that data run. This approach needs minimal diffing, but it carries the largest initial creation cost. Examples: Svelte, Knockout, Solid.
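And a tiny signal-and-effect sketch in the spirit of these libraries (not any library’s actual API): reads register subscriptions, and writes run only the subscribed handlers.

```ts
let currentEffect: (() => void) | null = null;

// A signal holds a value plus the effects subscribed to it.
function createSignal<T>(value: T) {
  const subscribers = new Set<() => void>();
  const read = () => {
    if (currentEffect) subscribers.add(currentEffect); // auto-subscribe
    return value;
  };
  const write = (next: T) => {
    value = next;
    subscribers.forEach((fn) => fn()); // run only dependent effects
  };
  return [read, write] as const;
}

function createEffect(fn: () => void): void {
  currentEffect = fn;
  fn(); // first run records dependencies via reads
  currentEffect = null;
}

const [count, setCount] = createSignal(0);
const text = document.body.appendChild(document.createElement("p"));
createEffect(() => (text.textContent = `Count: ${count()}`));
setCount(1); // only this binding's effect re-runs
```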


