We’ve already covered a number of front-end and computer-science topics, such as SQL, object orientation, and multithreading. This article discusses data structures and algorithms, using some examples I’ve encountered.

1. Recursion

Recursion means a function calling itself, and it is a fairly common technique on the front end. Say you have a batch of data to process, and each request can only be sent after the previous one has finished. One option is to use promises, as mentioned in the article on the front end and SQL, but sometimes you don’t want to pull in promises and prefer to keep things simple. Recursion works well here, as shown in the following code:

var ids = [34112, 98325, 68125];
(function sendRequest(){
    var id = ids.shift();
    if(id){
        $.ajax({url: "/get", data: {id}}).always(function(){
            // do sth. with the response, then fire the next request
            sendRequest();
        });
    } else {
        console.log("finished");
    }
})();

The above code defines a sendRequest function that calls itself after the current request completes, shifting one item off the array before each call. When the array is empty, processing is done. This keeps the requests serial in a simple way.
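
For comparison, here is a minimal sketch of the promise-based alternative mentioned above. getById is a hypothetical helper; note that, unlike the .always above, a failed request would stop this chain:

var getById = function(id){ return $.ajax({url: "/get", data: {id}}); };
ids.reduce(function(chain, id){
    // fire each request only after the previous one settles
    return chain.then(function(){ return getById(id); });
}, Promise.resolve()).then(function(){
    console.log("finished");
});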

Let’s talk about another scenario: DOM trees.

Since the DOM is a tree, and a tree is itself defined recursively, it is very simple and natural to process it recursively. For example, let’s use recursion to implement a DOM lookup function like document.getElementById.

function getElementById(node, id){
    if(!node) return null;
    if(node.id === id) return node;
    for(var i = 0; i < node.childNodes.length; i++){
        var found = getElementById(node.childNodes[i], id);
        if(found) return found;
    }
    return null;
}
getElementById(document, "d-cal");

Document is the root of the DOM tree, and you usually start at document and search downward. In the for loop we take each child of the current node and recursively search its children, layer by layer. If we reach a leaf without finding the target, the recursion returns null at the second line of the function, the loop moves on to i + 1, and the next child is tried. If the current node’s id matches, the node is returned back up through each layer. So this is a depth-first traversal: go from the root toward the leaves, then back up.

Finally, verify on the console, and the execution result is as follows:

The advantage of recursion is that the code is easy to understand; the disadvantage is that it is less efficient than a non-recursive version, which is why Chrome implements its DOM lookup non-recursively. How would you implement it without recursion?

The following code:

function getElementById(node, id){
    // Walk through all nodes
    while(node){
        if(node.id === id) return node;
        node = nextElement(node);
    }
    return null;
}

Again, all DOM nodes are iterated, but this time in a while loop, where nextElement is responsible for finding the next node. So the key is how this nextElement is implemented non-recursively, as shown in the following code:

function nextElement(node){
    if(node.children.length) {
        return node.children[0];
    }
    if(node.nextElementSibling){
        return node.nextElementSibling;
    }
    while(node.parentNode){
        if(node.parentNode.nextElementSibling) {
            return node.parentNode.nextElementSibling;
        }
        node = node.parentNode;
    }
    return null;
}

If the node has children, the next element is its first child. Otherwise, if it has a next sibling, that sibling is returned. If it has neither children nor a next sibling, we return the next sibling of its parent, walking up as needed; this plays the role of the i + 1 step in the recursive version’s for loop.

Running this code in the console also outputs the correct result. Both versions are depth-first traversals, as shown in the figure below.

That said, Chrome’s getElementById is actually backed by a hash map that maps ids straight to DOM nodes; it is getElementsByClassName that uses a non-recursive traversal like this one.

So much for lookups by a single id or class. What about complex selectors?

2. DOM lookup of complex selectors

To implement a document.querySelector:

document.querySelector(".mls-info > div .copyright-content")Copy the code

First, the complex selector is parsed in the following format:

// Parse the selector into
var selectors = [
    {relation: "descendant", matchType: "class", value: "copyright-content"},
    {relation: "child", matchType: "tag", value: "div"},
    {relation: "subSelector", matchType: "class", value: "mls-info"}
];

Reading from right to left: the first selector is .copyright-content, a class selector, so its matchType is class; it is a descendant of the second selector, so its relation is descendant. Similarly, the second selector’s matchType is tag and its relation is child, meaning it is a direct child of the third selector. The third selector is also a class selector, but since there is no next selector, its relation is subSelector.
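
As an aside, here is a rough sketch of how such a parser might work for this restricted grammar (class and tag selectors with only descendant and child combinators; a real CSS parser handles far more):

function parseSelector(text){
    // e.g. ".mls-info > div .copyright-content"
    // -> parts: [".mls-info", ">", "div", ".copyright-content"]
    var parts = text.trim().split(/\s+/);
    var selectors = [];
    for(var i = parts.length - 1; i >= 0; i--){
        if(parts[i] === ">") continue;
        var isClass = parts[i][0] === ".";
        // relation describes how this selector relates to the next one to its left
        var relation = i === 0 ? "subSelector"
                     : parts[i - 1] === ">" ? "child"
                     : "descendant";
        selectors.push({
            relation: relation,
            matchType: isClass ? "class" : "tag",
            value: isClass ? parts[i].slice(1) : parts[i]
        });
    }
    return selectors;
}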

matchType determines how to test whether the current node matches a selector, as shown in the following code:

function match(node, selector){
    if(node === document) return false;
    switch(selector.matchType){
        // If it is a class selector
        case "class":
            return node.className.trim().split(/ +/).indexOf(selector.value) >= 0;

        // If it is a tag selector
        case "tag":
            return node.tagName.toLowerCase() === selector.value.toLowerCase();

        default:
            throw "unknown selector match type";
    }
}

Each matchType gets its own comparison.

When matching, the selectors are compared from right to left. To test the next selector, the corresponding DOM node has to be located: if the current selector is a descendant of the next one, every ancestor of the current node needs checking, all the way up to document; if it is a direct child, only the parent is checked. So we need a function that finds the next target node:

function nextTarget(node, selector){
    if(!node || node === document) return null;
    // hasNext indicates whether this relation allows moving on to further nodes
    switch(selector.relation){
        case "descendant":
            return {node: node.parentNode, hasNext: true};
        case "child":
            return {node: node.parentNode, hasNext: false};
        case "sibling":
            return {node: node.previousSibling, hasNext: true};
        default:
            throw "unknown selector relation type";
    }
}

With the nextTarget and match functions, you can start traversing the DOM, as shown in the following code:
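
The original listing is not reproduced here; a minimal sketch that matches the walkthrough below, reusing match, nextTarget, and the nextElement walker from earlier, might look like this:

function query(selectors){
    var node = document.documentElement;
    // Outer loop: iterate over all DOM nodes, as in the simple lookup
    while(node){
        if(match(node, selectors[0])){
            var current = node;
            var matched = true;
            for(var i = 1; i < selectors.length; i++){
                // Find candidate targets using the previous selector's relation
                var target = nextTarget(current, selectors[i - 1]);
                var found = false;
                while(target && target.node){
                    if(match(target.node, selectors[i])){
                        found = true;
                        break;
                    }
                    if(!target.hasNext) break;
                    target = nextTarget(target.node, selectors[i - 1]);
                }
                if(!found){
                    matched = false;
                    break;
                }
                current = target.node;
            }
            if(matched) return node;
        }
        node = nextElement(node);
    }
    return null;
}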

The outermost while loop, as in the simple-selector version, iterates over all DOM nodes. For each node, check whether it matches the first selector; if not, move on to the next node (unless the first selector is a tag selector, most nodes fail here). If it does match, find the next target according to the first selector’s relation and check whether that target matches the next selector. As soon as one target matches, exit the inner while loop and continue with the next selector. If every selector finds a match, the node is a hit. If some selector has no matching target, the match fails, the selector for loop exits, and matching restarts at the next DOM node.

This completes DOM lookup for a complex selector. The point of this exercise is not to write a DOM lookup function and actually use it, but to understand how the lookup process works, how it might be implemented, and how the browser implements it, as well as how to traverse a DOM tree; once you understand the process, you can apply the same ideas to similar problems.

Finally, run it in your browser, as shown below:

3. Handling duplicate values

Now there is a problem, as shown below:

As you drag the map, the housing labels on it update. In the image above, the green boxes mark labels that stay the same, while the yellow boxes mark newly added listings.

The back end returns the houses within the map’s current visible area on every request. When the user drags, we need to know which houses already exist and which are new: new listings get added, listings outside the area get removed, and existing ones are left untouched. So the current listings must be compared against the new results for duplicates. Without this, deleting everything and redrawing would make the existing labels flash; an incremental update avoids the flashing.

To abstract the problem: given two arrays, find the duplicated and non-duplicated values in the first one. That is, one array holds the houses from the last state, and another holds the new housing data for the current state; duplicated values are kept, and non-duplicated old values are deleted.

The most intuitive way is to use a double loop.

(1) Double loop

The following code looks like this:

var lastHouses = null;
function filterHouse(houses){
    if(lastHouses === null){
        lastHouses = houses;
        return {
            remainsHouses: [],
            newHouses: houses
        };
    }
    var remainsHouses = [],
        newHouses = [];

    for(var i = 0; i < houses.length; i++){
        var isNewHouse = true;
        for(var j = 0; j < lastHouses.length; j++){
            if(houses[i].id === lastHouses[j].id){
                isNewHouse = false;
                remainsHouses.push(lastHouses[j]);
                break;
            }
        }
        if(isNewHouse){
            newHouses.push(houses[i]);
        }
    }
    lastHouses = remainsHouses.concat(newHouses);
    return {
        remainsHouses: remainsHouses,
        newHouses: newHouses
    };
}

The code above uses a double for loop: for each element of the new data, it checks whether it already exists in the old data; if no duplicate is found, the element is new. Because of the double loop, the time complexity is O(N²), which is fine for hundreds of items but strains with thousands, since the worst case needs 1,000,000 comparisons.

(2) Using a Set

The following code looks like this:

var lastHouses = new Set();
function filterHouse(houses){
    var remainsHouses = [],
        newHouses = [];
    for(var i = houses.length - 1; i >= 0; i--){
        if(lastHouses.has(houses[i].id)){
            remainsHouses.push(houses[i]);
        } else {
            newHouses.push(houses[i]);
        }
    }
    for(var i = 0; i < newHouses.length; i++){
        lastHouses.add(newHouses[i].id);
    }
    return {remainsHouses: remainsHouses,
            newHouses: newHouses};
}

Here lastHouses stores the old data in a Set instead of an array. But what if the old data starts out as a plain array? Just initialize a Set from it.
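
For example (lastHousesArray is a hypothetical name for the old data held as a plain array):

// Build the Set of ids from the existing array
var lastHouses = new Set(lastHousesArray.map(function(house){ return house.id; }));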

The difference between using a Set and an array is that Set.prototype.has replaces one of the loops. A Set is classically implemented as a red-black tree, a balanced binary search tree with O(logN) lookup, so the inner step improves from O(N) to O(logN) and the total from O(N²) to O(NlogN). In fact, Chrome V8 implements Set with a hash, making it a hash set with O(1) lookup, so the overall time complexity becomes O(N).

Both O(NlogN) and O(N) look better than O(N²), but note that each hides a constant factor. Updating lastHouses with the Set afterwards also takes time:

for(var i = 0; i < newHouses.length; i++){
    lastHouses.add(newHouses[i].id);
}

If the Set were tree-based, this code would be O(NlogN), making the total O(2NlogN); since big O ignores constant factors, that is still O(NlogN). When the amount of data is small, though, such constants matter a great deal, and the same goes for hashing. With large amounts of data, the quadratic growth of O(N²) dwarfs any constant. So when the data set is small, say only one or two hundred items, the double loop is perfectly fine.

The above code is a bit verbose, but we can rewrite it with the new ES6 features to make it more concise:

function filterHouse(houses){
    var remainsHouses = [],
        newHouses = [];
    houses.map(house => lastHouses.has(house.id) ? remainsHouses.push(house)
                        : newHouses.push(house));
    newHouses.map(house => lastHouses.add(house.id));
    return {remainsHouses, newHouses};
}

The code has halved from 16 to 8 lines.

(3) Using a Map

Using Map is similar, as shown below:

var lastHouses = new Map();
function filterHouse(houses){
    var remainsHouses = [],
        newHouses = [];
    houses.map(house => lastHouses.has(house.id) ? remainsHouses.push(house)
                        : newHouses.push(house));
    newHouses.map(house => lastHouses.set(house.id, house));
    return {remainsHouses, newHouses};
}

The lookup complexity of a hash is O(1), so the total time complexity is O(N), the same as with the Set; the price is the hash storage, which usually takes about twice the space of the data.

(4) Time comparison

Finally, let’s compare running times, for which we first need some data. We compare N = 100, 1000, and 10000, with N/2 of the ids shared and the other half different, generated with the following code:

var N = 1000;
var lastHouses = new Array(N);
var newHouses = new Array(N);
var data = new Array(N);
for(var i = 0; i < N / 2; i++){
    var sameNumId = i;
    lastHouses[i] = newHouses[i] = {id: sameNumId};
}
for(; i < N; i++){
    lastHouses[i] = {id: N + i};
    newHouses[i] = {id: 2 * N + i};
}

We then need to shuffle so the duplicated entries are randomly distributed, using the following function to shuffle an array’s elements:

// Shuffle
function randomIndex(arr){
    for(var i = 0; i < arr.length; i++){
        var swapIndex = parseInt(Math.random() * (arr.length - i)) + i;
        var tmp = arr[i];
        arr[i] = arr[swapIndex];
        arr[swapIndex] = tmp;
    }
}
randomIndex(lastHouses);
randomIndex(newHouses);

Prepare the Set/Map versions of the data:

var lastHousesSet = new Set();
for(var i = 0; i < N; i++){
    lastHousesSet.add(lastHouses[i].id);
}

var lastHousesMap = new Map();
for(var i = 0; i < N; i++){
    lastHousesMap.set(lastHouses[i].id, lastHouses[i]);
}

Run each version 100 times and compare the times:

console.time("for time");
for(var i = 0; i < 100; i++){
    filterHouse(newHouses);
}
console.timeEnd("for time");

console.time("Set time");
for(var i = 0; i < 100; i++){
    filterHouseSet(newHouses);
}
console.timeEnd("Set time");

console.time("Map time");
for(var i = 0; i < 100; i++){
    filterHouseMap(newHouses);
}
console.timeEnd("Map time");Copy the code

With Chrome 59, when N = 100 the ordering is for < Set < Map, as shown below (three runs):

When N = 1000, the time is: Set = Map < for, as shown below:

When N = 10000, the time is Set = Map << for, as shown below:

As you can see, Set and Map take essentially the same time. With small amounts of data the for version takes less time, but with large amounts Set and Map win, because quadratic growth is scary. This raises the question of how Set/Map is implemented.

4. V8 hash implementation of Set and Map

Let’s study how Chrome V8 implements Set/Map. The source lives in the file src/js/collections.js of the V8 tree; since Chrome keeps iterating, the implementation may well change later, so what follows describes how it works now.

Initialize a Set with the following code:

var set = new Set();
// The data is 20 numbers
var data = [3, 62, 38, 42, 14, 4, 14, 33, 56, 20, 21, 63, 49, 41, 10, 14, 24, 59, 49, 29];
for(var i = 0; i < data.length; i++){
    set.add(data[i]);
}

So what data structure is behind this Set, and how does it hash?

A key part of hashing is the hash algorithm, which maps numbers or strings to relatively random values. V8’s integer hash looks like this:

function ComputeIntegerHash(key, seed) {
  var hash = key;
  hash = hash ^ seed;  //seed = 505553720
  hash = ~hash + (hash << 15);  // hash = (hash << 15) - hash - 1;
  hash = hash ^ (hash >>> 12);
  hash = hash + (hash << 2);
  hash = hash ^ (hash >>> 4);
  hash = (hash * 2057) | 0;  // hash = (hash + (hash << 3)) + (hash << 11);
  hash = hash ^ (hash >>> 16);
  return hash & 0x3fffffff;
}

Various bitwise operations are applied to the number to produce a fairly random value, which is then used to scatter the keys as follows:

var seed = 505553720; // the seed value noted in the comment above
var capacity = 64;
var indexes = [];
for(var i = 0; i < data.length; i++){
    indexes.push(ComputeIntegerHash(data[i], seed)
                      & (capacity - 1)); // mask off the high bits
}
console.log(indexes);

The point of scattering is to decide which index of the array each number is placed at.

Since there are 20 numbers, the capacity starts at 16 and doubles until it reaches 64, keeping capacity > size * 2; only when the capacity is at least twice the stored size do collisions in the scattered values stay relatively rare.

The calculation results are as follows:

It can be seen that the scattering result is relatively uniform, but there are still repeated values, such as 14 repeated three times.

Lookup works the same way: to check whether, say, key = 56 is in the set, compute its hash, mask off the high bits to get its index, and then verify that the key stored at that index really is 56:

function SetHas(key){
    var index = ComputeIntegerHash(key, seed) & (capacity - 1);
    // There may be collisions, so verify that the hit index holds the same key
    return setArray[index] !== null
              && setArray[index] === key;
}

The above is a typical implementation of the hash storage structure, but Chrome V8’s Set/Map implementation is slightly different.

The hash algorithm is the same, but instead of masking with capacity, V8 masks with half of capacity, a count called buckets. Take the following data:

var data = [9, 33, 68, 57];

Since buckets is initialized to 2, the scattered values come out as follows:
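
You can reproduce the scattering in the console with the hash function above. The exact bucket values depend on the seed; per the insertion walkthrough below, 9 lands in bucket 0 while 33, 68, and 57 all land in bucket 1:

var buckets = 2;
var bucketIndexes = data.map(function(key){
    return ComputeIntegerHash(key, seed) & (buckets - 1);
});
console.log(bucketIndexes);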

Because buckets is so small, the scattered values collide heavily: among the four numbers, one bucket value appears three times.

Now insert the data one by one and observe the change in the Set data structure.

(1) The insertion process

A) Insert 9

As shown in the figure below, the Set’s backing store has three parts. The first part has three elements: the number of valid elements, the number of deleted elements, and the number of buckets; the first two added together give the total element count. The second part holds the buckets; bucket[0] stores the index (called the entry in the source) of the data hashed into the first bucket. 9 falls into bucket 0, so bucket[0] = 0. The third part is the space for keys and chain pointers. The entry of 9 is 0, so its key is placed at index 3 + buckets.length + entry * 2 = 5.

B) Insert 33

Now insert 33, bucket = 1, entry = 1, so it looks like this:

C) Insert 68

68 also hashes to bucket 1, the same as 33. Since entry = buckets[1] = 1 is not empty, there is a collision: the previous key in this bucket sits at index 3 + buckets.length + entry * 2 = 7. So the slot adjacent to 68’s key stores the keyChain value 7, pointing back at that key, and buckets[1] is updated to 68’s entry, 2, as shown in the following figure:

D) Insert 57

57 is inserted the same way: it also falls into bucket 1, where buckets[1] = 2, so the slot adjacent to 57’s key stores 3 + 2 + 2 * 2 = 9, pointing at index 9 (68’s key), and buckets[1] becomes 57’s entry, 3, as shown in the following figure:
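
Putting the four insertions together, the backing array described above would look roughly like this (a reconstruction from the walkthrough, with empty slots shown as dots):

// index:    0  1  2 |  3  4 |  5  6 |  7  8 |  9 10 | 11 12
// content:  4  0  2 |  0  3 |  9  . | 33  . | 68  7 | 57  9
//           header    buckets  entry0  entry1  entry2  entry3
// header: 4 live elements, 0 deleted, 2 buckets
// buckets[0] = entry 0 (key 9), buckets[1] = entry 3 (key 57)
// the second slot of each entry is the keyChain, holding the index of the colliding key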

(2) Search

Now look up 33. It hashes to bucket 1, and buckets[1] = 3, so we start at entry 3, whose key sits at index 3 + 2 + 3 * 2 = 11. The key there is 57, not 33, so we follow the keyChain next to it (index 12), which points to index 9. The key at index 9 is 68, still not a match, so we follow its keyChain at index 10, which points to index 7. The key at index 7 is 33: found.

In total the key is compared 3 times and keyChain pointers are followed 2 times, 5 operations altogether, which can exceed what a plain for loop would need. So with small amounts of data, hash storage is actually slower; its advantage only shows once the data grows.

(3) Capacity expansion

When a fifth number is inserted, the capacity is found to be insufficient and the table must grow: the capacity doubles, buckets becomes 4, and all elements are scattered again.

The scatter range of a Set, i.e. the number of buckets, is only half the element capacity, so collisions are frequent, but the storage stays comparatively compact: N elements need 3 + N / 2 + 2 * N slots. In other words, it trades time for space.

(4) Implementation of Map

Map is basically the same as Set; the difference is that each entry has an extra slot for the value, as follows:

var map = new Map();
map.set(9, "hello");

The generated data structure is as follows:

Of course, it does not store the string "hello" directly, but a pointer to the memory location where the actual "hello" lives.

(5) Comparison with JS Object

JSObject is also primarily hash-based, which I discussed in the chapter “Implementing JSObject from Chrome source code”.

Unlike JS Map, a JSObject’s capacity is twice its element count, the typical hash implementation described earlier. The storage layout is different too: one array holds the keys and another holds the values; once a key is found, its index is used to fetch the value from the other array. When a hash collision occurs, the next probe position is computed directly from the current index:

inline static uint32_t FirstProbe(uint32_t hash, uint32_t size) { 
  return hash & (size - 1);
}

inline static uint32_t NextProbe(
    uint32_t last, uint32_t number, uint32_t size) { 
  return (last + number) & (size - 1);
}

Lookup probes the same sequence of positions.
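
As an illustration only (not V8’s actual code), the probe sequence might be used like this, transliterated to JS; it assumes empty slots are undefined and the table is never full:

function findEntry(keys, key, hash){
    var size = keys.length;                     // must be a power of two
    var entry = hash & (size - 1);              // FirstProbe
    var count = 1;
    while(keys[entry] !== undefined && keys[entry] !== key){
        entry = (entry + count++) & (size - 1); // NextProbe
    }
    return keys[entry] === key ? entry : -1;
}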

So far we have only hashed numbers. How are strings hashed?

(6) Hash calculation of strings

The unicode encoding for each character of the string is processed as follows:

uint32_t AddCharacterCore(uint32_t running_hash, uint16_t c) {
  running_hash += c;
  running_hash += (running_hash << 10);
  running_hash ^= (running_hash >> 6);
  return running_hash;
}

uint32_t running_hash = seed;
char *key = "hello";
for(int i = 0; i < strlen(key); i++){
    running_hash = AddCharacterCore(running_hash, key[i]);
}
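
Here is a rough JS transliteration for experimenting in the console (>>> 0 keeps intermediate values in uint32 range; the seed below just reuses the number quoted earlier, purely for illustration):

function addCharacterCore(runningHash, c){
    runningHash = (runningHash + c) >>> 0;
    runningHash = (runningHash + (runningHash << 10)) >>> 0;
    runningHash = (runningHash ^ (runningHash >>> 6)) >>> 0;
    return runningHash;
}

function hashString(str, seed){
    var runningHash = seed;
    for(var i = 0; i < str.length; i++){
        runningHash = addCharacterCore(runningHash, str.charCodeAt(i));
    }
    return runningHash;
}
console.log(hashString("hello", 505553720));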


Let’s move on to a classic topic

5. Array deduplication

Given an array, remove duplicate values:

var a = [3, 62, 3, 38, 20, 42, 14, 5, 38, 29, 42];
// Expected output: [3, 62, 38, 20, 42, 14, 5, 29]

Method 1: Use Set + Array

The following code looks like this:

function uniqueArray(arr){
    return Array.from(new Set(arr));
}

Running on the console:

Advantages: simple code, fast speed, time complexity of O(N)

Disadvantages: Requires an additional Set and Array storage space, space complexity is O(N)

Method 2: Use splice

The following code looks like this:

function uniqueArray(arr){
    for(var i = 0; i < arr.length - 1; i++){
        for(var j = i + 1; j < arr.length; j++){
            if(arr[j] === arr[i]){
                arr.splice(j--, 1);
            }
        }
    }
    return arr;
}

Advantages: no additional storage required, space complexity O(1)

Disadvantages: frequent memory moves and a double loop, time complexity O(N²)

Note how splice removes elements, which I discussed in detail in “Implementing JS Array from Chrome source code”:

splice moves memory in bulk rather than copying element by element in a loop. Memory moves are still very fast, reaching up to about 30GB per second, as shown in the figure below:

Method 3: Use only Array

The following code looks like this:

function uniqueArray(arr){
    var retArray = [];
    for(var i = 0; i < arr.length; i++){
        if(retArray.indexOf(arr[i]) < 0){
            retArray.push(arr[i]);
        }
    }
    return retArray;
}

Time is O(N²), space is O(N).

Method 4: Use Object + Array

The following is a deduplication in the style of goog.array (the original source isn’t reproduced here; this is a simplified sketch of its idea):
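
function uniqueArray(arr){
    var seen = {};
    var retArray = [];
    for(var i = 0; i < arr.length; i++){
        // Prefix the key so the number 3 and the string "3" don't collide
        var key = (typeof arr[i]) + " " + arr[i];
        if(!seen.hasOwnProperty(key)){
            seen[key] = true;
            retArray.push(arr[i]);
        }
    }
    return retArray;
}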

The difference from method 3 is that it does not use Array.prototype.indexOf (an O(N) scan) to check existence; it uses an Object[key] hash lookup instead, so its time complexity is O(N) and space complexity is O(N).


Finally, compare execution times with N = 100/1000/10000, each repeated 1000 times, giving the following table:

Object + Array takes the least time, while splice takes the most (though it saves space). When the data volume is large, Set + Array clearly beats the two O(N²) methods, and between those two, Array beats splice, since splice’s frequent memory moves cost extra.

In actual coding, methods 1, 2 and 4 are all acceptable:

Method 1 is a single line of code.

Method 2 is handy for adding an Array.prototype.unique function.

Method 4 suits large amounts of data.


Now that we’ve discussed hash-based data structures, let’s talk about stacks and heaps.

6. The stack and heap

(1) The stack as a data structure

A stack is first-in, last-out: it is operated on with just two functions, push and pop, plus top to peek at the top element. A typical use of stacks is processing open/close notation, such as DOM construction. Take the following HTML:

<html>
<head></head>
<body>
    <div>hello, world</div>
    <p>goodbye, world</p>
</body>
</html>

We will build a DOM like this:

The figure above omits the document node, and we only care about the DOM parent-child relationship, omitting the sibling relationship.

First serialize the HTML into tags, as shown below:

1 html ( 2 head ( 3 head ) 4 body ( 5 div ( 6 text 7 div ) 8 p ( 9 text 10 p ) 11 body ) 12 html)

An opening parenthesis marks an opening tag, and a closing parenthesis marks a closing tag.

As shown in the figure below, the html and head tags are both opening tags, so each is pushed onto the stack and an HTMLHtmlElement / HTMLHeadElement object is instantiated. When the head tag is processed, the top of the stack is html, so html becomes head’s parent.

Process the remaining elements as shown below:

When the third token arrives, a closing head tag, head is popped and the top of the stack becomes html again; so when the fourth token, the opening body tag, arrives, html is body’s parent. The remaining nodes are processed the same way, as sketched below.
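
To make the process concrete, here is a minimal sketch (not Chrome’s real parser; text nodes omitted for brevity) that builds a tree from such a token stream using a stack:

function buildTree(tokens){
    var root = null;
    var stack = [];
    tokens.forEach(function(token){
        if(token.open){
            var node = {tag: token.tag, children: []};
            if(stack.length){
                // The parent is whatever sits on top of the stack
                stack[stack.length - 1].children.push(node);
            } else {
                root = node;
            }
            stack.push(node);
        } else {
            // Closing tag: pop the matching opening tag
            stack.pop();
        }
    });
    return root;
}

var tree = buildTree([
    {tag: "html", open: true},
    {tag: "head", open: true}, {tag: "head", open: false},
    {tag: "body", open: true},
    {tag: "div", open: true}, {tag: "div", open: false},
    {tag: "p", open: true}, {tag: "p", open: false},
    {tag: "body", open: false},
    {tag: "html", open: false}
]);
// tree.children -> [head, body]; tree.children[1].children -> [div, p]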

I discussed this process in “How a browser builds the DOM tree, from Chrome source code”; the diagrams there may make it more intuitive.

(2) Memory stack

The function pushes local variables onto a stack as follows:

function foo(){
    var a = 5,
        b = 6,
        c = a + b;
}
foo();

The layout of the variables a, b, and c on the memory stack is shown in the figure below:

When a is encountered it is pushed onto the stack, then b and c; a function’s local variables are pushed in turn, and the stack grows downward in memory. With a default 8MB stack, a function that defines 10 variables of 8B each can recurse roughly 8MB / (8B * 10) ≈ 100,000 times before a stack overflow occurs.
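
You can observe the limit directly in a console with a toy example:

function dive(n){
    return dive(n + 1);
}
try {
    dive(0);
} catch (e) {
    // In Chrome this logs "Maximum call stack size exceeded"
    console.log(e.message);
}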

This is discussed in the WebAssembly and Program Compilation article.

(3) The heap

In data structures, a heap is usually a binary tree represented by an array, as in heap sort with its max-heaps and min-heaps. The heap in memory is where dynamically created (new’d) variables are stored, in contrast to the stack, as shown below:

After discussing stacks and heaps, let’s look at a more practical technique.

7. Throttling

Throttling is a common front-end concern: you don’t want events like resize/mousemove/scroll to trigger their handlers too often; a callback at most every 100ms, say, is fine. The following code listens to the resize event without throttling:

$(window).on("resize", adjustSlider);Copy the code

Since adjustSlider is a very expensive operation, I don’t want it to run that often; at most once every 500ms is fine. How? With setTimeout and a tId flag, as sketched below (the original listing is missing here, so this is a minimal reconstruction of the described approach):
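
var tId = null;
$(window).on("resize", function(){
    if(tId) return;              // a run is already scheduled; drop this event
    tId = setTimeout(function(){
        tId = null;              // allow the next event to schedule again
        adjustSlider();
    }, 500);
});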


Finally, let’s talk about images and graphics processing.

8. Image processing

For example, after the user picks a local image, clicking a button turns it gray:

The effect is as follows:

One way to do this is the CSS filter property, which supports graying an image:

img{
    filter: grayscale(100%);
}

But since the actual image data has to be sent to the back end, the pixel data itself must be processed. We can use canvas to get at it, as shown in the following code:

<canvas id="my-canvas"></canvas>Copy the code

JS processing is as follows:

// Get the 2D context of the canvas above
var ctx = document.getElementById("my-canvas").getContext("2d");
var img = new Image();
img.src = "/test.jpg"; // or obtained via FileReader etc.
img.onload = function(){
    // Draw it on the canvas at position x = 10, y = 10
    ctx.drawImage(this, 10, 10);
};

function blackWhite() {
    var imgData = ctx.getImageData(10, 10, 31, 30);
    ctx.putImageData(imgData, 50, 10);
    console.log(imgData, imgData.data);
}

This draws a second copy of the same image region, as shown below:

getImageData returns an ImageData object whose data field is a one-dimensional array holding each pixel’s RGBA values, left to right and top to bottom, as shown below:

The region is 31 * 30, so the size of the array is 31 * 30 * 4 = 3720, and since this image has no transparency, every a value is 255.

There are two commonly used graying algorithms:

(1) Average:

Gray = (Red + Green + Blue) / 3

(2) Weighted by the human eye’s sensitivity to the three primaries (green > red > blue):

Gray = Red * 0.3 + Green * 0.59 + Blue * 0.11

The second method gives more natural results; we’ll use it, as shown in the following code:

function blackWhite() {
    var imgData = ctx.getImageData(10, 10, 31, 30);
    var data = imgData.data;
    var length = data.length;
    for(var i = 0; i < length; i += 4){
        var grey = 0.3 * data[i] + 0.59 * data[i + 1] + 0.11 * data[i + 2];
        data[i] = data[i + 1] = data[i + 2] = grey;
    }
    ctx.putImageData(imgData, 50, 10);
}

The effect of execution is shown below:

Other filter effects, such as blurring, sharpening, and blemish removal, are left for the reader to explore.

Finally, a graphics algorithm.

9. Graphics algorithms

Suppose the intersection of two polygons needs to be computed, as follows:

This is the realm of graphics algorithms, which can be thought of as operating on vector shapes, whereas image processing operates on raw RGBA values. Polygon intersection can be computed with the method in the paper “A New Algorithm for Computing Boolean Operations on Polygons”.


To wrap up, this article discussed several topics:

  1. Recursion and DOM lookup
  2. The implementation of Set/Map
  3. A comparison of several array deduplication methods
  4. The stack and the heap
  5. Throttling
  6. Image processing

This article analyzed and summarized some algorithms from a front-end perspective, listing only the ones I consider more important; many others go unmentioned. Algorithms and data structures are an eternal topic, and their purpose is to solve problems with the least time and space. But often a reasonably good solution is enough without chasing the optimum, and different scenarios call for different strategies. On the other hand, if your code tends toward triple and quadruple loops with lots of nested if-elses, it may be time to optimize it with better-suited data structures and algorithms.