Preface

Last year I gave a canvas sharing session within my company (a canvas meeting would be the more accurate description), but for various reasons, some public and some private, I never turned that material into an article to share with you, for which I am very sorry. The purpose of this article is to give readers a comprehensive understanding of canvas. Let's get started!

Introduction

Canvas is an HTML element that can be used to draw graphics via scripting (usually JavaScript, though others such as Java applets or JavaFX Script are possible). Its default size is 300 × 150 pixels.

<canvas style="background: purple;"></canvas>

A first taste

<!-- canvas -->
<canvas id="canvas"></canvas>
<!-- javascript -->
<script>
  const canvas = document.getElementById('canvas')
  const ctx = canvas.getContext('2d')
  ctx.fillStyle = 'purple'
  ctx.fillRect(0, 0, 300, 150)
</script>

After that grueling bit of study, I'm sure you are now a canvas expert. Next I will introduce a series of cases (every one I can think of) and walk through each of them together with its underlying principles.

Application cases

Examples are as follows:

  • Animation
  • Games
  • Video (omitted because it is not yet mature for production)
  • Screenshots
  • Compositing images
  • Sharing a screenshot
  • Filters
  • Cutout (matting)
  • Rotation, scaling, translation, deformation
  • Particles

Animation

API introduction

requestAnimationFrame

This method tells the browser that you want to perform an animation and asks it to call a specified function to update the animation before the next repaint. It takes a single argument: a callback that is invoked before the browser repaints.

Advantages of requestAnimationFrame

1. It synchronizes drawing with the browser's own repaint frequency, avoiding over-drawing and the wasted work that hurts battery life.
2. Callbacks are paused when the tab is in the background or the iframe is hidden, which improves performance.

Demo

Moving square

<!-- canvas -->
<canvas id="canvas" width="600" height="600"></canvas>
<!-- javascript -->
<script>
  const canvas = document.getElementById('canvas')
  const ctx = canvas.getContext('2d')
  ctx.fillStyle = 'purple'
  const step = 1    // The length of each step
  let xPosition = 0 // x coordinate
  move()            // call move
  function move() {
    ctx.clearRect(0, 0, 600, 600)
    ctx.fillRect(xPosition, 0, 300, 150)
    xPosition += step
    if (xPosition <= 300) {
      requestAnimationFrame(() => {
        move()
      })
    }
  }
</script>

Games

The three elements

My personal summary of the three elements of a game:

  • Object abstraction
  • requestAnimationFrame
  • Easing functions

Object abstraction: the abstraction of the characters in a game. Object-oriented thinking is very common in games. For example, let's abstract a slime, the classic Dragon Quest monster:

class Slime {
  constructor(hp, mp, level, attack, defence) {
    this.hp = hp
    this.mp = mp
    this.level = level
    this.attack = attack
    this.defence = defence
  }
  bite() {
    return this.attack
  }
  fire() {
    return this.attack * 2
  }
}

requestAnimationFrame: we have already met this API, and with the animation example above it is easy to see how a game loop runs.

Easing function: animation at a constant speed looks unnatural. To feel natural, the motion needs to speed up and slow down at times; this makes the animation more lively and less rigid. A sketch of an easing function is shown below.
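For illustration only, here is a small sketch of a quadratic ease-in-out function and how it might drive a position over time (the function and the values are illustrative, not taken from the demo project):

// easeInOutQuad: t is the elapsed time, b the start value,
// c the total change, d the total duration
function easeInOutQuad(t, b, c, d) {
  t /= d / 2
  if (t < 1) return (c / 2) * t * t + b
  t--
  return (-c / 2) * (t * (t - 2) - 1) + b
}

// example: move from x = 0 to x = 300 over 60 frames
let frame = 0
function step() {
  const x = easeInOutQuad(frame, 0, 300, 60)
  // ...draw the object at x here...
  if (++frame <= 60) requestAnimationFrame(step)
}
step()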

Demo

If you are interested, you can watch a little game I wrote before. Project address: github.com/CodeLittleP…

Screenshots

API introduction

drawImage(image, dx, dy)
drawImage(image, dx, dy, dWidth, dHeight)
drawImage(image, sx, sy, sWidth, sHeight, dx, dy, dWidth, dHeight)

Draws an image onto the canvas.

toDataURL(type, encoderOptions)

This method returns a data URI containing a representation of the image in the format given by the type parameter (PNG by default). The image resolution is 96 dpi. Note:

  • The method must be called on a page served over HTTP(S), not from the local file system
  • Cross-origin images require CORS support: set crossOrigin = "" (any crossOrigin value other than use-credentials, including the empty string and strings like 'abc', resolves to anonymous); a sketch follows below
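As a rough sketch of the cross-origin note above (the remote image URL is a placeholder, and the server must send the appropriate CORS headers for this to work):

const img = new Image()
img.crossOrigin = ''                        // resolves to 'anonymous'
img.src = 'https://example.com/remote.png'  // placeholder URL
img.addEventListener('load', () => {
  const canvas = document.createElement('canvas')
  canvas.width = img.width
  canvas.height = img.height
  const ctx = canvas.getContext('2d')
  ctx.drawImage(img, 0, 0)
  // works only if the image was served with CORS headers;
  // otherwise the canvas is tainted and toDataURL throws
  const uri = canvas.toDataURL('image/png')
  console.log(uri.slice(0, 30))
})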

The difference between canvas.style.width and canvas.width

Think of the canvas element as a picture frame: canvas.width (and canvas.height) controls the size of the drawing surface itself, i.e. the coordinate space you draw into, while canvas.style.width controls how large that surface is displayed on the page. If the two differ, the drawing is scaled to fit the displayed size. A short sketch follows.
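A minimal sketch of the difference (the sizes are arbitrary examples):

const canvas = document.createElement('canvas')
canvas.width = 600               // drawing surface: 600 x 300 units
canvas.height = 300
canvas.style.width = '300px'     // displayed at 300 x 150 CSS pixels
canvas.style.height = '150px'
document.body.appendChild(canvas)

const ctx = canvas.getContext('2d')
ctx.fillStyle = 'purple'
ctx.fillRect(0, 0, 600, 300)     // fills the whole surface; shown scaled down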

Demo

Core code

const captureResultBox = document.getElementById('captureResultBox')
const captureRect = document.getElementById('captureRect')
const style = window.getComputedStyle(captureRect)
// Set the canvas canvas size
canvas.width = parseInt(style.width)
canvas.height = parseInt(style.height)
// drawing
const x = parseInt(style.left)
const y = parseInt(style.top)
const w = parseInt(img.width)
const h = parseInt(img.height)
ctx.drawImage(img, x, y, w, h, 0, 0, w, h)
// Append the image to HTML
const resultImg = document.createElement('img')
// toDataURL must be in the HTTP service
resultImg.src = canvas.toDataURL('image/png', 0.92)

Compositing images

The principle

Going back to the earlier example, we know that drawImage can draw another canvas as well as an image. A canvas is simply a drawing board at our disposal. The idea behind compositing is to draw multiple images onto the same canvas. You can probably already guess what to do next.

Demo

The core code

// Set the canvas size
  canvas.width = bg.width
  canvas.height = bg.height
  // draw the background
  ctx.drawImage(bg, 0, 0)
  // Draw the first character
  ctx.drawImage(
    character1, 100, 200,
    character1.width / 2,
    character1.height / 2
  )
  // Draw the second character
  ctx.drawImage(
    character2, 500, 200,
    character2.width / 2,
    character2.height / 2
  )

As shown in the picture, the background is an empty backyard at night. I then searched the web for two character images with transparent backgrounds and drew both onto the canvas to produce the composite image.

Sharing a screenshot

The principle

Take the well-known html2canvas as an example: it traverses the entire DOM, reads the computed style of each node, and redraws the nodes one by one onto a canvas. A usage sketch follows.
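For reference, a minimal usage sketch of html2canvas, assuming the library is already loaded on the page and that an element with id 'share-card' exists (the id is a placeholder):

// html2canvas(element) returns a Promise that resolves to a canvas
html2canvas(document.getElementById('share-card')).then(canvas => {
  const img = document.createElement('img')
  img.src = canvas.toDataURL('image/png')
  document.body.appendChild(img)
})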

Demo

Filters

API introduction

getImageData(sx, sy, sw, sh)

Returns an ImageData object describing the pixel data of the canvas region defined by the rectangle that starts at (sx, sy) and has width sw and height sh. Look at this code:

const img = document.createElement('img')
img.src = './filter.jpg'
img.addEventListener('load', () => {
  canvas.width = img.width
  canvas.height = img.height
  ctx.drawImage(img, 0, 0)
  console.log(ctx.getImageData(0, 0, canvas.width, canvas.height))
})

It prints the following data:

Looks a little overwhelming? Don't panic. Read on.

Introduction to Data Types

Uint8ClampedArray

A typed array of 8-bit unsigned integers clamped to the range 0–255. If you assign a value outside [0, 255], it is stored as 0 or 255; if you assign a non-integer, it is rounded to the nearest integer. The contents are initialized to 0. Once created, you can access elements via the object's methods or with standard array index (square bracket) syntax.

Now look back at the output above: data is really the pixels themselves, grouped four values per pixel (r, g, b, a). So image width × height × 4 (w * h * 4) gives the total number of values, which is exactly the length of data.

Mathematical deduction

Given: 924160 = 640 × 361 × 4, so the array length is length = canvas.width × canvas.height × 4.

Given this relationship, we can think of the one-dimensional array as a two-dimensional array and imagine it as a planar graph, as shown below:

Each grid cell represents one pixel; w = image width, h = image height.

This makes it easy to find the position of point (x, y) in the one-dimensional array. Think about it: point (1, 1) corresponds to index 0 and point (2, 1) to index 4. Now suppose the image is 2 × 2 (so w = 2); then the index of point (1, 2) is [(2 - 1) * 2 + (1 - 1)] * 4 = 8. In general:

index = [(y - 1) * w + (x - 1)] * 4
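As a small sketch of this formula (the helper name getPixel is made up for illustration):

// returns the [r, g, b, a] values of the pixel at (x, y),
// where x and y are 1-based as in the derivation above
function getPixel(imageData, x, y) {
  const i = ((y - 1) * imageData.width + (x - 1)) * 4
  const d = imageData.data
  return [d[i], d[i + 1], d[i + 2], d[i + 3]]
}

// usage: const [r, g, b, a] = getPixel(ctx.getImageData(0, 0, w, h), 1, 2)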

More APIs

createImageData(width, height)

createImageData is a method on the canvas 2D rendering context (canvas.getContext('2d')). It creates a new, blank ImageData object of the given size, with all pixels initially transparent black, and returns that ImageData object.

putImageData

The putImageData method of the Canvas 2D API paints the pixel data from a given ImageData object onto the canvas. If a dirty rectangle is provided, only the pixels from that rectangle are painted. This method is not affected by the canvas transformation matrix.

In this section we learned several new APIs and brushed up some math. Once that has sunk in, we can move on to the Demo.

Demo

Core code:

End result:
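As a stand-in for the demo's core code, here is a minimal grayscale-filter sketch built from the APIs above (an illustration of the technique, not the original demo code):

const canvas = document.getElementById('canvas')
const ctx = canvas.getContext('2d')
const img = document.createElement('img')
img.src = './filter.jpg'
img.addEventListener('load', () => {
  canvas.width = img.width
  canvas.height = img.height
  ctx.drawImage(img, 0, 0)
  const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height)
  const d = imageData.data
  for (let i = 0; i < d.length; i += 4) {
    // average the r, g, b channels to turn the pixel gray
    const gray = (d[i] + d[i + 1] + d[i + 2]) / 3
    d[i] = d[i + 1] = d[i + 2] = gray
  }
  ctx.putImageData(imageData, 0, 0)
})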

Cutout

Matting against a pure-color background is actually fairly simple. As shown above, we can read the value of every pixel on the canvas, so all we have to do is make the background color transparent. But this scenario is rare: backgrounds are seldom a single pure color, and even when they are, there is no guarantee that the subject being cut out does not contain the same color as the background. For complex cases it is better to let the back end handle it; the back end already has mature image-processing solutions such as OpenCV. A company like Meitu even has a dedicated image-algorithm team working on this every day. Next, I will introduce the idea behind Meitu's portrait matting.

Property introduction

globalCompositeOperation

Determines how newly drawn shapes and images are composited with the existing canvas content.

The idea

We will use the source-in value. As shown in the figure above, this mode keeps only the part where the new drawing overlaps the existing content. Why do it this way, when we just said Meitu leaves matting to back-end algorithms? Because, in order to cover more portrait-matting scenarios, the algorithm team only processes the photo into a mask and returns it to the front end, which then finishes the job itself. Take a look at the original image:

Take a look at the mask returned by the back end:

After obtaining the mask, first make its black areas transparent; draw the processed mask onto the canvas; set globalCompositeOperation to 'source-in'; then draw the original image, which is kept only where the mask is opaque. The result is the final cutout! A sketch of this compositing approach is shown below. This project was built in consultation with @xD-Tayde, a former Meitu colleague. Thanks!
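A minimal sketch of this approach, assuming originImg is the loaded photo and maskImg is the mask with its black areas already made transparent (both variable names are illustrative):

canvas.width = originImg.width
canvas.height = originImg.height
// draw the mask first: its opaque pixels mark where the person is
ctx.drawImage(maskImg, 0, 0)
// with source-in, subsequent drawing is kept only where it
// overlaps existing (non-transparent) content
ctx.globalCompositeOperation = 'source-in'
ctx.drawImage(originImg, 0, 0)
// the canvas now contains the photo clipped to the mask shape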

Demo

The result:

Rotation, scaling, translation, deformation

For rotation, scaling, translation and deformation, the canvas context (ctx) has corresponding APIs, and you can also use a transform matrix for more advanced changes. There is too much involved to cover it all here without making the article far too long, so I simply recommend an article for further study: "Unlocking Canvas Image Rotation and Flip". A tiny sketch of the basic transform calls follows.
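For orientation only, a tiny sketch of the built-in transform calls, drawing one rotated, scaled rectangle (the numbers are arbitrary):

ctx.save()
ctx.translate(150, 150)          // move the origin
ctx.rotate(Math.PI / 4)          // rotate 45 degrees around the new origin
ctx.scale(2, 0.5)                // scale x by 2, y by 0.5
// transform() / setTransform() accept a full matrix for advanced changes
ctx.fillRect(-50, -25, 100, 50)  // drawn with all transforms applied
ctx.restore()                    // back to the untransformed state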

Particles

Abstraction

We saw earlier that we can read every pixel on the canvas. A so-called particle is really an abstraction of a pixel: it has its own coordinates and its own color value, and it can "move" by changing its own properties. So we can model a particle as an object with coordinates and a color value, for example:

let particle = {
  x: 0,
  y: 0,
  rgba: '(1, 1, 1, 1)'
}

Demo – A test run

I will redraw the NetEase Pay logo with scattered particles. Core code:

// Get pixel color information
  const originImageData = ctx.getImageData(0, 0, canvas.width, canvas.height)
  const originImageDataValue = originImageData.data
  const w = canvas.width
  const h = canvas.height
  let colors = []
  let index = 0
  for (let y = 1; y <= h; y++) {
    for (let x = 1; x <= w ; x++) {
      const r = originImageDataValue[index]
      const g = originImageDataValue[index + 1]
      const b = originImageDataValue[index + 2]
      const a = originImageDataValue[index + 3]
      index += 4
      // Shuffle the pixel positions and save them in the returned data
      colors.push({
        x: x + getRandomArbitrary(-OFFSET, OFFSET),
        y: y + getRandomArbitrary(-OFFSET, OFFSET),
        color: `rgba(${r}, ${g}, ${b}, ${a})`
      })
    }
  }

Effect:

Demo – Particle animation

The three elements

  • Particle objectification
  • Easing functions
  • Performance

Particle objectification has already been introduced. The easing function, also mentioned in the games section, makes the animation more natural and lively. Performance is the big concern: for a 500 × 500 image, the amount of data is 500 × 500 × 4 = 1,000,000 values. The animation runs on requestAnimationFrame, which at the usual 60 Hz refresh rate gives a very smooth animation; but with this much data to process every frame the browser cannot keep up, the frame rate drops, and the animation stutters badly.

For performance, particle animations are usually drawn from a deliberately chosen subset of pixels: for example, only pixels whose original x coordinate is even or divisible by 4, or only pixels whose r channel in the original image is above 155. A small sketch of this sampling idea follows.
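A minimal sketch of such sampling, reusing the w, h and originImageDataValue variables from the previous snippet (the step of 4 and the threshold of 155 are just the examples mentioned above):

const sampled = []
for (let y = 1; y <= h; y += 4) {      // take every 4th row
  for (let x = 1; x <= w; x += 4) {    // and every 4th column
    const i = ((y - 1) * w + (x - 1)) * 4
    const r = originImageDataValue[i]
    const g = originImageDataValue[i + 1]
    const b = originImageDataValue[i + 2]
    const a = originImageDataValue[i + 3]
    if (r > 155) {                     // keep only pixels with a bright r channel
      sampled.push({ x, y, color: `rgba(${r}, ${g}, ${b}, ${a})` })
    }
  }
}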

With these ideas in hand, you can build all kinds of impressive particle animations.

Demo

Addresses of all Demo projects

Github.com/CodeLittleP…

Original source

Canvas – A Collection of Apps

References

Canvas Particle Animation – Tencent ISUX