What is computer graphics

Rasterization

To front-end developers, computer graphics may simply mean WebGL. In fact, the raster graphics that WebGL represents is only one part of computer graphics. As the figure below shows, the main job of rasterization is to project a sphere in three-dimensional space onto a plane. Since all the work in this article is done on the CPU, the GPU rendering pipeline will not be covered here.

  • Display geometry in three-dimensional space on the screen
  • Favored by games (real-time rendering)

Geometry

The second area is geometry: how to represent geometric objects in a computer. The picture on the left shows something familiar to the front end, a Bezier curve; we can define a smooth curve with just four control points. This field is about representing complex shapes with a few simple parameters.

Ray Tracing

That brings us to today’s topic, ray tracing. The first thing to make clear is that ray tracing and rasterization are parallel approaches: both take objects in a three-dimensional world and run a series of calculations to produce an image. Rasterization relies on a collection of tricks to approximate real-world effects, whereas ray tracing simulates actual physical phenomena and is therefore more realistic. The image on the right is the result of ray tracing, and you can see it is virtually indistinguishable from reality.

Animation/simulation

The last area is animation and simulation, which the front end knows well. Keyframe animation is `keyframes` in CSS. Anyone who has used the React Spring library knows how smooth and comfortable its demos feel: instead of drawing animations with code, you define keyframes and easing functions, or simply provide spring parameters such as stiffness.

  • Keyframe animation (CSS keyframes, AE, Lottie)
  • Mass-spring system (React Spring)

We can boil computer graphics down to these four pictures. The two on the left are both about producing an image of objects in a three-dimensional world. Geometry is about describing complex shapes with simple parameters, like this butterfly. Animation and simulation, on the other hand, are about making objects in the scene move.

Why ray tracing

Rasterization does not handle global illumination well

We mentioned earlier that ray tracing and rasterization are parallel approaches to the same goal. So why invent a second method?

One of the main problems with rasterization is that it does not handle global illumination well. Global illumination is a new concept here, but it is simply how real light behaves: light is reflected and refracted many times, whereas rasterization can only handle direct light plus, at best, a single bounce. As you can see in the image below, the area near point P is dark under direct light alone. We know from experience that this is not what the real world looks like; we can still make out details there. In the next picture, the light bounces 16 times and the area near point P is lit, which matches the real environment much better.

Rasterization is fast but of low quality

Of course, each technique has advantages and disadvantages. Rasterization is fast but not very realistic, and is mainly used for real-time rendering, such as in games. Ray tracing, as in the image below, produces excellent results but is extremely slow, and is mainly used for offline rendering, such as in animated films.

Ray tracing is particularly realistic, but slow

  • Ray tracing: Offline
  • Rasterization: real time
  • It can take on the order of 10K CPU core hours to render a single frame

Coordinate space

Now that we have a general understanding of ray tracing, let’s look at the implementation, which, as we all know, boils down to:

Algorithm + data structure

Let’s start with our core data structure, the vector, which we will use to describe coordinates, normals, and colors:

  • Position: determines a location in 3D space (x, y, z)
  • Normal: describes surface orientation (e.g., a face whose normal points away from the viewer cannot be seen)
  • Color: an RGB value

The following code is our implementation of a three-dimensional vector class. It mainly implements the four arithmetic operations and the dot product. Since we will be instantiating this class many times, putting the actual algorithms in static methods should save a bit of memory. (Since TypeScript has no plans for operator overloading, the operations are written as methods on the class.)

```typescript
export default class Vec3 {
  /**
   * The vector class Vec3, which can express a point (coordinates),
   * a direction, or a color (an RGB value) in three-dimensional space.
   * @param e0
   * @param e1
   * @param e2
   */
  constructor(public e0 = 0, public e1 = 0, public e2 = 0) {}

  // ---------- static methods start ----------

  // addition
  static add(v1: number | Vec3, v2: number | Vec3): Vec3 {
    return typeof v1 === 'number'
      ? Vec3.add(new Vec3(v1, v1, v1), v2)
      : typeof v2 === 'number'
      ? Vec3.add(v1, new Vec3(v2, v2, v2))
      : new Vec3(v1.e0 + v2.e0, v1.e1 + v2.e1, v1.e2 + v2.e2)
  }

  // subtraction
  static sub(v1: number | Vec3, v2: number | Vec3): Vec3 {
    return typeof v1 === 'number'
      ? Vec3.sub(new Vec3(v1, v1, v1), v2)
      : typeof v2 === 'number'
      ? Vec3.sub(v1, new Vec3(v2, v2, v2))
      : new Vec3(v1.e0 - v2.e0, v1.e1 - v2.e1, v1.e2 - v2.e2)
  }

  // multiplication
  static mul(v1: number | Vec3, v2: number | Vec3): Vec3 {
    return typeof v1 === 'number'
      ? Vec3.mul(new Vec3(v1, v1, v1), v2)
      : typeof v2 === 'number'
      ? Vec3.mul(v1, new Vec3(v2, v2, v2))
      : new Vec3(v1.e0 * v2.e0, v1.e1 * v2.e1, v1.e2 * v2.e2)
  }

  // division
  static div(v1: number | Vec3, v2: number | Vec3): Vec3 {
    return typeof v1 === 'number'
      ? Vec3.div(new Vec3(v1, v1, v1), v2)
      : typeof v2 === 'number'
      ? Vec3.div(v1, new Vec3(v2, v2, v2))
      : new Vec3(v1.e0 / v2.e0, v1.e1 / v2.e1, v1.e2 / v2.e2)
  }

  // dot product
  static dot(v1: Vec3, v2: Vec3) {
    return v1.e0 * v2.e0 + v1.e1 * v2.e1 + v1.e2 * v2.e2
  }

  // ---------- static methods end ----------

  // ---------- instance wrappers around the static methods start ----------

  add(v: Vec3 | number) {
    return Vec3.add(this, v)
  }

  sub(v: Vec3 | number) {
    return Vec3.sub(this, v)
  }

  mul(v: Vec3 | number) {
    return Vec3.mul(this, v)
  }

  div(v: Vec3 | number) {
    return Vec3.div(this, v)
  }

  // ---------- instance wrappers end ----------

  // unit vector (the vector's direction)
  unitVec() {
    return this.div(this.length())
  }

  // squared vector length
  squaredLength() {
    return this.e0 ** 2 + this.e1 ** 2 + this.e2 ** 2
  }

  // vector length
  length() {
    return this.squaredLength() ** (1 / 2)
  }
}
```
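A quick sanity check of the class's geometry helpers. (Only the pieces the example needs are inlined here so the snippet runs on its own; in the project you would import the full Vec3 above.)

```typescript
// Minimal inline subset of the Vec3 class above, just enough to run.
class Vec3 {
  constructor(public e0 = 0, public e1 = 0, public e2 = 0) {}
  static dot(v1: Vec3, v2: Vec3) {
    return v1.e0 * v2.e0 + v1.e1 * v2.e1 + v1.e2 * v2.e2
  }
  div(s: number) { return new Vec3(this.e0 / s, this.e1 / s, this.e2 / s) }
  length() { return (this.e0 ** 2 + this.e1 ** 2 + this.e2 ** 2) ** 0.5 }
  unitVec() { return this.div(this.length()) }
}

const v = new Vec3(3, 4, 0)
console.log(v.length())     // 5: the classic 3-4-5 triangle
console.log(v.unitVec().e0) // 0.6: same direction, length 1
console.log(Vec3.dot(new Vec3(1, 2, 3), new Vec3(4, 5, 6))) // 1*4 + 2*5 + 3*6 = 32
```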

The light

Three ideas about light

  1. Light travels in straight lines (not strictly true, given wave-particle duality)
  2. Rays that cross do not “collide” with each other (also not strictly true)
  3. Light travels from the light source to the eye (tracing it in reverse is valid, since light paths are reversible)

The light class

As the figure above shows, all we need to describe a ray is its origin and its direction. A ray also travels over time, so we can abstract it into the equation r(t) = O + t·D, which gives the ray's position at any given moment.

  • O: origin of the ray (a Vec3)
  • D: direction of the ray (a vector)
  • t: the time the light has traveled
  • r(t): the position of the ray at time t

The following code implements the above equation. The constructor takes the origin and direction vectors, and the getPoint method evaluates r(t).

```typescript
export default class Ray {
  /**
   * Represents a ray of light.
   * @param origin the coordinates of the ray's starting position
   * @param direction the ray's direction
   */
  constructor(public origin: Vec3, public direction: Vec3) {}

  /**
   * Calculates the position of the ray at time parameter t.
   * @param t time parameter
   */
  getPoint(t: number) {
    // Equivalent to: the ray's origin plus the distance it travels in that time
    return this.origin.add(this.direction.mul(t))
  }
}
```
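As a concrete check of r(t) = O + t·D (with minimal inline stand-ins for Vec3 and Ray so the snippet runs standalone):

```typescript
// Minimal stand-ins for the Vec3 and Ray classes above, just enough to run.
class Vec3 {
  constructor(public e0 = 0, public e1 = 0, public e2 = 0) {}
  add(v: Vec3) { return new Vec3(this.e0 + v.e0, this.e1 + v.e1, this.e2 + v.e2) }
  mul(s: number) { return new Vec3(this.e0 * s, this.e1 * s, this.e2 * s) }
}

class Ray {
  constructor(public origin: Vec3, public direction: Vec3) {}
  getPoint(t: number) { return this.origin.add(this.direction.mul(t)) }
}

// A ray starting at (0, 0, 0) heading in direction (1, 2, 3):
const r = new Ray(new Vec3(0, 0, 0), new Vec3(1, 2, 3))
const p = r.getPoint(2) // O + 2·D = (2, 4, 6)
console.log(p.e0, p.e1, p.e2) // 2 4 6
```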

The ball

With only light and no objects we can’t see anything in the scene, so here we add something that is impossible in the real world: a perfect sphere. It is, however, very easy to describe in code.

How do you describe a ball

  • Center
  • Radius

```typescript
export default class Sphere {
  /**
   * A ball (the required information: center and radius).
   * @param center the coordinates of the sphere's center
   * @param radius the radius
   */
  constructor(public center: Vec3, public radius: number) {}
}
```

How the ball and light come into contact (collision)

We all know that most objects in reality do not emit light themselves; like the moon, they reflect light from a source, which is what allows us to see them. Reflection means an incoming ray hits a point on the object and then bounces off as an outgoing ray.

In our previous work we abstracted the ray and the sphere into two formulas, and now we just need to combine the two equations and solve them. We substitute r(t), the position of the ray at time t, for p in the sphere equation. This produces a quadratic equation in t, which we solve with the quadratic formula.
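Concretely, substituting p = O + t·D into |p − C|² = R² gives at² + bt + c = 0 with a = D·D, b = 2·D·(O − C), and c = (O − C)·(O − C) − R². A tiny standalone check with plain numbers: a sphere at (0, 0, −1) with radius 0.5, and a ray from the origin pointing down −z.

```typescript
// Solve the ray-sphere quadratic for a concrete case.
// Ray: O = (0, 0, 0), D = (0, 0, -1); sphere: C = (0, 0, -1), R = 0.5.
const dot = (u: number[], v: number[]) => u[0] * v[0] + u[1] * v[1] + u[2] * v[2]

const O = [0, 0, 0], D = [0, 0, -1], C = [0, 0, -1], R = 0.5
const oc = [O[0] - C[0], O[1] - C[1], O[2] - C[2]] // O - C = (0, 0, 1)

const a = dot(D, D)            // 1
const b = 2 * dot(oc, D)       // -2
const c = dot(oc, oc) - R ** 2 // 0.75

const discriminant = b ** 2 - 4 * a * c // 1: positive, so the ray pierces the sphere twice
const tNear = (-b - Math.sqrt(discriminant)) / (2 * a) // 0.5: the front surface
const tFar = (-b + Math.sqrt(discriminant)) / (2 * a)  // 1.5: the back surface
console.log(tNear, tFar) // 0.5 1.5
```

The nearest root, t = 0.5, puts the hit point at (0, 0, −0.5), exactly where the ray first touches the sphere's surface.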

Before we write the actual algorithm, we need to decide how to record a collision 💥. We construct a class with three parameters. The collision time and position are easy to understand; the normal needs a word of explanation. On an ideal surface, the angle of incidence equals the angle of reflection, with the normal bisecting the two, so we record the normal in order to compute the outgoing ray later.

```typescript
export default class HitRecord {
  /**
   * A collision record.
   * @param t the time parameter t
   * @param p the coordinates where the collision happened
   * @param normal the normal at the point of collision
   */
  constructor(
    public t: number = 0,
    public p: Vec3 = new Vec3(0, 0, 0),
    public normal: Vec3 = new Vec3(0, 0, 0)
  ) {}
}
```

The following is the implementation of the collision algorithm. It mainly applies the quadratic formula to find the collision time t, then derives the collision position and the normal from it, and finally uses a method on the Ray class to compute the reflected ray.

```typescript
export default class Sphere implements HitableInterface {
  // ...

  hit(ray: Ray, t_min: number, t_max: number): [HitRecord, Ray] | undefined {
    let hit = new HitRecord()

    // The vector from the sphere's center to the ray's origin
    const oc = Vec3.sub(ray.origin, this.center)
    const a = Vec3.dot(ray.direction, ray.direction)
    const b = Vec3.dot(oc, ray.direction) * 2
    const c = Vec3.dot(oc, oc) - this.radius ** 2

    const discriminate = b ** 2 - 4 * a * c

    if (discriminate > 0) {
      let t
      t = (-b - Math.sqrt(discriminate)) / (2 * a)
      if (t > t_min && t < t_max) {
        hit.t = t
        hit.p = ray.getPoint(t)
        // Normal direction
        hit.normal = hit.p.sub(this.center).div(this.radius)

        return [hit, ray.reflect(hit)]
      }
      t = (-b + Math.sqrt(discriminate)) / (2 * a)
      if (t > t_min && t < t_max) {
        hit.t = t
        hit.p = ray.getPoint(t)
        hit.normal = hit.p.sub(this.center).div(this.radius)

        return [hit, ray.reflect(hit)]
      }
    }
  }
}
```
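The `ray.reflect(hit)` call above relies on a Ray method that the text mentions but never shows. A hedged sketch of what it might look like, using the standard mirror-reflection formula d′ = d − 2(d·n)n (a minimal inline Vec3 is included so the snippet runs standalone; the `HitLike` shape is an assumption, and a real path tracer would also perturb the direction for diffuse materials):

```typescript
// Minimal inline Vec3 subset, just enough to run.
class Vec3 {
  constructor(public e0 = 0, public e1 = 0, public e2 = 0) {}
  static dot(u: Vec3, v: Vec3) { return u.e0 * v.e0 + u.e1 * v.e1 + u.e2 * v.e2 }
  sub(v: Vec3) { return new Vec3(this.e0 - v.e0, this.e1 - v.e1, this.e2 - v.e2) }
  mul(s: number) { return new Vec3(this.e0 * s, this.e1 * s, this.e2 * s) }
}

// The shape of the hit record the method needs: hit point p and unit normal.
interface HitLike { p: Vec3; normal: Vec3 }

class Ray {
  constructor(public origin: Vec3, public direction: Vec3) {}

  // Mirror reflection: the outgoing ray starts at the hit point,
  // with direction d - 2(d·n)n.
  reflect(hit: HitLike): Ray {
    const d = this.direction
    const out = d.sub(hit.normal.mul(2 * Vec3.dot(d, hit.normal)))
    return new Ray(hit.p, out)
  }
}

// A ray going down-right, bouncing off a floor whose normal is (0, 1, 0):
const r = new Ray(new Vec3(0, 1, 0), new Vec3(1, -1, 0))
const bounced = r.reflect({ p: new Vec3(1, 0, 0), normal: new Vec3(0, 1, 0) })
console.log(bounced.direction.e0, bounced.direction.e1) // 1 1
```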

The camera

Now that we have the space, the light, and the ball, we also need a camera to look at the objects we have created. We established earlier that light paths are reversible, so now we treat the camera as the origin and draw a line from it through a pixel on the canvas: that gives us both the origin and the direction of a ray. The origin is something we set ourselves; the direction takes a little more work. We know that two points determine a direction vector, but while the camera's 3D coordinates are known, a canvas pixel only has 2D coordinates. We turn the pixel into a 3D point by starting from the bottom-left corner of the canvas and adding the pixel's horizontal and vertical components; subtracting the camera's position from that point then gives us the ray's direction.

Create a Camera class, assuming rays start from the camera 📷. (Here origin/leftBottom are coordinates, and horizontal/vertical are vectors.)

```typescript
export default class Camera {
  /**
   * @param origin coordinates: the camera position
   * @param leftBottom coordinates: the bottom-left corner of the screen
   * @param horizontal vector: the horizontal vector
   * @param vertical vector: the vertical vector
   */
  constructor(public origin: Vec3, public leftBottom: Vec3, public horizontal: Vec3, public vertical: Vec3) {}

  /**
   * Gets the ray for a pixel.
   * @param x x coordinate
   * @param y y coordinate
   */
  getRay(x: number, y: number): Ray {
    return new Ray(
      this.origin,
      this.leftBottom
        .add(this.horizontal.mul(x))
        .add(this.vertical.mul(y))
        .sub(this.origin)
    )
  }
}
```

The core algorithm

Now that all the groundwork is done, we just need to let a ray bounce through a series of collisions until it reaches the light source, and collect the resulting color value. That gives us the color of one pixel on the canvas, and that is basic path tracing. Because each collision repeats the same process, the body of our implementation is a recursive function.

Path tracing

Each ray follows only a single path; it is not split into multiple paths within the scene.

The first thing a recursive function needs is an exit condition, otherwise it recurses forever. We roughly assume that if a ray has bounced 50 times without reaching the light source, it is black. This matches the real world reasonably well: light loses energy with every bounce, and after 50 bounces very little energy remains.

Now for the main logic. We first iterate over all objects in the scene and compute the collision record for the nearest intersection between the ray and any object. If there is a hit, we pass the reflected ray into the next recursive call, and, keeping things simple, we assume these magical objects absorb half of the light's energy at every wavelength. (Objects appear in different colors because they respond differently to different wavelengths of light.) If the ray is lucky enough not to touch any object, we consider it to have reached our light source, which happens to be an idealized color gradient along the y axis.

```typescript
/**
 * @param sence the objects in the scene
 * @param r the ray
 * @param step the number of times the ray has bounced so far
 */
function trace(sence: HitList, r: Ray, step = 0): Vec3 {
  // Only follow up to 50 bounces
  if (step > 50) return new Vec3(0, 0, 0)

  // Walk through the objects in the scene, compute the record of the
  // nearest intersection between the ray and an object,
  // and get back the reflected ray as well
  const hit = sence.hit(r, 0.0000001, Infinity)
  // The result to return
  let res: Vec3

  if (hit) {
    // ----- Hit an object -----
    // Recurse
    res = trace(sence, hit[1], ++step).mul(0.5)
  } else {
    // ----- Hit the background (the light source) -----
    // unitDirection: unit vector
    const unitDirection = r.direction.unitVec(),
      // A number derived from the y coordinate
      t = (unitDirection.e1 + 1.0) * 0.5

    // Compute a gradient color value
    res = Vec3.add(new Vec3(1, 1, 1).mul(1 - t), new Vec3(0.3, 0.5, 1).mul(t))
  }
  return res
}
```

From the very beginning

With the basics out of the way, let’s create a simple path-tracing scene from scratch. (The only reason we chose Snowpack over Vite is that it provides a pure TypeScript template.)

Snowpack

```shell
npx create-snowpack-app rey --template @snowpack/app-template-blank-typescript --use-yarn
```

Create a Canvas scene

```html
<div id="app">
  <div class="processbar">
    <div class="processline" id="processline"></div>
  </div>
  <canvas id="cv"></canvas>
</div>
```
```css
html, body {
  width: 100%;
  height: 100%;
  margin: 0;
  padding: 0;
}

* {
  box-sizing: border-box;
}

#app {
  width: 100%;
  height: 100%;
  position: relative;
  background-color: aquamarine;
}

#app canvas {
  width: 800px;
  height: 400px;
  position: absolute;
  left: 50%;
  top: 50%;
  transform: translate(-50%, -50%);
  box-shadow: 0 4px 12px rgba(0, 0, 0, 0.3);
}

.processbar {
  width: 100%;
  height: 2px;
  position: absolute;
  top: 0;
  background: #fff;
}

.processline {
  width: 0%;
  height: 100%;
  background-color: #90aeff;
}
```

When rendering, we want to use as much of the computer's resources as possible, so we spin up multiple Web Workers to speed things up; the last parameter of initTasks is the worker count. Since the work happens off the main thread, there is no risk of blocking the UI.

```typescript
const height = 400
const width = 800

const canvas = document.getElementsByTagName('canvas')[0]
canvas.height = height
canvas.width = width

const ctx = canvas.getContext('2d')
const image = ctx?.createImageData(width, height) as ImageData
const bar = document.getElementById('processline') as HTMLElement
const amount = navigator.hardwareConcurrency - 1

// ...

// During development, don't kick off work before anything is visible
requestAnimationFrame(() => {
  initTasks(ctx, width, height, amount)
})
```

In initTasks, we first compute the total number of canvas pixels, n, and the number of pixels each task needs to process, len. We then loop over every pixel in two nested loops, the outer over y and the inner over x. Each iteration pushes a newly created Px instance into the current task; whenever the task's pixel count reaches len, or we reach the last pixel, the task is executed (performTask).

```typescript
function initTasks(
  ctx: CanvasRenderingContext2D | null,
  width: number,
  height: number,
  amount: number
) {
  // The number of canvas pixels, n
  const n = width * height
  // The number of pixels each task processes, len
  const len = Math.ceil(n / amount)

  // Wraps the pixel set and the image dimensions
  let task = new RenderTask([], width, height)

  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      // Loop over every pixel, outer loop y, inner loop x.
      // Each iteration pushes a newly created Px instance into the task
      task.pixels.push(new Px(x, y))

      // Whenever the task's pixel count reaches len, or this is the last pixel,
      // execute the task (performTask)
      if (task.pixels.length >= len || y * width + x === n - 1) {
        performTask(task, n)
        task = new RenderTask([], width, height)
      }
    }
  }
}
```

The RenderTask instance records the image size (width, height), and all pixels.

```typescript
export default class RenderTask {
  constructor(public pixels: Px[], public width: number, public height: number) {}
}
```

The Px instance records the coordinate information and RGBA value of a point.

```typescript
export default class Px {
  r: number
  g: number
  b: number
  a: number

  constructor(public x: number, public y: number) {
    this.r = this.g = this.b = this.a = 0
  }
}
```

In the performTask function, we create a new Web Worker and post the task to it for execution. The taskMsg object handles the results the worker sends back. It has two methods: partComplete receives partial results and paints them, and allComplete receives the final batch and then shuts the worker down.

```typescript
let complete = 0
function performTask(task: RenderTask, mount: number) {
  const worker = new Worker('./_dist_/task.worker.js', {
    type: 'module',
  })

  worker.postMessage({
    method: 'render',
    args: [task],
  })

  const taskMsg: { [key: string]: Function } = {
    partComplete(worker: Worker, task: RenderTask) {
      task.pixels.forEach((v) => {
        const position = (v.x + v.y * task.width) * 4
        image.data[position] = v.r
        image.data[position + 1] = v.g
        image.data[position + 2] = v.b
        image.data[position + 3] = v.a
      })

      complete += task.pixels.length
      bar.style.width = (complete / mount) * 100 + '%'
      ctx?.putImageData(image, 0, 0)
    },
    allComplete(worker: Worker, task: RenderTask | null) {
      if (task) {
        task.pixels.forEach((v) => {
          const position = (v.x + v.y * task.width) * 4
          image.data[position] = v.r
          image.data[position + 1] = v.g
          image.data[position + 2] = v.b
          image.data[position + 3] = v.a
        })

        complete += task.pixels.length
        bar.style.width =
          (complete / mount > 1 ? 1 : complete / mount) * 100 + '%'
        ctx?.putImageData(image, 0, 0)
      }

      worker.terminate()
    },
  }

  worker.onmessage = function (res: { data: { method: string; args: any[] } }) {
    const { method, args } = res.data
    if (taskMsg[method]) {
      taskMsg[method](worker, ...args)
    } else {
      alert(`app: can't find method (${method})`)
    }
  }
}
```

Then in task.worker.ts we need to receive the message and perform the calculation.

```typescript
const appMsg: { [key: string]: Function } = {
  render,
}

onmessage = function (e) {
  const { method, args = [] } = e.data

  if (appMsg[method]) {
    appMsg[method](...args)
  } else {
    console.log(`taskWorker: can't find method (${method})`)
  }
}
```

The render function calculates each pixel separately and finally sends all the results back.

```typescript
function render(task: RenderTask) {
  const { pixels, width, height } = task

  pixels.forEach((v) => {
    RenderPixel(v, width, height)
  })
  ;(<any>postMessage)({
    method: 'allComplete',
    args: [task],
  })
}
```

The RenderPixel function is the focus of everything that follows; for now we stand in a trivial placeholder.

```typescript
export default function RenderPixel(v: Px, width: number, height: number) {
  v.r = v.g = v.b = v.a = 255
}
```

👌 Now we can see a result, but the amount of data sent back in one go is too large; if the scene were particularly complex, it would take a long time before anything appears. So instead of sending all the results back at once, we send them back in batches. The render function then looks like this:

```typescript
function render(task: RenderTask) {
  const { pixels, width, height } = task
  // -----------
  const len = 400

  let res = new RenderTask([], width, height)

  pixels.forEach((v) => {
    RenderPixel(v, width, height)
    res.pixels.push(v)

    // Send results back in batches of 400 pixels
    if (res.pixels.length >= len) {
      ;(<any>postMessage)({
        method: 'partComplete',
        args: [res],
      })

      res = new RenderTask([], width, height)
    }
  })
  ;(<any>postMessage)({
    method: 'allComplete',
    args: [res],
  })
  // -----------
}
```

In practice, some pixels are cheap to compute and some are expensive, which often leaves a large gap in working time between workers. So we assign pixels to workers randomly. Change initTasks to the following:

```typescript
function initTasks(
  ctx: CanvasRenderingContext2D | null,
  width: number,
  height: number,
  amount: number,
) {
  const n = width * height
  const len = Math.ceil(n / amount)

  // Store the pixels in a two-dimensional array
  const pixels: Px[][] = []
  // Build the two-dimensional array
  for (let y = 0; y < height; y++) {
    pixels.push([])
    for (let x = 0; x < width; x++) {
      pixels[y].push(new Px(x, y))
    }
  }

  let task = new RenderTask([], width, height)
  while (pixels.length) {
    // Pick a random y
    const y = Math.floor(Math.random() * (pixels.length - 0.0001))
    // Select that row
    const pxs = pixels[y]
    // Pick a random x (the small offset keeps the index in range)
    const x = Math.floor(Math.random() * (pxs.length - 0.0001))
    // Take that pixel out of the row
    const px = pxs.splice(x, 1)[0]

    task.pixels.push(px)
    // Remove the row once it is empty
    if (pxs.length == 0) pixels.splice(y, 1)

    if (task.pixels.length >= len || pixels.length == 0) {
      performTask(task, n)
      task = new RenderTask([], width, height)
    }
  }
}
```

Finally, we have the frame ready. The rest is essentially what we covered in the principles section.

Convert coordinates to colors

We already have a placeholder RenderPixel function occupying this spot:

```typescript
export default function RenderPixel(v: Px, width: number, height: number) {
  v.r = v.g = v.b = v.a = 255
}
```

Our main task is to compute a color value from the pixel's coordinates, so we extract a separate color function, and for convenience convert the coordinate range to [0, 1]. (We convert y to 1 − y because the canvas's y axis points downward, while we are more used to a coordinate system whose y axis points upward.)

```typescript
function color(_x: number, _y: number) {
  const [x, y] = [_x, 1 - _y]
  return [x, y, 0.2]
}

export default function RenderPixel(v: Px, width: number, height: number) {
  [v.r, v.g, v.b, v.a] = [...color(v.x / width, v.y / height), 1].map((v) =>
    Math.floor(v * 255)
  )
}
```
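A standalone sketch of this mapping for one pixel. The top-left canvas pixel has canvas coordinate y = 0, which our flip turns into y = 1, so its green channel saturates:

```typescript
function color(_x: number, _y: number) {
  const [x, y] = [_x, 1 - _y] // flip y so the axis points upward
  return [x, y, 0.2]
}

// Top-left canvas pixel (0, 0) on an 800×400 canvas:
const [r, g, b, a] = [...color(0 / 800, 0 / 400), 1].map((v) => Math.floor(v * 255))
console.log(r, g, b, a) // 0 255 51 255
```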

Establish coordinate system Vec3

We’ve seen this before when we talked about the principles

```typescript
export default class Vec3 {
  /**
   * The vector class Vec3, which can express a point (coordinates),
   * a direction, or a color (an RGB value) in three-dimensional space.
   * @param e0
   * @param e1
   * @param e2
   */
  constructor(public e0 = 0, public e1 = 0, public e2 = 0) {}

  // ---------- static methods start ----------

  // addition
  static add(v1: number | Vec3, v2: number | Vec3): Vec3 {
    return typeof v1 === 'number'
      ? Vec3.add(new Vec3(v1, v1, v1), v2)
      : typeof v2 === 'number'
      ? Vec3.add(v1, new Vec3(v2, v2, v2))
      : new Vec3(v1.e0 + v2.e0, v1.e1 + v2.e1, v1.e2 + v2.e2)
  }

  // subtraction
  static sub(v1: number | Vec3, v2: number | Vec3): Vec3 {
    return typeof v1 === 'number'
      ? Vec3.sub(new Vec3(v1, v1, v1), v2)
      : typeof v2 === 'number'
      ? Vec3.sub(v1, new Vec3(v2, v2, v2))
      : new Vec3(v1.e0 - v2.e0, v1.e1 - v2.e1, v1.e2 - v2.e2)
  }

  // multiplication
  static mul(v1: number | Vec3, v2: number | Vec3): Vec3 {
    return typeof v1 === 'number'
      ? Vec3.mul(new Vec3(v1, v1, v1), v2)
      : typeof v2 === 'number'
      ? Vec3.mul(v1, new Vec3(v2, v2, v2))
      : new Vec3(v1.e0 * v2.e0, v1.e1 * v2.e1, v1.e2 * v2.e2)
  }

  // division
  static div(v1: number | Vec3, v2: number | Vec3): Vec3 {
    return typeof v1 === 'number'
      ? Vec3.div(new Vec3(v1, v1, v1), v2)
      : typeof v2 === 'number'
      ? Vec3.div(v1, new Vec3(v2, v2, v2))
      : new Vec3(v1.e0 / v2.e0, v1.e1 / v2.e1, v1.e2 / v2.e2)
  }

  // dot product
  static dot(v1: Vec3, v2: Vec3) {
    return v1.e0 * v2.e0 + v1.e1 * v2.e1 + v1.e2 * v2.e2
  }

  // ---------- static methods end ----------

  // ---------- instance wrappers around the static methods start ----------

  add(v: Vec3 | number) {
    return Vec3.add(this, v)
  }

  sub(v: Vec3 | number) {
    return Vec3.sub(this, v)
  }

  mul(v: Vec3 | number) {
    return Vec3.mul(this, v)
  }

  div(v: Vec3 | number) {
    return Vec3.div(this, v)
  }

  // ---------- instance wrappers end ----------

  // unit vector (the vector's direction)
  unitVec() {
    return this.div(this.length())
  }

  // squared vector length
  squaredLength() {
    return this.e0 ** 2 + this.e1 ** 2 + this.e2 ** 2
  }

  // vector length
  length() {
    return this.squaredLength() ** (1 / 2)
  }
}
```

Set up the light

```typescript
export default class Ray {
  /**
   * Represents a ray of light.
   * @param origin the coordinates of the ray's starting position
   * @param direction the ray's direction
   */
  constructor(public origin: Vec3, public direction: Vec3) {}

  /**
   * Calculates the position of the ray at time parameter t.
   * @param t time parameter
   */
  getPoint(t: number) {
    // Equivalent to: the ray's origin plus the distance it travels in that time
    return this.origin.add(this.direction.mul(t))
  }
}
```

Set up the camera

```typescript
export default class Camera {
  /**
   * @param origin coordinates: the camera position
   * @param leftBottom coordinates: the bottom-left corner of the screen
   * @param horizontal vector: the horizontal vector
   * @param vertical vector: the vertical vector
   */
  constructor(public origin: Vec3, public leftBottom: Vec3, public horizontal: Vec3, public vertical: Vec3) {}

  /**
   * Gets the ray for a pixel.
   * @param x x coordinate
   * @param y y coordinate
   */
  getRay(x: number, y: number): Ray {
    return new Ray(
      this.origin,
      this.leftBottom
        .add(this.horizontal.mul(x))
        .add(this.vertical.mul(y))
        .sub(this.origin),
    )
  }
}
```

Generate the background

Modify the color function. Now that we have the camera and rays, we can generate a background based on the ray's direction.

```typescript
const camera = new Camera(
  new Vec3(0, 0, 1), // origin
  new Vec3(-2, -1, -1), // leftBottom
  new Vec3(4, 0, 0), // horizontal
  new Vec3(0, 2, 0), // vertical
)

function color(_x: number, _y: number) {
  const [x, y] = [_x, 1 - _y]

  // Shoot a ray from the camera
  const r = camera.getRay(x, y)

  // Set the background color
  // unitDirection: unit vector
  const unitDirection = r.direction.unitVec(),
    // A number derived from the y coordinate
    t = (unitDirection.e1 + 1.0) * 0.5

  // Compute a gradient color value
  // by blending the two colors
  const res = Vec3.add(
    // (1, 1, 1) * (1 - t)
    new Vec3(1, 1, 1).mul(1 - t),
    // (0.3, 0.5, 1) * t
    new Vec3(0.3, 0.5, 1).mul(t),
  )

  return [res.e0, res.e1, res.e2]
}
```

Add a ball

Before adding a ball, we need to write an interface, because in our world both a single ball and a set of balls (all the objects in the scene) are assumed to be hittable by light.

So we abstract a HitableInterface containing a hit method that returns a HitRecord (or undefined) given a ray r and the time-parameter range t_min/t_max.

A HitRecord contains the time parameter t, the coordinate p where the collision occurred, and the normal direction at the moment of collision.

```typescript
// hitRecord.ts

export default class HitRecord {
  /**
   * A collision record.
   * @param t the time parameter t
   * @param p the coordinates where the collision happened
   * @param normal the normal at the point of collision
   */
  constructor(
    public t: number = 0,
    public p: Vec3 = new Vec3(0, 0, 0),
    public normal: Vec3 = new Vec3(0, 0, 0)
  ) {}
}
```

```typescript
// hitable.interface.ts
export default interface HitableInterface {
  /**
   * Interacts with a ray.
   * @param ray the ray
   * @param t_min lower bound of the time range
   * @param t_max upper bound of the time range
   */
  hit: (ray: Ray, t_min: number, t_max: number) => HitRecord | undefined
}

type HitResult = ReturnType<HitableInterface['hit']>
export type { HitResult }
```

Then we implement the sphere; the algorithm was described in the principles section.

export default class Sphere implements HitableInterface {
  /**
   * Ball (required information: center and radius)
   * @param center coordinates of the sphere's center
   * @param radius the radius
   */
  constructor(public center: Vec3, public radius: number) {}

  hit(ray: Ray, t_min: number, t_max: number) {
    const hit = new HitRecord()

    // Coefficients of the quadratic |origin + t * direction - center|^2 = radius^2
    const oc = Vec3.sub(ray.origin, this.center)
    const a = Vec3.dot(ray.direction, ray.direction)
    const b = Vec3.dot(oc, ray.direction) * 2
    const c = Vec3.dot(oc, oc) - this.radius ** 2

    const discriminant = b ** 2 - 4 * a * c

    if (discriminant > 0) {
      // Try the nearer root first, then the farther one
      let temp = (-b - Math.sqrt(discriminant)) / (2 * a)
      if (temp > t_min && temp < t_max) {
        hit.t = temp
        hit.p = ray.getPoint(temp)
        hit.normal = hit.p.sub(this.center).div(this.radius)
        return hit
      }
      temp = (-b + Math.sqrt(discriminant)) / (2 * a)
      if (temp > t_min && temp < t_max) {
        hit.t = temp
        hit.p = ray.getPoint(temp)
        hit.normal = hit.p.sub(this.center).div(this.radius)
        return hit
      }
    }
  }
}
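To sanity-check the quadratic, consider a ray fired from the origin straight at the demo ball, center (0, 0, -1) and radius 0.5. This is a self-contained sketch with plain arrays, not the Sphere class itself:

```typescript
// Ray p(t) = origin + t * direction; sphere |p - center|^2 = radius^2.
// Substituting the ray into the sphere equation gives a*t^2 + b*t + c = 0.
const dot = (a: number[], b: number[]) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

const origin = [0, 0, 0], direction = [0, 0, -1]
const center = [0, 0, -1], radius = 0.5

const oc = origin.map((v, i) => v - center[i]) // origin - center = (0, 0, 1)

const a = dot(direction, direction)            // 1
const b = 2 * dot(oc, direction)               // -2
const c = dot(oc, oc) - radius ** 2            // 0.75

const discriminant = b ** 2 - 4 * a * c        // 1: two roots, the ray pierces the ball
const tNear = (-b - Math.sqrt(discriminant)) / (2 * a) // 0.5: front surface at z = -0.5
const tFar = (-b + Math.sqrt(discriminant)) / (2 * a)  // 1.5: back surface at z = -1.5
console.log(tNear, tFar)
```

The two roots are the front and back surfaces of the ball, which is why the hit method tries the smaller root first.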

Add the ball to the scene

Create a new sphere with new Sphere(new Vec3(0, 0, -1), 0.5). Then modify the color function to return (0, 1, 0) if the ray collides with the ball, or the background color otherwise.

// ...

const ball = new Sphere(new Vec3(0, 0, -1), 0.5)



function color(_x: number, _y: number) {

  const [x, y] = [ _x, 1 -  _y];



   // Emit a ray of light from the camera

  const r = camera.getRay(x, y);

  const hit = ball.hit(r, 0, Infinity)

  let res: Vec3

  if (hit) {
    res = new Vec3(0, 1, 0)
  } else {
    // Set the background color
    const unitDirection = r.direction.unitVec(),
      t = (unitDirection.e1 + 1.0) * 0.5

    res = Vec3.add(new Vec3(1, 1, 1).mul(1 - t), new Vec3(0.3, 0.5, 1).mul(t))
  }

  return [res.e0, res.e1, res.e2]

}

Create a scene group

Remember the HitableInterface we created earlier? We created it because both a single ball and a set of balls (all objects in the scene) can be hit by light. We have implemented the ball; now we need to implement the set. That is the HitList class, which also implements HitableInterface. It is instantiated with a HitableInterface[] of the hittable objects in the scene, and its hit method calls each internal object's hit method in turn, returning the hit with the smallest t (the closest object).

export default class HitList implements HitableInterface {

  // Objects in this group

  list: HitableInterface[]



  constructor(...arg: HitableInterface[]) {

    this.list = arg

  }



  // Call each internal hitable's hit method, keeping the result with the smallest t

  hit(ray: Ray, t_min: number, t_max: number) {

    let closest_t = t_max,

      hit: HitResult = undefined



    this.list.forEach((v) => {

      let _hit = v.hit(ray, t_min, t_max)

      if (_hit && _hit.t < closest_t) {

        hit = _hit

        closest_t = _hit.t

      }

    })

    return hit

  }

}
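The closest-hit selection can be exercised in isolation. This sketch uses stub hitables rather than real spheres; it also shrinks the search window to [t_min, closest] as hits are found, a common refinement of the same idea that lets later objects skip already-occluded hits:

```typescript
// Minimal stand-ins mirroring the HitList selection logic.
type Hit = { t: number } | undefined
interface Hitable { hit: (tMin: number, tMax: number) => Hit }

function closestHit(list: Hitable[], tMin: number, tMax: number): Hit {
  let closest = tMax
  let result: Hit = undefined
  for (const h of list) {
    const hit = h.hit(tMin, closest) // shrink the window as hits accumulate
    if (hit && hit.t < closest) {
      result = hit
      closest = hit.t
    }
  }
  return result
}

// A "near" object at t = 1 and a "far" one at t = 3: the near one wins,
// regardless of the order they appear in the list.
const near: Hitable = { hit: (a, b) => (1 > a && 1 < b ? { t: 1 } : undefined) }
const far: Hitable = { hit: (a, b) => (3 > a && 3 < b ? { t: 3 } : undefined) }
console.log(closestHit([far, near], 0, Infinity)) // { t: 1 }
```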

Add multiple balls

// renderPixel.ts

// ...

// const ball = new Sphere(new Vec3(0, 0, -1), 0.5)

const ball1 = new Sphere(new Vec3(0, 0, -1), 0.5)

const ball2 = new Sphere(new Vec3(1, 0, -1), 0.5)

const ball3 = new Sphere(new Vec3(-1, 0, -1), 0.5)

const earth = new Sphere(new Vec3(0, -100.5, -1), 100)



const world = new HitList(ball1, ball2, ball3, earth)



// ...





function color(_x: number, _y: number) {
  // ...

  // const hit = ball.hit(r, 0, Infinity)
  const hit = world.hit(r, 0, Infinity)

  // ...
}

The reflection of light

Because we saved the normal direction of the collision in the HitRecord, the direction of the reflected light can be calculated from the incident direction and the normal: find the component of the incident ray along the normal, then subtract twice that component from the incident ray to get the reflected ray.

export default class Ray {

  // ...



  reflect(hit: HitRecord) {

    return new Ray(hit.p, reflect(this.direction.unitVec(), hit.normal))

  }

}



// Calculate the reflected light

// First calculate the component vector of the incident ray along the normal direction, then subtract two times the value of the component vector from the incident ray to get the reflected light direction

function reflect(v: Vec3, n: Vec3) {
  return v.sub(n.mul(Vec3.dot(v, n) * 2))
}

Next, we need to modify every place that produces a HitRecord so that it also returns the reflected Ray. We don't put the reflected ray inside the HitRecord; as you can see from the parameter of the reflect method on Ray, the reflected ray is calculated from the HitRecord. So let's get started.

  • First, the HitableInterface, since that is where the hit method's signature is defined
// hit: (ray: Ray, t_min: number, t_max: number) => HitRecord | undefined

hit: (ray: Ray, t_min: number, t_max: number) => readonly [HitRecord, Ray] | undefined
  • And then there are the two implementations

    • Ball (sphere.ts)
// return hit

return [hit, ray.reflect(hit)] as const
    • Scene group (hitList.ts)
// _hit.t

_hit[0].t

Ray tracing

Finally, now that we have the information about the incoming and outgoing rays at each point, we can follow the path of an entire ray given just a starting point, and then trace that path.

First, separate out of the color function the part that computes a color from a ray:

function color(_x: number, _y: number) {

    const [x, y] = [_x, 1 - _y]



    const r = camera.getRay(x, y)



    const res = trace(world, r)



    return [res.e0, res.e1, res.e2]

}

In the trace function we take the reflected ray from the hit result and feed it back into trace recursively. To prevent infinite recursion, a step parameter is added; once the number of reflections exceeds 50, trace returns black directly.

function trace(scene: HitableInterface, r: Ray, step = 0): Vec3 {
  // Cap the recursion depth: after 50 bounces, treat the ray as absorbed
  if (step > 50) return new Vec3(0, 0, 0)

  // Start slightly above 0 to avoid re-hitting the surface we just left
  const hit = scene.hit(r, 0.0000001, Infinity)

  let res: Vec3

  if (hit) {
    // Follow the reflected ray, attenuating the result by half per bounce
    res = trace(scene, hit[1], ++step).mul(0.5)
  } else {
    // Set the background color
    const unitDirection = r.direction.unitVec(),
      t = (unitDirection.e1 + 1.0) * 0.5

    res = Vec3.add(new Vec3(1, 1, 1).mul(1 - t), new Vec3(0.3, 0.5, 1).mul(t))
  }
  return res
}

We have now implemented a simple path tracer, but the roughly 18 lines of code above leave a lot of room for improvement:

  • When traversing the scene, we test every object for intersection, even objects that cannot possibly be seen (for example, those behind the camera). One common optimization is to partition space into bounding boxes and only test the objects inside a box that the ray actually enters.
  • When we handle the attenuation of reflected light, we treat all wavelengths the same way, so there is no color, no texture, nothing like that.
  • Our termination condition is also very rough: we simply declare a ray black after 50 bounces. A more principled approach terminates randomly, but that introduces another problem: because the number of bounces then varies from pixel to pixel, the image becomes noisy, and we need some method to remove that noise.
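For the first point, the usual structure is an axis-aligned bounding box (AABB) tested with the slab method: intersect the ray with each pair of axis planes and check that the three t intervals overlap. Only if the cheap box test passes do you run the more expensive sphere test. A hedged sketch, with illustrative names rather than anything from the code above:

```typescript
// Slab method: intersect the ray with the box's x/y/z slabs and
// check that the three [tMin, tMax] intervals still overlap.
function hitAABB(
  origin: number[], direction: number[],
  boxMin: number[], boxMax: number[],
  tMin: number, tMax: number
): boolean {
  for (let i = 0; i < 3; i++) {
    const invD = 1 / direction[i]
    let t0 = (boxMin[i] - origin[i]) * invD
    let t1 = (boxMax[i] - origin[i]) * invD
    if (invD < 0) [t0, t1] = [t1, t0] // ray travels backwards along this axis
    tMin = Math.max(t0, tMin)
    tMax = Math.min(t1, tMax)
    if (tMax <= tMin) return false // intervals no longer overlap: miss
  }
  return true
}

// A box around the demo ball at (0, 0, -1) with radius 0.5:
console.log(hitAABB([0, 0, 0], [0, 0, -1], [-0.5, -0.5, -1.5], [0.5, 0.5, -0.5], 0, Infinity)) // true
```

A bounding volume hierarchy (BVH) nests such boxes so that a single failed box test skips whole groups of objects at once.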

Although there are many drawbacks, the above content sets up a ray-tracing framework.

When I started writing, I felt this content had little to do with the front end. Although some front-end technologies were used, such as Canvas and Web Workers, the same thing could be implemented in other environments, and it is a far cry from our usual 2D front-end rendering, or even WebGL. But I believe that, whatever you write, programming is algorithms plus data structures: optimization methods that skip computing invisible things, using classes to describe real-world objects, and so on. So I hope this gives you not just ray tracing, but some programming ideas.

reference

Sites.cs.ucsb.edu/~lingqi/tea…

Raytracing. Making. IO/books/RayTr…

zhuanlan.zhihu.com/p/42218384

❤️ Thank you

That is all the content of this sharing. I hope it helps you.

Don't forget to share, like, and bookmark the things you find useful.

Welcome to follow the ELab team's official account to receive our articles ~

We are from the front-end department of ByteDance, responsible for the front-end development of all ByteDance education products.

We focus on improving product quality, development efficiency, creativity, and cutting-edge technology, accumulating and sharing professional knowledge and cases to contribute experience to the industry, including but not limited to performance monitoring, component libraries, multi-platform technology, Serverless, visual page building, audio and video, artificial intelligence, and product design and marketing.

You are welcome to post in the comments section 🤪