Translator's note: This article is about using GLSL for dynamic interaction on the web. The quality of the original is quite high, and this translation is provided for reference.

Original link: tympanus.net/codrops/201…

Learn how to use noise to create sticky hover effects in shaders.

View the online demo or download the source code

WebGL, the alternative to Flash, has become increasingly popular in recent years with libraries like Three.js, pixi.js, and ogl.js. They are very useful for creating blank canvases where the only limit is your imagination. More and more, we're seeing WebGL used for effects that are subtly integrated into an interface: hover, scroll, or reveal effects. Think of Hello Monday or Cobosrl.co.

In this tutorial, we will use three.js to create a special sticky texture that will be used to display another image on hover. You can click on the demo link right now to see it in action! For the demo itself, I created a more realistic example that shows a horizontal scrollable layout with images, where each image has a different effect. You can click on an image and it will transform to a larger version while displaying some other (mock) content. We'll take you through the most interesting parts of the effect so you can see how it works and create more effects yourself!

I assume you have some familiarity with JavaScript, three.js, and shaders. If you don't, take a look at the Three.js documentation, The Book of Shaders, Three.js Fundamentals, or Discover three.js.

**Note:** This tutorial covers many sections. If you wish, you can skip the HTML/CSS/JavaScript section and go straight to the shader section.

Create scenes in the DOM

Before we can create anything interesting, we need to insert images into our HTML. Setting the initial position and size in HTML/CSS makes it easier to handle the scene dimensions than computing everything in JavaScript. Also, styling should live only in CSS, not in JavaScript. For example, if our image has a 16:9 ratio on desktop and a 4:3 ratio on mobile, we should handle that purely in CSS. JavaScript will only be used to read the updated data.

// index.html

<section class="container">
	<article class="tile">
		<figure class="tile__figure">
			<img data-src="path/to/my/image.jpg" data-hover="path/to/my/hover-image.jpg" class="tile__image" alt="My image" width="400" height="300" />
		</figure>
	</article>
</section>

<canvas id="stage"></canvas>
// style.css

.container {
	display: flex;
	align-items: center;
	justify-content: center;
	width: 100%;
	height: 100vh;
	z-index: 10;
}

.tile {
	width: 35vw;
	flex: 0 0 auto;
}

.tile__image {
	width: 100%;
	height: 100%;
	object-fit: cover;
	object-position: center;
}

canvas {
	position: fixed;
	left: 0;
	top: 0;
	width: 100%;
	height: 100vh;
	z-index: 9;
}

As you can see above, we have created an image in the center of the screen. Later we will use the data-src and data-hover attributes to lazily load both images in our script.

Create scenes in JavaScript

Let’s start with the easy (but essential) part! First, we’ll create the scene, the lights, and the renderer.

// Scene.js

import * as THREE from 'three'

export default class Scene {
	constructor() {
		this.container = document.getElementById('stage')

		this.scene = new THREE.Scene()
		this.renderer = new THREE.WebGLRenderer({
			canvas: this.container,
			alpha: true,
		})
		this.renderer.setSize(window.innerWidth, window.innerHeight)
		this.renderer.setPixelRatio(window.devicePixelRatio)

		this.initLights()
	}

	initLights() {
		const ambientlight = new THREE.AmbientLight(0xffffff, 2)
		this.scene.add(ambientlight)
	}
}

This is a very basic scene. But we still need one essential element: the camera. We have two kinds of cameras to choose from: orthographic or perspective. If we wanted the image to keep its exact shape, we would choose the first. But for the rotation effect, we want some perspective as we move the mouse.

In Three.js (or any other WebGL library) with a perspective camera, 10 units in the scene do not equal 10px on the screen. So the trick here is to use some math to map 1 unit to 1px, and to change the perspective value to increase or decrease the distortion effect.

// Scene.js

const perspective = 800

constructor() {
	// ...
	this.initCamera()
}

initCamera() {
	const fov = (180 * (2 * Math.atan(window.innerHeight / 2 / perspective))) / Math.PI

	this.camera = new THREE.PerspectiveCamera(fov, window.innerWidth / window.innerHeight, 1, 1000)
	this.camera.position.set(0, 0, perspective)
}

We set the perspective value to 800 so that it doesn’t distort too much as we rotate the plane. The more perspective we add, the less distortion we perceive, and vice versa. Then, the last thing we need to do is render the scene in each frame.
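The field-of-view formula above can be sanity-checked in plain JavaScript (no three.js required): with this fov, a camera placed `perspective` units away sees exactly `window.innerHeight` units of height, so 1 unit maps to 1px. The helper names below are mine, not from the original code.

```javascript
// Vertical fov (in degrees) such that `height` units fill the screen
// at a camera distance of `perspective` units. Same math as initCamera().
function fovFor(height, perspective) {
  return (180 * (2 * Math.atan(height / 2 / perspective))) / Math.PI
}

// Visible height at `distance` units for a camera with a given vertical fov:
// visibleHeight = 2 * distance * tan(fov / 2)
function visibleHeightAt(fovDeg, distance) {
  return 2 * distance * Math.tan((fovDeg * Math.PI) / 180 / 2)
}

const perspective = 800
const viewportHeight = 1080
const fov = fovFor(viewportHeight, perspective)
// The plane at z = 0 (camera at z = perspective) spans exactly the viewport height:
console.log(visibleHeightAt(fov, perspective)) // 1080 (up to float precision)
```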

// Scene.js

constructor() {
	// ...
	this.update()
}

update() {
	requestAnimationFrame(this.update.bind(this))
	
	this.renderer.render(this.scene, this.camera)
}

If your screen isn’t black, you’re on the right track!

Create the plane with the correct dimensions

As mentioned above, we have to retrieve some additional information from the image in the DOM, such as its size and position on the page.

// Scene.js

import Figure from './Figure'

constructor() {
	// ...
	this.figure = new Figure(this.scene)
}
// Figure.js

export default class Figure {
	constructor(scene) {
		this.$image = document.querySelector('.tile__image')
		this.scene = scene

		this.loader = new THREE.TextureLoader()

		this.image = this.loader.load(this.$image.dataset.src)
		this.hoverImage = this.loader.load(this.$image.dataset.hover)
		this.sizes = new THREE.Vector2(0, 0)
		this.offset = new THREE.Vector2(0, 0)

		this.getSizes()

		this.createMesh()
	}
}

First, we create another class and pass the scene to it as a property. We set up two new vectors, sizes and offset, to store the dimensions and position of the DOM image.

In addition, we will use TextureLoader to “load” the image and convert it to a texture. We need to do this because we want to use these images in our shaders.

We need to create a method in our class that handles the loading of the image and waits for the callback. We could do this using asynchronous functionality, but for this tutorial we’ll keep it simple. Keep in mind that you may need to refactor it a bit for your own purposes.

// Figure.js

// ...
	getSizes() {
		const { width, height, top, left } = this.$image.getBoundingClientRect()

		this.sizes.set(width, height)
		this.offset.set(left - window.innerWidth / 2 + width / 2, -(top - window.innerHeight / 2 + height / 2))
	}
// ...

We get the image's dimensions and position from getBoundingClientRect and pass them to the two vectors. The offset here is the distance between the center of the screen and the center of the object on the page. (Translator's note.)
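To make the coordinate conversion concrete, here is the same math as getSizes() extracted into a standalone function (the function name and sample numbers are mine): the DOM uses a top-left origin with y pointing down, while the scene uses a centered origin with y pointing up.

```javascript
// Convert a DOM rect (origin top-left, y down) to scene coordinates
// (origin at screen centre, y up). Same formula as getSizes() above.
function domRectToSceneOffset(rect, viewportWidth, viewportHeight) {
  return {
    x: rect.left - viewportWidth / 2 + rect.width / 2,
    y: -(rect.top - viewportHeight / 2 + rect.height / 2),
  }
}

// Example: a 400×300 element perfectly centred in a 1920×1080 viewport
// lands at the scene origin.
const centered = domRectToSceneOffset(
  { left: 760, top: 390, width: 400, height: 300 }, 1920, 1080
)
console.log(centered) // → { x: 0, y: 0 }
```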

// Figure.js

// ...
	createMesh() {
		this.geometry = new THREE.PlaneBufferGeometry(1, 1, 1, 1)
		this.material = new THREE.MeshBasicMaterial({
			map: this.image
		})

		this.mesh = new THREE.Mesh(this.geometry, this.material)

		this.mesh.position.set(this.offset.x, this.offset.y, 0)
		this.mesh.scale.set(this.sizes.x, this.sizes.y, 1)

		this.scene.add(this.mesh)
	}
// ...

After that, we set the plane's values. As you can see, we have created a 1×1 plane with 1 width segment and 1 height segment. Since we don't want to distort the plane, we don't need many faces or vertices. So let's keep it simple.

Why do we scale the mesh instead of setting its size directly?

Mainly because it makes resizing the mesh easier later on: if we want to change its size, updating the scale is all it takes. (This is a clever trick: create the geometry at 1×1 and use the scale API to give the mesh its actual size, so the scale values are equal to the real width and height.)

So far, we have the MeshBasicMaterial set up, and everything looks fine.

Get mouse coordinates

Now that we have built the scene and the mesh, we want to get the mouse coordinates, and to keep things simple, we normalize them. Why normalize? Because of the coordinate system used in shaders.

As shown in the figure above, we have normalized the values of the two shaders. For simplicity, we will convert the mouse coordinates to match the vertex shader coordinates.

If you’re having trouble understanding the Fundamentals here, I suggest you take a look at the various chapters in the Book of Shaders and Three.js Fundamentals. Both have good advice and plenty of examples to help you understand.

// Figure.js

// ...

this.mouse = new THREE.Vector2(0, 0)
window.addEventListener('mousemove', (ev) => { this.onMouseMove(ev) })

// ...

onMouseMove(event) {
	TweenMax.to(this.mouse, 0.5, {
		x: (event.clientX / window.innerWidth) * 2 - 1,
		y: -(event.clientY / window.innerHeight) * 2 + 1,
	})

	TweenMax.to(this.mesh.rotation, 0.5, {
		x: -this.mouse.y * 0.3,
		y: this.mouse.x * (Math.PI / 6),
	})
}

For the tween part, I will use GreenSock’s TweenMax. This is the best library ever. And it’s perfect for what we’re trying to do. We don’t have to deal with transitions between the two states, TweenMax does it for us. TweenMax smoothly updates position coordinates and rotation angles every time you move the mouse.
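If you would rather avoid the GSAP dependency, a per-frame linear interpolation gives a similar smoothing. This is my own sketch, not GSAP's actual easing, and the names are hypothetical:

```javascript
// Ease `current` toward `target` by a constant factor each frame
// (exponential smoothing, a common poor man's tween).
function lerp(current, target, factor) {
  return current + (target - current) * factor
}

// Usage sketch: call once per animation frame instead of TweenMax.to().
let mouseX = 0
const targetX = 1
for (let i = 0; i < 60; i++) mouseX = lerp(mouseX, targetX, 0.1)
console.log(mouseX.toFixed(3)) // very close to 1 after 60 frames
```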

One more thing before we proceed to the next step: We update the material from MeshBasicMaterial to ShaderMaterial, passing in some values (uniform values) and shader code.

// Figure.js

// ...

this.uniforms = {
	u_image: { type: 't', value: this.image },
	u_imagehover: { type: 't', value: this.hoverImage },
	u_mouse: { value: this.mouse },
	u_time: { value: 0 },
	u_res: { value: new THREE.Vector2(window.innerWidth, window.innerHeight) }
}

this.material = new THREE.ShaderMaterial({
	uniforms: this.uniforms,
	vertexShader: vertexShader,
	fragmentShader: fragmentShader
})

update() {
	this.uniforms.u_time.value += 0.01
}

We pass in two textures, along with the mouse position, screen size, and a variable called u_time that increments with each frame.

But remember, this is not the best approach. We only need to increment when we hover over the graph, not every frame. For performance reasons, it is best to update shaders only when needed.
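One possible way to do that (my sketch, not from the original demo) is to advance u_time only while the pointer is over the figure:

```javascript
// Advance the clock only while hovering, so the shader animates
// only when the effect is actually visible.
function makeTicker(uniforms) {
  let hovering = false
  return {
    setHover(v) { hovering = v },
    tick() { if (hovering) uniforms.u_time.value += 0.01 },
  }
}

const uniforms = { u_time: { value: 0 } }
const ticker = makeTicker(uniforms)
ticker.tick()         // not hovering: time stays at 0
ticker.setHover(true) // e.g. from a 'mouseenter' listener
ticker.tick()
ticker.tick()         // two hovered frames
console.log(uniforms.u_time.value) // → 0.02
```

In the real Figure class, setHover(true/false) would be wired to mouseenter/mouseleave on the image element.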

The principle behind the technique and how to use noise

I won’t explain what noise is or where it comes from. If you’re interested, explore the relevant chapter in The Book of Shaders, which explains it well.

To make a long story short, noise is a function that returns a value between -1 and 1 based on the values you pass in. Its output is random but correlated: nearby inputs produce nearby outputs.

Thanks to noise, we can generate many different shapes, such as maps, random patterns, etc.
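The Book of Shaders covers the real algorithms; just for intuition, here is a tiny 1D value-noise sketch in JavaScript (a hash plus smooth interpolation, not the simplex noise used later, but it shows the same "random but related" idea):

```javascript
// Deterministic pseudo-random value in [-1, 1) for an integer lattice point
// (the classic sin-based GLSL hash, transplanted to JS).
function hash(i) {
  const s = Math.sin(i * 127.1) * 43758.5453
  return (s - Math.floor(s)) * 2 - 1
}

// 1D value noise: smoothly interpolate between neighbouring lattice values.
function noise1d(x) {
  const i = Math.floor(x)
  const f = x - i
  const t = f * f * (3 - 2 * f) // Hermite smoothing, like GLSL smoothstep
  return hash(i) * (1 - t) + hash(i + 1) * t
}

// Nearby inputs give nearby outputs, i.e. "random but related":
console.log(Math.abs(noise1d(2.50) - noise1d(2.51)) < 0.1) // → true
```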

Let’s start with 2D noise. We can get a cloud-like texture just by passing the coordinates of the texture.

But there are actually several kinds of noise functions. Let's use 3D noise and pass, as the extra parameter... time! The noise pattern will then change over time. By changing the frequency and amplitude, we can vary the result and increase the contrast.

Next, we will create a circle. It's easy to build simple shapes like circles in a fragment shader. We just take the circle function from The Book of Shaders: Shapes to create a blurred circle, increase the contrast, and voilà!

Finally, we add these two together and, using some variables, let it “slice” the texture:

Isn’t the result exciting? Let’s dive into the code!

The shaders

We don’t really need to do anything special in the vertex shader. Here’s our code:

 // vertexShader.glsl
varying vec2 v_uv;

void main() {
	v_uv = uv;

	gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}


The ShaderMaterial of three.js provides some useful default variables for beginners:

  • position (vec3): the coordinates of each vertex of the mesh
  • uv (vec2): the texture coordinates
  • normal (vec3): the normal of each vertex of the mesh

Here, we simply pass the UV coordinates from the vertex shader to the fragment shader.

Create a circle

Let’s use the circle function from The Book of Shaders to build our circle, and add a variable to control the blurriness of its edges.

In addition, we will use mouse position to synchronize the center coordinates. This way, as soon as we move the mouse over the image, the circle moves with the mouse.

// fragmentShader.glsl
uniform vec2 u_mouse;
uniform vec2 u_res;

float circle(in vec2 _st, in float _radius, in float blurriness){
	vec2 dist = _st;
	return 1.-smoothstep(_radius-(_radius*blurriness), _radius+(_radius*blurriness), dot(dist,dist)*4.0);
}

void main() {
	vec2 st = gl_FragCoord.xy / u_res.xy - vec2(1.);
	// tip: use the following formula to keep the good ratio of your coordinates
	st.y *= u_res.y / u_res.x;

	vec2 mouse = u_mouse;
	// tip2: do the same for your mouse
	mouse.y *= u_res.y / u_res.x;
	mouse *= 1.;
	
	vec2 circlePos = st + mouse;
	float c = circle(circlePos, .03, 2.);

	gl_FragColor = vec4(vec3(c), 1.);
}

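To build intuition for what circle() returns, here is a line-by-line JavaScript transcription of it (a hypothetical port for illustration, using GLSL's smoothstep semantics):

```javascript
// GLSL smoothstep: clamped Hermite interpolation between two edges.
function smoothstep(edge0, edge1, x) {
  const t = Math.min(Math.max((x - edge0) / (edge1 - edge0), 0), 1)
  return t * t * (3 - 2 * t)
}

// Same shape as the GLSL circle(): bright near the centre, falling to 0
// around the radius, with `blurriness` widening the soft edge.
function circle(x, y, radius, blurriness) {
  const d = x * x + y * y // dot(dist, dist)
  return 1 - smoothstep(radius - radius * blurriness,
                        radius + radius * blurriness,
                        d * 4)
}

console.log(circle(0, 0, 0.03, 2)) // bright at the centre (close to 1)
console.log(circle(1, 1, 0.03, 2)) // → 0 far away
```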

Create some noise

As we saw above, the noise function takes multiple parameters and generates realistic cloud patterns for us. So how do we get this?

For this section, I’ll use glslify and glsl-noise, two npm packages that let us include external noise functions. glslify makes our shaders more readable and hides noise algorithms that we don’t need to write ourselves.

// fragmentShader.glsl
#pragma glslify: snoise2 = require('glsl-noise/simplex/2d')

// ...

varying vec2 v_uv;

uniform float u_time;

void main() {
	// ...

	float n = snoise2(vec2(v_uv.x, v_uv.y));

	gl_FragColor = vec4(vec3(n), 1.);
}


By changing the amplitude and frequency of the noise (as you would with sin/cos functions), we can change the rendering.

// fragmentShader.glsl

float offx = v_uv.x + sin(v_uv.y + u_time * 1.);
float offy = v_uv.y - u_time * 0.1 - cos(u_time * .001) * .01;

float n = snoise2(vec2(offx, offy) * 5.) * 1.;


But the noise isn’t animated over time! It’s only distorted. We want it to flow. So we’ll use 3D noise instead and pass a third parameter: time.

#pragma glslify: snoise3 = require('glsl-noise/simplex/3d')

float n = snoise3(vec3(offx, offy, u_time * 1.) * 4.) * .5;


Merge the textures

Just by stacking them together, we can see interesting shapes that change over time.

To explain the principle behind this, think of the noise as a wave that oscillates between -1 and 1. But our screen can’t display negative values, or values above 1 (pure white), so we only see the part between 0 and 1.

Our circle looks like this:

Approximate result after addition:

The pure white pixels are values that lie above the visible range.

If we scale down the noise and subtract a small amount from it, the wave gradually moves downward until it disappears below the visible range of colors.

float n = snoise3(vec3(offx, offy, u_time * 1.) * 4.) - 1.;


Our circle is still there, just less visible. If we multiply its value, the contrast increases.

float c = circle(circlePos, 0.3, 0.3) * 2.5;


We almost have what we want! But as you can see, some details are still missing, and our edges aren’t sharp at all.

To solve this problem, we’ll use the built-in smoothstep function.

float finalMask = smoothstep(0.4, 0.5, n + c);

gl_FragColor = vec4(vec3(finalMask), 1.);

With this function, we clip the pattern within the 0.4 to 0.5 range. The narrower the interval between these two values, the sharper the edge.
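The clipping behavior is easy to verify numerically. Here's a JavaScript version of GLSL's smoothstep (same clamped Hermite formula) applied to the 0.4 to 0.5 window used above:

```javascript
// GLSL smoothstep: 0 below edge0, 1 above edge1, smooth Hermite in between.
function smoothstep(edge0, edge1, x) {
  const t = Math.min(Math.max((x - edge0) / (edge1 - edge0), 0), 1)
  return t * t * (3 - 2 * t)
}

// Everything below 0.4 is cut to black, everything above 0.5 is full white,
// and only the thin 0.4-0.5 band transitions, hence the sharp gooey edge.
console.log(smoothstep(0.4, 0.5, 0.30)) // → 0
console.log(smoothstep(0.4, 0.5, 0.45)) // ≈ 0.5 (midpoint of the band)
console.log(smoothstep(0.4, 0.5, 0.60)) // → 1
```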

Finally, we can mix the two textures as masks.

uniform sampler2D u_image;
uniform sampler2D u_imagehover;

// ...

vec4 image = texture2D(u_image, v_uv);
vec4 hover = texture2D(u_imagehover, v_uv);

vec4 finalImage = mix(image, hover, finalMask);

gl_FragColor = finalImage;
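GLSL's mix() is plain linear interpolation, applied per color channel. In JavaScript terms (illustrative sketch, scalar instead of vec4):

```javascript
// mix(a, b, t) = a * (1 - t) + b * t, applied per colour channel in GLSL.
function mix(a, b, t) {
  return a * (1 - t) + b * t
}

// finalMask = 0 shows the base image, finalMask = 1 shows the hover image,
// and the thin smoothstep'ed band in between blends the two.
console.log(mix(10, 20, 0))   // → 10
console.log(mix(10, 20, 1))   // → 20
console.log(mix(10, 20, 0.5)) // → 15
```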

We can change a few variables to produce a stronger stickiness effect:

// ...

float c = circle(circlePos, 0.3, 2.) * 2.5;

float n = snoise3(vec3(offx, offy, u_time * 1.) * 8.) - 1.;

float finalMask = smoothstep(0.4, 0.5, n + pow(c, 2.));

// ...

The full source code can be found here

Final words

I’m glad you read this far. This tutorial isn’t perfect, and I may have glossed over some details, but I hope you enjoyed it anyway. From here, feel free to play with the variables, experiment with other noise functions, and use your imagination with mouse direction or scrolling to create other effects!

References and thanks

  • Images from Unsplash
  • Three.js
  • GSAP from GreenSock
  • Smooth Scrollbar
  • glslify
  • glsl-noise