Contents

  1. The implementation principle of the skybox
  2. Specific code implementation
  3. References
  4. Takeaways

The final effect looks like this:

Today we learn and practice the skybox. The technique itself is relatively simple, but it enables many effects, such as sky, mountains, sea, and VR house viewing. It can move dynamically as a background, or rotate and transform following gestures or sensors.

One, cube maps and the skybox

A skybox is a cube whose six faces are each covered with a texture map.

The effect of the skybox is exactly what the opening animation shows: as the view rotates, different parts of the scene come into view. Imagine standing at the center of a large cube made of six faces (top and bottom, left and right, front and back); as we rotate our viewpoint, we see a different picture on each side.

We can realize this idea with a cube map.

In actual rendering, the cube is always placed around the camera so that the camera sits at its center, and the point where the line of sight intersects the cube determines which face to sample. The mapping works as follows: let the intersection point be (x, y, z); take the component with the largest absolute value, and its sign determines which face to sample.

Then divide the other two components by the absolute value of the largest one, which maps them into [-1, 1]; remapping that range to [0, 1] gives the texture coordinates on the chosen face. This technique is called cube mapping, and it is the core of the skybox.
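To make this concrete, here is a small Kotlin sketch of the face selection and coordinate mapping described above. It is illustration only: the GPU does this for us when we sample a samplerCube, and the exact per-face u/v orientation defined by the OpenGL spec involves extra sign flips that are simplified here.

    import kotlin.math.abs

    // Pick the cube-map face and (u, v) for a view direction (x, y, z).
    fun cubeMapLookup(x: Float, y: Float, z: Float): Triple<String, Float, Float> {
        val ax = abs(x)
        val ay = abs(y)
        val az = abs(z)
        return when {
            ax >= ay && ax >= az -> {
                // X face: divide the other two components by |x| -> [-1, 1], then remap to [0, 1]
                Triple(if (x > 0) "+X" else "-X", (z / ax + 1f) / 2f, (y / ax + 1f) / 2f)
            }
            ay >= az -> {
                Triple(if (y > 0) "+Y" else "-Y", (x / ay + 1f) / 2f, (z / ay + 1f) / 2f)
            }
            else -> {
                Triple(if (z > 0) "+Z" else "-Z", (x / az + 1f) / 2f, (y / az + 1f) / 2f)
            }
        }
    }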

Creating and binding a cube-map texture follows the same process as a 2D texture:

    GLES20.glActiveTexture(GLES20.GL_TEXTURE0)
    GLES20.glBindTexture(GLES20.GL_TEXTURE_CUBE_MAP, skyBoxTexture)
    GLES20.glUniform1i(uTextureLoc, 0)

The cube-map texture is loaded as follows:

    // In TextureHelper; GLES20 and GLUtils methods are statically imported
    public static int loadCubeMap(Context context, int[] cubeResources) {
        final int[] textureObjectIds = new int[1];
        glGenTextures(1, textureObjectIds, 0);
        if (textureObjectIds[0] == 0) {
            Log.w(TAG, "Could not generate a new OpenGL texture object.");
            return 0;
        }

        final BitmapFactory.Options options = new BitmapFactory.Options();
        options.inScaled = false;
        final Bitmap[] cubeBitmaps = new Bitmap[6];
        for (int i = 0; i < 6; i++) {
            cubeBitmaps[i] = BitmapFactory.decodeResource(context.getResources(), cubeResources[i], options);
            if (cubeBitmaps[i] == null) {
                Log.w(TAG, "Resource ID " + cubeResources[i] + " could not be decoded.");
                glDeleteTextures(1, textureObjectIds, 0);
                return 0;
            }
        }

        // Note that the target is GL_TEXTURE_CUBE_MAP, not GL_TEXTURE_2D
        glBindTexture(GL_TEXTURE_CUBE_MAP, textureObjectIds[0]);
        // Linear filtering for minification and magnification
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        // Upload one bitmap per cube face
        texImage2D(GL_TEXTURE_CUBE_MAP_NEGATIVE_X, 0, cubeBitmaps[0], 0);
        texImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X, 0, cubeBitmaps[1], 0);
        texImage2D(GL_TEXTURE_CUBE_MAP_NEGATIVE_Y, 0, cubeBitmaps[2], 0);
        texImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_Y, 0, cubeBitmaps[3], 0);
        texImage2D(GL_TEXTURE_CUBE_MAP_NEGATIVE_Z, 0, cubeBitmaps[4], 0);
        texImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_Z, 0, cubeBitmaps[5], 0);

        glBindTexture(GL_TEXTURE_CUBE_MAP, 0);

        for (Bitmap bitmap : cubeBitmaps) {
            bitmap.recycle();
        }
        return textureObjectIds[0];
    }

OpenGL provides six special texture targets, one for each face of the cube map:

  GL_TEXTURE_CUBE_MAP_POSITIVE_X: right
  GL_TEXTURE_CUBE_MAP_NEGATIVE_X: left
  GL_TEXTURE_CUBE_MAP_POSITIVE_Y: top
  GL_TEXTURE_CUBE_MAP_NEGATIVE_Y: bottom
  GL_TEXTURE_CUBE_MAP_POSITIVE_Z: back
  GL_TEXTURE_CUBE_MAP_NEGATIVE_Z: front
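As a compact alternative to the six explicit calls in loadCubeMap, the per-face upload can also be written as a loop over these targets. A minimal Kotlin sketch (the helper name is mine, and the face order is assumed to match the code above):

    import android.graphics.Bitmap
    import android.opengl.GLES20
    import android.opengl.GLUtils

    // Upload six decoded bitmaps to the currently bound cube-map texture in a loop.
    // The order of `bitmaps` is assumed to be: -X, +X, -Y, +Y, -Z, +Z.
    fun uploadCubeFaces(bitmaps: Array<Bitmap>) {
        val faceTargets = intArrayOf(
            GLES20.GL_TEXTURE_CUBE_MAP_NEGATIVE_X, GLES20.GL_TEXTURE_CUBE_MAP_POSITIVE_X,
            GLES20.GL_TEXTURE_CUBE_MAP_NEGATIVE_Y, GLES20.GL_TEXTURE_CUBE_MAP_POSITIVE_Y,
            GLES20.GL_TEXTURE_CUBE_MAP_NEGATIVE_Z, GLES20.GL_TEXTURE_CUBE_MAP_POSITIVE_Z
        )
        for (i in 0 until 6) {
            GLUtils.texImage2D(faceTargets[i], 0, bitmaps[i], 0)
        }
    }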

On the shader side, the cube texture is sampled with a samplerCube:

    // Sample the cube texture, using the vertex position as the direction vector
    precision mediump float;
    uniform samplerCube uTexture;
    varying vec3 vPosition;
    void main() {
        gl_FragColor = textureCube(uTexture, vPosition);
    }

Two, specific code implementation

From the section above we know that the principle of the skybox is relatively simple. Now let's implement it in code.

First, write the shader code

    uniform mat4 uMatrix;
    attribute vec3 aPosition;
    varying vec3 vPosition;
    void main() {
        vPosition = aPosition;
        gl_Position = uMatrix * vec4(aPosition, 1.0);
        // Note: force z = w so that z becomes 1 after perspective division (the far plane)
        gl_Position = gl_Position.xyww;
    }

After the projection transformation, perspective division is performed: every component of the homogeneous (4-component) vector is divided by its w component, so that x, y and z of points inside the view frustum are mapped to [-1, 1]; vertices outside that range can be removed directly. If we set z = w, then z = 1 after perspective division, which means the skybox always lies on the far plane.
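A tiny sketch of that division (a hypothetical helper, just to show the arithmetic):

    // The perspective division the GPU applies to gl_Position = (x, y, z, w).
    fun perspectiveDivide(clip: FloatArray): FloatArray {
        val w = clip[3]
        return floatArrayOf(clip[0] / w, clip[1] / w, clip[2] / w)
    }

    // With the .xyww trick, z == w, so the third NDC component is always 1.0 (the far plane):
    // perspectiveDivide(floatArrayOf(2f, -1f, 5f, 5f)) -> [0.4, -0.2, 1.0]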

The fragment shader is the same as shown earlier:

    // Sample the cube texture, using the vertex position as the direction vector
    precision mediump float;
    uniform samplerCube uTexture;
    varying vec3 vPosition;
    void main() {
        gl_FragColor = textureCube(uTexture, vPosition);
    }

Next, let's focus on the implementation of the Renderer:

    package com.av.mediajourney.skybox

    import android.content.Context
    import android.opengl.GLES20
    import android.opengl.GLSurfaceView
    import android.opengl.Matrix
    import com.av.mediajourney.R
    import com.av.mediajourney.opengl.ShaderHelper
    import com.av.mediajourney.particles.android.util.TextureHelper
    import javax.microedition.khronos.egl.EGLConfig
    import javax.microedition.khronos.opengles.GL10

    class SkyBoxRender(var context: Context) : GLSurfaceView.Renderer {

        lateinit var skyBox: SkyBox
        var mProgram = -1

        private val projectionMatrix = FloatArray(16)
        private val viewMatrix = FloatArray(16)
        private val viewProjectionMatrix = FloatArray(16)

        private var aPositionLoc = -1
        private var uMatrixLoc = -1
        private var uTextureLoc = -1
        private var skyBoxTexture = -1

        override fun onSurfaceCreated(gl: GL10?, config: EGLConfig?) {
            GLES20.glClearColor(0f, 0f, 0f, 1f)
            skyBox = SkyBox()
            // Load and compile the shaders, then look up attribute/uniform locations
            val vertexStr = ShaderHelper.loadAsset(context.resources, "sky_box_vertex.glsl")
            val fragStr = ShaderHelper.loadAsset(context.resources, "sky_box_fragment.glsl")
            mProgram = ShaderHelper.loadProgram(vertexStr, fragStr)
            aPositionLoc = GLES20.glGetAttribLocation(mProgram, "aPosition")
            uMatrixLoc = GLES20.glGetUniformLocation(mProgram, "uMatrix")
            uTextureLoc = GLES20.glGetUniformLocation(mProgram, "uTexture")
            // Load the six faces in the order expected by loadCubeMap: -X, +X, -Y, +Y, -Z, +Z
            skyBoxTexture = TextureHelper.loadCubeMap(
                context,
                intArrayOf(
                    R.drawable.left2, R.drawable.right2,
                    R.drawable.bottom2, R.drawable.top2,
                    R.drawable.front2, R.drawable.back2
                )
            )
        }

        override fun onSurfaceChanged(gl: GL10?, width: Int, height: Int) {
            GLES20.glViewport(0, 0, width, height)
            val whRadio = width / (height * 1.0f)
            Matrix.setIdentityM(projectionMatrix, 0)
            Matrix.perspectiveM(projectionMatrix, 0, 105f, whRadio, 1f, 10f)
        }

        var frameIndex: Int = 0

        override fun onDrawFrame(gl: GL10?) {
            GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT)
            GLES20.glClearColor(0f, 0f, 0f, 1f)

            // Combine the automatic rotation (driven by the frame index) with the gesture rotation
            val xRotationAuto = (frameIndex % 360).toFloat()
            val xRotationT = xRotationAuto + xRotation
            frameIndex++

            // Build the view matrix by rotating around the X axis (vertical swipes)
            // and the Y axis (horizontal swipes plus auto rotation)
            Matrix.setIdentityM(viewMatrix, 0)
            Matrix.rotateM(viewMatrix, 0, -yRotation, 1f, 0f, 0f)
            Matrix.rotateM(viewMatrix, 0, xRotationT, 0f, 1f, 0f)
            Matrix.multiplyMM(viewProjectionMatrix, 0, projectionMatrix, 0, viewMatrix, 0)

            GLES20.glUseProgram(mProgram)
            // Upload the MVP matrix
            GLES20.glUniformMatrix4fv(uMatrixLoc, 1, false, viewProjectionMatrix, 0)
            // Bind the cube-map texture to texture unit 0
            GLES20.glActiveTexture(GLES20.GL_TEXTURE0)
            GLES20.glBindTexture(GLES20.GL_TEXTURE_CUBE_MAP, skyBoxTexture)
            GLES20.glUniform1i(uTextureLoc, 0)

            // Feed the cube vertices and draw it with the index buffer
            GLES20.glEnableVertexAttribArray(aPositionLoc)
            skyBox.vertexArrayBuffer.position(0)
            GLES20.glVertexAttribPointer(aPositionLoc, SkyBox.POSITION_COMPONENT_COUNT,
                GLES20.GL_FLOAT, false, 0, skyBox.vertexArrayBuffer)
            GLES20.glDrawElements(GLES20.GL_TRIANGLES, 36, GLES20.GL_UNSIGNED_BYTE, skyBox.indexArrayBuffer)
        }

        private var xRotation = 0f
        private var yRotation = 0f

        fun handleTouchMove(deltaX: Float, deltaY: Float) {
            xRotation += deltaX / 16f
            yRotation += deltaY / 16f
            if (yRotation < -90f) {
                yRotation = -90f
            } else if (yRotation > 90f) {
                yRotation = 90f
            }
        }
    }

See the code comments for the detailed flow and logic. One point worth explaining is why we switch the view by rotating rather than by translating: we are not looking at a flat plane but standing at the center of a cube, so rotating the view around an axis (for example the Y axis) creates the effect of the sky moving around us, whereas translating the view would simply move us toward one face of the cube. A comparison of the two effects is shown below.
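To make the difference concrete, here is a minimal sketch contrasting the two approaches (the function and parameter names are illustrative): rotating the view matrix spins the camera in place inside the cube, while translating it only slides the camera toward one face.

    import android.opengl.Matrix

    fun buildViewMatrix(useRotation: Boolean, angleDegrees: Float): FloatArray {
        val viewMatrix = FloatArray(16)
        Matrix.setIdentityM(viewMatrix, 0)
        if (useRotation) {
            // Spin in place around the Y axis: the skybox appears to move around us
            Matrix.rotateM(viewMatrix, 0, angleDegrees, 0f, 1f, 0f)
        } else {
            // Slide along X: we would just approach one wall of the cube
            Matrix.translateM(viewMatrix, 0, angleDegrees / 90f, 0f, 0f)
        }
        return viewMatrix
    }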

By posting the rotation deltas to the Renderer via glSurfaceView.queueEvent, the skybox follows the gesture rotation:

    glSurfaceView.setOnTouchListener(object : OnTouchListener {
        var lastX = 0f
        var lastY = 0f

        override fun onTouch(v: View?, event: MotionEvent?): Boolean {
            if (event == null) {
                return false
            }
            if (MotionEvent.ACTION_DOWN == event.action) {
                lastX = event.x
                lastY = event.y
            } else if (MotionEvent.ACTION_MOVE == event.action) {
                val deltaX = event.x - lastX
                val deltaY = event.y - lastY
                lastX = event.x
                lastY = event.y
                // Hand the deltas to the renderer on the GL thread
                glSurfaceView.queueEvent {
                    skyBoxRender.handleTouchMove(deltaX, deltaY)
                }
            }
            return true
        }
    })
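For completeness, a minimal sketch of how the renderer and surface view might be wired up in an Activity (this setup is assumed, not shown above; the class name is illustrative):

    import android.app.Activity
    import android.opengl.GLSurfaceView
    import android.os.Bundle

    class SkyBoxActivity : Activity() {

        private lateinit var glSurfaceView: GLSurfaceView
        private lateinit var skyBoxRender: SkyBoxRender

        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            skyBoxRender = SkyBoxRender(this)
            glSurfaceView = GLSurfaceView(this).apply {
                setEGLContextClientVersion(2)   // the shaders above are GLSL ES 2.0
                setRenderer(skyBoxRender)
                // Continuous rendering keeps the frameIndex-driven auto rotation animating
                renderMode = GLSurfaceView.RENDERMODE_CONTINUOUSLY
            }
            // The OnTouchListener shown above is attached to glSurfaceView here as well
            setContentView(glSurfaceView)
        }

        override fun onResume() {
            super.onResume()
            glSurfaceView.onResume()
        }

        override fun onPause() {
            super.onPause()
            glSurfaceView.onPause()
        }
    }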

For more details, see the code on GitHub: github.com/ayyb1988/me…

Three, references

  1. Implementation principle and details of SkyBox
  2. NDK OpenGL ES 3.0: Cube mapping (Skybox)
  3. Cube map
  4. Use of OpenGL graphics library (26) – Advanced OpenGL Cubemaps
  5. The OpenGL rendering pipeline explained in detail

Four, takeaways

  1. Understood how the skybox works
  2. Learned how cube mapping is implemented
  3. Implemented the skybox in code

Thank you for reading. To make rendered content more realistic, effects such as reflection and refraction are essential, so next we move on to learning and practicing lighting. Welcome to follow the public account "audio and video development journey" to learn and grow together, and feel free to reach out and exchange ideas.