This article was first published on the WeChat official account Byteflow.

FFmpeg development series:

FFmpeg Development (01): FFmpeg compilation and integration

FFmpeg Development (02): FFmpeg + ANativeWindow video decoding and playback

FFmpeg Development (03): FFmpeg + OpenSLES audio decoding and playback

FFmpeg Development (04): FFmpeg + OpenGLES audio visualization playback

FFmpeg Development (05): FFmpeg + OpenGLES video decoding, playback, and video filters

FFmpeg Development (06): three ways to achieve audio-video synchronization in an FFmpeg player

In the previous articles, we implemented a multimedia player using FFmpeg + OpenGLES + OpenSLES. This article builds a 3D panorama player on top of it.

Principle of Panorama Player

Panoramic video is shot by multiple cameras pointing in all directions from a single position at the same time, and the footage is stitched together in post-production.

Playing a panoramic video in an ordinary media player causes severe stretching and distortion.

A panorama player renders the video frame onto a sphere, which is equivalent to observing the inner surface of the sphere from its center; the observer gets a full 360-degree view with no blind spots. This is how most "VR boxes" on the market work.

Constructing the spherical mesh

The essential difference between a panorama player and an ordinary player lies in the rendering stage: an ordinary player renders the video frame onto a rectangular plane, while a panorama player renders it onto a sphere.

To implement a panorama player, we just need to build a sphere with OpenGL and render the FFmpeg-decoded video onto its inner surface.

All 3D objects in OpenGL ES are composed of triangles. To construct a sphere, we compute the 3D coordinates of points on the sphere from the longitude angle, latitude angle, and radius in the spherical coordinate system. Adjacent points form small rectangles, and each rectangle splits into two triangles.

In the spherical coordinate system, a point on the sphere is computed from the longitude angle θ, the latitude angle φ, and the radius R as follows:

x = R · cos(φ) · cos(θ)
y = R · sin(φ)
z = R · cos(φ) · sin(θ)

The code below computes the spherical vertex coordinates from the formula above, where ANGLE_SPAN is the step size, RADIUS is the sphere radius, and RADIAN converts degrees to radians.

// Build the vertex coordinates
for (float vAngle = 90; vAngle > -90; vAngle = vAngle - ANGLE_SPAN) { // step ANGLE_SPAN in the vertical direction
    for (float hAngle = 360; hAngle > 0; hAngle = hAngle - ANGLE_SPAN) { // step ANGLE_SPAN in the horizontal direction
        double xozLength = RADIUS * cos(RADIAN(vAngle));
        float x1 = (float) (xozLength * cos(RADIAN(hAngle)));
        float z1 = (float) (xozLength * sin(RADIAN(hAngle)));
        float y1 = (float) (RADIUS * sin(RADIAN(vAngle)));
        xozLength = RADIUS * cos(RADIAN(vAngle - ANGLE_SPAN));
        float x2 = (float) (xozLength * cos(RADIAN(hAngle)));
        float z2 = (float) (xozLength * sin(RADIAN(hAngle)));
        float y2 = (float) (RADIUS * sin(RADIAN(vAngle - ANGLE_SPAN)));
        xozLength = RADIUS * cos(RADIAN(vAngle - ANGLE_SPAN));
        float x3 = (float) (xozLength * cos(RADIAN(hAngle - ANGLE_SPAN)));
        float z3 = (float) (xozLength * sin(RADIAN(hAngle - ANGLE_SPAN)));
        float y3 = (float) (RADIUS * sin(RADIAN(vAngle - ANGLE_SPAN)));
        xozLength = RADIUS * cos(RADIAN(vAngle));
        float x4 = (float) (xozLength * cos(RADIAN(hAngle - ANGLE_SPAN)));
        float z4 = (float) (xozLength * sin(RADIAN(hAngle - ANGLE_SPAN)));
        float y4 = (float) (RADIUS * sin(RADIAN(vAngle)));

        // The four corners of one small rectangle on the sphere
        vec3 v1(x1, y1, z1);
        vec3 v2(x2, y2, z2);
        vec3 v3(x3, y3, z3);
        vec3 v4(x4, y4, z4);

        // Construct the first triangle
        m_VertexCoords.push_back(v1);
        m_VertexCoords.push_back(v2);
        m_VertexCoords.push_back(v4);
        // Construct the second triangle
        m_VertexCoords.push_back(v4);
        m_VertexCoords.push_back(v2);
        m_VertexCoords.push_back(v3);
    }
}

The texture coordinates corresponding to the spherical vertices are simply the grid points of a fixed row-and-column grid: the rectangle the sphere unwraps to.

// Construct the texture coordinates: the rectangle the sphere unwraps to
int width = 360 / ANGLE_SPAN;  // number of columns
int height = 180 / ANGLE_SPAN; // number of rows
float dw = 1.0f / width;
float dh = 1.0f / height;
for (int i = 0; i < height; i++) {
    for (int j = 0; j < width; j++) {
        // Each small rectangle consists of two triangles with six points
        float s = j * dw;
        float t = i * dh;
        vec2 v1(s, t);
        vec2 v2(s, t + dh);
        vec2 v3(s + dw, t + dh);
        vec2 v4(s + dw, t);

        // Construct the first triangle
        m_TextureCoords.push_back(v1);
        m_TextureCoords.push_back(v2);
        m_TextureCoords.push_back(v4);
        // Construct the second triangle
        m_TextureCoords.push_back(v4);
        m_TextureCoords.push_back(v2);
        m_TextureCoords.push_back(v3);
    }
}

Rendering the spherical mesh with OpenGL lines is a quick way to verify that the constructed sphere is correct.

Render panoramic video

Once the vertex and texture coordinates are computed, what remains is ordinary texture mapping. If you are not familiar with texture mapping, see the earlier article on the topic.

Initialize the VAO with the vertex and texture coordinates.

// Generate VBO Ids and load the VBOs with data
glGenBuffers(2, m_VboIds);
glBindBuffer(GL_ARRAY_BUFFER, m_VboIds[0]);
glBufferData(GL_ARRAY_BUFFER, sizeof(vec3) * m_VertexCoords.size(), &m_VertexCoords[0], GL_STATIC_DRAW);

glBindBuffer(GL_ARRAY_BUFFER, m_VboIds[1]);
glBufferData(GL_ARRAY_BUFFER, sizeof(vec2) * m_TextureCoords.size(), &m_TextureCoords[0], GL_STATIC_DRAW);

// Generate VAO Id
glGenVertexArrays(1, &m_VaoId);
glBindVertexArray(m_VaoId);

glBindBuffer(GL_ARRAY_BUFFER, m_VboIds[0]);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(vec3), (const void *)0);
glBindBuffer(GL_ARRAY_BUFFER, GL_NONE);

glBindBuffer(GL_ARRAY_BUFFER, m_VboIds[1]);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(vec2), (const void *)0);
glBindBuffer(GL_ARRAY_BUFFER, GL_NONE);

glBindVertexArray(GL_NONE);

Draw the video frame.

// Use the program object
glUseProgram(m_ProgramObj);

glBindVertexArray(m_VaoId);

GLUtils::setMat4(m_ProgramObj, "u_MVPMatrix", m_MVPMatrix);

// Bind the texture
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, m_TextureId);
GLUtils::setFloat(m_ProgramObj, "s_TextureMap", 0);

glDrawArrays(GL_TRIANGLES, 0, m_VertexCoords.size());

Let’s first draw an ordinary (non-panoramic) video and see what it looks like.

Finally, draw the panoramic video.

The source code

LearnFFmpeg source code

Technical communication

For technical exchange or to get the source code, you can add my WeChat: byte-flow.