An overview of the effect

It was very interesting to see the naked-eye 3D implementation in the Freely app released by the Freely big front-end team. Soon afterwards, Android developers in the community offered implementations in Flutter, native Android, Jetpack Compose, and so on.

Soon I saw a funny comment:

Since client-side development is already so competitive, let's go all the way and add an Android OpenGL implementation too. Graphics may be late, but it is never absent.

The result is as follows (image source); with this, OpenGL officially joins the community's naked-eye 3D client showdown:

Introduction to Principle & Advantages of OpenGL

The principle of naked-eye 3D has already been explained very clearly in other articles. In the spirit of not reinventing the wheel, part of the content from the articles by Nayuta and Fu 11 is quoted here. Thanks again to both authors.

The essence of naked-eye 3D is to split the whole image into three layers: foreground, midground, and background. When the phone rotates left/right or up/down, the foreground and background images move in opposite directions while the midground stays still, giving a visual impression of 3D depth:

That is, the effect is made up of the following three images:

Foreground, midground (the white text), and background

Next, how do we sense the rotation state of the phone and move the three layers of images accordingly? The device itself provides a variety of excellent sensors; by continuously receiving sensor callbacks we can obtain the device's rotation state and render the UI accordingly.

In the end, the author chose the OpenGL API on the Android platform for rendering. The immediate reason is that there is no need to duplicate implementations that already exist in the community.

Another important reason is that the GPU is better suited to graphics and image processing: the large number of scale and translation operations in the naked-eye 3D effect can be described by a matrix in the Java layer and then handed to the GPU for processing in the shader program. In theory, therefore, OpenGL's rendering performance is better than that of the other approaches.

The focus of this article is the OpenGL drawing approach, so only part of the core code is shown below. Readers interested in the full implementation can refer to the link at the end of the article.

The specific implementation

1. Drawing the static images

First of all, the three images need to be drawn statically in sequence, which involves a large number of OpenGL APIs. If you are not familiar with them, you can skim this section just to get the general idea.

Let’s start with the vertex and fragment shader code, which defines how image textures are rendered on the GPU:

// Vertex shader code
// Vertex coordinates
attribute vec4 av_Position;
// Texture coordinates
attribute vec2 af_Position;
uniform mat4 u_Matrix;
varying vec2 v_texPo;

void main() {
    v_texPo = af_Position;
    gl_Position =  u_Matrix * av_Position;
}
// Fragment shader code
precision mediump float;
// Texture sampler
uniform sampler2D sTexture;
// Texture coordinates
varying vec2 v_texPo;

void main() {
    gl_FragColor = texture2D(sTexture, v_texPo);
}

After defining the shaders, we initialize the shader program and load the image textures into the GPU in turn when the GLSurfaceView's surface is created:

public class My3DRenderer implements GLSurfaceView.Renderer {
  
  @Override
  public void onSurfaceCreated(GL10 gl, EGLConfig config) {
      // 1. Load the shader program
      mProgram = loadShaderWithResource(
              mContext,
              R.raw.projection_vertex_shader,
              R.raw.projection_fragment_shader
      );
      
      // ... 
      
      // 2. Pass the three cut map textures into the GPU in turn
      this.texImageInner(R.drawable.bg_3d_back, mBackTextureId);
      this.texImageInner(R.drawable.bg_3d_mid, mMidTextureId);
      this.texImageInner(R.drawable.bg_3d_fore, mFrontTextureId);
  }
}
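The body of texImageInner is not quoted in this article. As a reference, here is a minimal sketch of what such a texture-upload helper might look like, assuming standard GLES20/GLUtils calls, that the texture ids were generated earlier with glGenTextures, and that mContext is the Context held by the renderer:

// A minimal sketch, not the author's exact implementation.
private void texImageInner(int drawableResId, int textureId) {
    // Bind the texture object (assumed to have been generated earlier via glGenTextures)
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
    // Basic filtering and wrapping parameters
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
    // Decode the drawable into a bitmap and upload it to the GPU
    Bitmap bitmap = BitmapFactory.decodeResource(mContext.getResources(), drawableResId);
    GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
    bitmap.recycle();
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 0);
}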

The next step is to define the viewport. Since this is a 2D image transformation and the aspect ratio of the sliced images is basically the same as that of the phone screen, we can simply use an identity matrix as the orthographic projection:

public class My3DRenderer implements GLSurfaceView.Renderer {
  
    // Projection matrix
    private float[] mProjectionMatrix = new float[16];

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
        // Set the viewport size
        GLES20.glViewport(0, 0, width, height);
        // Image and screen aspect ratio is basically the same, simplify processing, use an identity matrix
        Matrix.setIdentityM(mProjectionMatrix, 0);
    }
}
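If the aspect ratios of the images and the screen did differ, the identity matrix would stretch the textures; in that case a real orthographic projection could be set up instead. A hedged sketch, not part of the original implementation, using the width and height passed to onSurfaceChanged:

// Sketch only: an orthographic projection that compensates for the viewport aspect ratio.
float aspect = (float) width / height;
if (width > height) {
    Matrix.orthoM(mProjectionMatrix, 0, -aspect, aspect, -1f, 1f, -1f, 1f);
} else {
    Matrix.orthoM(mProjectionMatrix, 0, -1f, 1f, -1f / aspect, 1f / aspect, -1f, 1f);
}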

Last comes rendering. The key point is that the logic for rendering the foreground, midground, and background layers is basically the same, with only two differences: the image itself and the geometric transformation applied to it.

public class My3DRenderer implements GLSurfaceView.Renderer {
  
    private float[] mBackMatrix = new float[16];
    private float[] mMidMatrix = new float[16];
    private float[] mFrontMatrix = new float[16];

    @Override
    public void onDrawFrame(GL10 gl) {
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
        GLES20.glClearColor(0.0f, 0.0f, 0.0f, 1.0f);

        GLES20.glUseProgram(mProgram);
        
        // Draw the background, middle and foreground in sequence
        this.drawLayerInner(mBackTextureId, mTextureBuffer, mBackMatrix);
        this.drawLayerInner(mMidTextureId, mTextureBuffer, mMidMatrix);
        this.drawLayerInner(mFrontTextureId, mTextureBuffer, mFrontMatrix);
    }
    
    private void drawLayerInner(int textureId, FloatBuffer textureBuffer, float[] matrix) {
        // 1. Bind the image texture
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
        // 2. Matrix transformation
        GLES20.glUniformMatrix4fv(uMatrixLocation, 1, false, matrix, 0);
        // ...
        // 3. Perform drawing
        GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
    }
}

See the drawLayerInner code above, which draws a single layer of the image: the textureId parameter selects the image, and the matrix parameter carries its geometric transformation.
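The // ... inside drawLayerInner omits the vertex attribute setup. A plausible sketch of that step is shown below; the attribute locations avPositionLocation and afPositionLocation and the full-screen quad buffer mVertexBuffer are hypothetical names assumed to have been prepared during initialization, while textureBuffer is the parameter already shown above:

// Sketch only: feed the quad's vertex and texture coordinates to the shader attributes.
GLES20.glEnableVertexAttribArray(avPositionLocation);
mVertexBuffer.position(0);
GLES20.glVertexAttribPointer(avPositionLocation, 2, GLES20.GL_FLOAT, false, 0, mVertexBuffer);

GLES20.glEnableVertexAttribArray(afPositionLocation);
textureBuffer.position(0);
GLES20.glVertexAttribPointer(afPositionLocation, 2, GLES20.GL_FLOAT, false, 0, textureBuffer);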

Now that we’ve finished drawing the image statically, it looks like this:

Next we need to hook up the sensors and define the geometric transformation of each layer so that the image moves.

2. Making the images move

First of all, we need to register sensors on the Android platform to monitor the rotation state of the phone and obtain the rotation angles around its X and Y axes.

// 2.1 Register sensor
mSensorManager = (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
mAcceleSensor = mSensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
mMagneticSensor = mSensorManager.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD);
mSensorManager.registerListener(mSensorEventListener, mAcceleSensor, SensorManager.SENSOR_DELAY_GAME);
mSensorManager.registerListener(mSensorEventListener, mMagneticSensor, SensorManager.SENSOR_DELAY_GAME);

// 2.2 Accept rotation state continuously
private final SensorEventListener mSensorEventListener = new SensorEventListener() {
    @Override
    public void onSensorChanged(SensorEvent event) {
        // ... specific code omitted
        float[] values = new float[3];
        float[] R = new float[9];
        SensorManager.getRotationMatrix(R, null, mAcceleValues, mMageneticValues);
        SensorManager.getOrientation(R, values);
        // The deflection Angle of x axis
        float degreeX = (float) Math.toDegrees(values[1]);
        // The deflection Angle of the y axis
        float degreeY = (float) Math.toDegrees(values[2]);
        // The z-axis deflection Angle
        float degreeZ = (float) Math.toDegrees(values[0]);
        
        // Get the rotation angles of the x/y axes and perform the matrix transformation
        updateMatrix(degreeX, degreeY);
    }
};
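The omitted portion above has to cache the latest accelerometer and magnetometer readings before SensorManager.getRotationMatrix can be called. A plausible sketch of that step (the unfiltered counterpart of the low-pass version shown later):

// Sketch only: cache the most recent readings for getRotationMatrix().
if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
    mAcceleValues = event.values.clone();
}
if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
    mMageneticValues = event.values.clone();
}
// Wait until both sensors have reported at least once
if (mAcceleValues == null || mMageneticValues == null) {
    return;
}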

Note that since we only need to control the left/right and up/down movement of the image, we only need to focus on the deflection angles of the X and Y axes of the device itself:

Now that we have the deflection angles for the x and y axes, we can define the displacement of the image.

However, if the image is simply translated, the area it moves away from will have no texture data, leaving a black edge in the rendering result. To avoid this, we scale the image up from its center by default, so that during movement the visible window never goes beyond the image's boundary.

In other words, when the effect is first shown we should only see part of each image: each layer is scaled up while the display window stays fixed, so initially only the middle of the image is visible. (The midground layer does not need to be scaled, because it never moves.)

The treatment here follows this article by Nayuta, which explains it very clearly; readers are strongly encouraged to read it.
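To make the relationship concrete: in normalized device coordinates the viewport spans [-1, 1], so a layer scaled by a factor S spans [-S, S] and can be translated by at most S - 1 before an uncovered (black) edge becomes visible. A sketch with hypothetical scale values; the constants in the actual source may be named and computed differently:

// Hypothetical values, for illustration only.
private static final float SCALE_BACK_GROUND = 1.1f;   // background enlarged by 10%
private static final float SCALE_FORE_GROUND = 1.2f;   // foreground enlarged by 20%

// Maximum translation in NDC before a black edge would appear.
private static final float MAX_TRANS_BACKGROUND = SCALE_BACK_GROUND - 1f;  // 0.1
private static final float MAX_TRANS_FOREGROUND = SCALE_FORE_GROUND - 1f;  // 0.2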

With this in mind, we can see that the naked-eye 3D effect is really just scale and translation transformations applied to the different layers. The following code computes the geometric transformation for each layer:

public class My3DRenderer implements GLSurfaceView.Renderer {
  
    private float[] mBackMatrix = new float[16];
    private float[] mMidMatrix = new float[16];
    private float[] mFrontMatrix = new float[16];

    /**
     * Gyroscope data callback; updates the transformation matrix of each layer.
     *
     * @param degreeX rotation angle around the x axis; the image should move up/down
     * @param degreeY rotation angle around the y axis; the image should move left/right
     */
    private void updateMatrix(@FloatRange(from = -180.0f, to = 180.0f) float degreeX,
                              @FloatRange(from = -180.0f, to = 180.0f) float degreeY) {
        // ... other processing

        // Background transform
        // 1. Maximum displacement
        float maxTransXY = MAX_VISIBLE_SIDE_BACKGROUND - 1f;
        // 2. Translation corresponding to the current rotation
        float transX = ((maxTransXY) / MAX_TRANS_DEGREE_Y) * -degreeY;
        float transY = ((maxTransXY) / MAX_TRANS_DEGREE_X) * -degreeX;
        float[] backMatrix = new float[16];
        Matrix.setIdentityM(backMatrix, 0);
        Matrix.translateM(backMatrix, 0, transX, transY, 0f);                    // 2. Translate
        Matrix.scaleM(backMatrix, 0, SCALE_BACK_GROUND, SCALE_BACK_GROUND, 1f);  // 1. Scale
        Matrix.multiplyMM(mBackMatrix, 0, mProjectionMatrix, 0, backMatrix, 0);  // 3. Orthographic projection

        // Midground transform
        Matrix.setIdentityM(mMidMatrix, 0);

        // Foreground transform
        // 1. Maximum displacement
        maxTransXY = MAX_VISIBLE_SIDE_FOREGROUND - 1f;
        // 2. Translation corresponding to the current rotation
        transX = ((maxTransXY) / MAX_TRANS_DEGREE_Y) * -degreeY;
        transY = ((maxTransXY) / MAX_TRANS_DEGREE_X) * -degreeX;
        float[] frontMatrix = new float[16];
        Matrix.setIdentityM(frontMatrix, 0);
        Matrix.translateM(frontMatrix, 0, -transX, -transY - 0.10f, 0f);            // 2. Translate
        Matrix.scaleM(frontMatrix, 0, SCALE_FORE_GROUND, SCALE_FORE_GROUND, 1f);    // 1. Scale
        Matrix.multiplyMM(mFrontMatrix, 0, mProjectionMatrix, 0, frontMatrix, 0);   // 3. Orthographic projection
    }
}

There are a few details that need to be worked out in this code.

3. A few counterintuitive details

3.1 Rotation direction ≠ displacement direction

First, the rotation direction of the device is opposite to the displacement direction of the images. For example, when the device rotates around its X axis, the foreground image should, from the user's point of view, move up or down; similarly, when the device rotates around its Y axis, the image should move left or right (readers who find this hard to picture can refer to the gyroscope illustration to deepen their understanding):

// The rotation direction of the device is opposite to the displacement direction of the image
float transX = ((maxTransXY) / MAX_TRANS_DEGREE_Y) * -degreeY;
float transY = ((maxTransXY) / MAX_TRANS_DEGREE_X) * -degreeX;
// ...
Matrix.translateM(backMatrix, 0, transX, transY, 0f); 

3.2 Default rotation Angle ≠ 0°

Second, when defining the maximum rotation angles, we should not assume that a rotation angle of 0° is the default posture. What does that mean? For the Y axis, degreeY = 0 means there is no height difference between the left and right edges of the device; this matches how users normally hold a phone and is easy to understand. So we can define a maximum left/right rotation angle, say Y ∈ (-45°, 45°), beyond which the image has moved all the way to the edge.

However, for the X axis, degreeX = 0 means there is no height difference between the top and bottom of the device; you can picture the phone lying flat on a horizontal desktop, which is not how most users hold it. By contrast, in most scenarios the device screen is held roughly parallel to the user's face:

Therefore, the maximum rotation angle ranges of the X and Y axes should be defined separately in the code:

private static final float USER_X_AXIS_STANDARD = -45f;
private static final float MAX_TRANS_DEGREE_X = 25f;   // Maximum rotation angle range of the X axis ∈ (-70°, -20°)

private static final float USER_Y_AXIS_STANDARD = 0f;
private static final float MAX_TRANS_DEGREE_Y = 45f;   // Maximum rotation Angle of Y ∈ (-45°, 45°)
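How the raw angles are re-centered and clamped against these constants is not shown in the quoted snippet. One plausible way to do it, purely as an assumption about what happens inside updateMatrix:

// Sketch only: treat -45° as the neutral X posture, then clamp both angles.
degreeX -= USER_X_AXIS_STANDARD;
degreeY -= USER_Y_AXIS_STANDARD;

if (degreeX > MAX_TRANS_DEGREE_X)  degreeX = MAX_TRANS_DEGREE_X;
if (degreeX < -MAX_TRANS_DEGREE_X) degreeX = -MAX_TRANS_DEGREE_X;
if (degreeY > MAX_TRANS_DEGREE_Y)  degreeY = MAX_TRANS_DEGREE_Y;
if (degreeY < -MAX_TRANS_DEGREE_Y) degreeY = -MAX_TRANS_DEGREE_Y;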

With these counterintuitive details worked out, we’re almost done with naked eye 3D.

3.3 Parkinson’s disease?

We’re almost done, but we still need to deal with 3D jitters:

As shown in the figure, the sensors are very sensitive: even when the device is held steadily, tiny fluctuations along the X, Y, and Z axes make the image jitter, hurting the actual experience and leaving users wondering whether they have Parkinson's.

Traditional OpenGL and Android APIs do not seem to solve this problem, but someone on GitHub has offered another idea.

Those familiar with signal processing know that to smooth a signal, eliminating short-term fluctuations while preserving the long-term trend, a low-pass filter can be used: signals below the cut-off frequency pass through, while signals above it are attenuated.

Based on this idea, someone built this repository, which applies low-pass filtering to Android sensor data to filter out small noise signals and achieve relatively stable output:

private final SensorEventListener mSensorEventListener = new SensorEventListener() {
    @Override
    public void onSensorChanged(SensorEvent event) {
        // Add low pass filter to sensor data
        if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
            mAcceleValues = lowPass(event.values.clone(), mAcceleValues);
        }
        if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
            mMageneticValues = lowPass(event.values.clone(), mMageneticValues);
        }
      
        // ... specific code omitted
        // The deflection Angle of x axis
        float degreeX = (float) Math.toDegrees(values[1]);
        // The deflection Angle of the y axis
        float degreeY = (float) Math.toDegrees(values[2]);
        // The z-axis deflection Angle
        float degreeZ = (float) Math.toDegrees(values[0]);
        
        // Get the rotation angles of the x/y axes and perform the matrix transformation
        updateMatrix(degreeX, degreeY);
    }
};
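The lowPass helper used above is not quoted here; the classic implementation is a simple exponential smoothing filter. A minimal sketch, with ALPHA as a hypothetical smoothing factor:

// Sketch only: exponential smoothing; a smaller ALPHA means stronger smoothing but slower response.
private static final float ALPHA = 0.25f;

private float[] lowPass(float[] input, float[] output) {
    if (output == null) {
        return input;
    }
    for (int i = 0; i < input.length; i++) {
        output[i] = output[i] + ALPHA * (input[i] - output[i]);
    }
    return output;
}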

Finally, we achieved what we wanted:

Reference & source code address

Finally, here are the related materials mentioned in this article. Thanks again to these pioneers for their effort and practice.

1. The implementation of the naked-eye 3D effect in the Freely app @Freely big front-end team

2. Take it! Imitating the app's naked-eye 3D effect with Flutter @Nayuta

3. Compose is coming! Imitating the naked-eye 3D effect @Fu 11

4. GitHub: Low-Pass-Filter-To-Android-Sensors

For all the source code for this article, see here.

About me

If you found this article valuable, a ❤️ is welcome, and you are also welcome to follow my blog or GitHub.

  • My Android learning system
  • About the article error correction
  • About paying for knowledge
  • About the Reflections series