This article is a translation. Original title: OpenGL Android Lesson One: Getting Started. Original link: www.learnopengles.com/android-les…


This is the first tutorial in a series on using OpenGL ES 2 on Android. In this lesson we will follow the code step by step and learn how to create an OpenGL ES 2 context and draw to the screen. We'll also learn what shaders are, how they work, and how matrices are used to transform the scene into the image you see on the screen. Finally, we'll add a note to the manifest file declaring that the app uses OpenGL ES 2, so that the Android Market only shows it to devices that support it.

Introduction

We’ll walk through all of the code below and explain what each part does. You can build your own project by copying the snippets from each section, or you can download the completed project at the end of this article. Create an Android project in a development tool such as Android Studio; the name doesn’t matter. For this lesson I renamed MainActivity to LessonOneActivity.

Let’s look at this code:

/** Keep references to GLSurfaceView */
private GLSurfaceView mGLSurfaceView;

This GLSurfaceView is a special View that manages an OpenGL surface for us and draws it into the Android view system. It also adds a number of features that make it easier to use OpenGL, including the following:

  • It provides a dedicated render thread for OpenGL, so the main thread is not stalled
  • It supports continuous or on-demand rendering
  • It uses EGL (the interface between OpenGL and the underlying window system) to handle screen setup

GLSurfaceView makes setting up and using OpenGL on Android relatively easy.

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    mGLSurfaceView = new GLSurfaceView(this);
    // Check whether the system supports OpenGL ES 2.0
    final ActivityManager activityManager = (ActivityManager) getSystemService(Context.ACTIVITY_SERVICE);
    final ConfigurationInfo configurationInfo = activityManager.getDeviceConfigurationInfo();
    final boolean supportsEs2 = configurationInfo.reqGlEsVersion >= 0x20000;

    if (supportsEs2) {
        // Request an OpenGL ES 2.0 compliant context
        mGLSurfaceView.setEGLContextClientVersion(2);
        // Set up our Demo renderer, defined later
        mGLSurfaceView.setRenderer(new LessonOneRenderer());
    } else {
        // This is where you could create an OpenGL ES 1.x compatible renderer if you wanted to support both ES 1 and ES 2
        return;
    }
    setContentView(mGLSurfaceView);
}

The onCreate() method is where we create our OpenGL context and where everything starts happening. In onCreate(), after calling super.onCreate(), we first create our GLSurfaceView instance. Then we need to find out whether the system supports OpenGL ES 2. To do this, we get an ActivityManager instance, which lets us interact with global system state, and use it to get the device configuration info, which tells us whether the device supports OpenGL ES 2. We could also support OpenGL ES 1.x by passing in a different renderer, although we would need to write different code because the APIs are different. For this lesson we’re only going to focus on OpenGL ES 2.

Once we know that the device supports OpenGL ES 2, we tell the GLSurfaceView that we want an OpenGL ES 2-compatible context and pass in our custom renderer. This renderer will be called whenever the surface changes or a new frame needs to be drawn.

Finally, we call setContentView() with the GLSurfaceView, which tells Android that the Activity’s content should be filled by our OpenGL surface. Getting started with OpenGL is that simple.

@Override
protected void onResume() {
    super.onResume();
    // The Activity must call the onResume() of the GLSurfaceView in its own onResume()
    mGLSurfaceView.onResume();
}

@Override
protected void onPause() {
    super.onPause();
    // The Activity must call the onPause() of the GLSurfaceView in its own onPause()
    mGLSurfaceView.onPause();
}

GLSurfaceView requires us to call its onResume() and onPause() methods whenever the Activity’s onResume() and onPause() are called, after the calls to the superclass methods. We add those calls here, which completes our Activity.

Visualizing the 3D World

In this section, we’ll start looking at how OpenGL ES 2 actually works and how we can draw things on the screen. In the Activity we passed a custom GLSurfaceView.Renderer to the GLSurfaceView; we’ll define it here. The renderer has three important methods that the system will call automatically:

public void onSurfaceCreated(GL10 gl, EGLConfig config)

Called when the surface is first created, or when we lose the surface context and it is later recreated by the system.

public void onSurfaceChanged(GL10 gl, int width, int height)

Called whenever the surface changes, for example when switching from portrait to landscape. It is also called after the surface has been created.

public void onDrawFrame(GL10 gl)

Called whenever a new frame is drawn.
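Putting those three callbacks together, a bare-bones renderer skeleton might look like the sketch below. This is only an outline of my own to show the shape of the class; the real LessonOneRenderer is filled in piece by piece over the rest of this lesson, and the comments just mark what goes where.

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

import android.opengl.GLSurfaceView;

public class LessonOneRenderer implements GLSurfaceView.Renderer {
    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        // One-time setup: clear color, view matrix, shaders, and program.
    }

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
        // React to size changes: set the viewport and rebuild the projection matrix.
    }

    @Override
    public void onDrawFrame(GL10 gl) {
        // Runs for every frame: clear the screen and draw the scene.
    }
}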

You may have noticed that a GL10 instance named gl is passed in. When drawing with OpenGL ES 2 we don’t use it; instead, we use the static methods of the GLES20 class. The GL10 parameter is only there because the same interface is used for OpenGL ES 1.x.

Before our renderer can display anything, we need something to display. In OpenGL ES 2 we pass in content as arrays of numbers. These numbers can represent positions, colors, or whatever else we need. In this demo we’ll display three triangles.

// New class members
private final FloatBuffer mTriangle1Vertices;
private final FloatBuffer mTriangle2Vertices;
private final FloatBuffer mTriangle3Vertices;

/** How many bytes per float */
private final int mBytePerFloat = 4;

/** Initialize the model data */
public LessonOneRenderer() {
    // This triangle is red, blue and green
    final float[] triangle1VerticesData = {
            // X, Y, Z,
            // R, G, B, A
            -0.5f, -0.25f, 0.0f,
            1.0f, 0.0f, 0.0f, 1.0f,

            0.5f, -0.25f, 0.0f,
            0.0f, 0.0f, 1.0f, 1.0f,

            0.0f, 0.559016994f, 0.0f,
            0.0f, 1.0f, 0.0f, 1.0f};
    ...

    // Initialize the buffers
    mTriangle1Vertices = ByteBuffer.allocateDirect(triangle1VerticesData.length * mBytePerFloat)
            .order(ByteOrder.nativeOrder()).asFloatBuffer();
    ...

    mTriangle1Vertices.put(triangle1VerticesData).position(0);
    ...
}

So, what does all this mean? If you’ve ever used OpenGL 1, you’re probably used to doing this:

glBegin(GL_TRIANGLES);
glVertex3f(-0.5f, -0.25f, 0.0f);
glColor3f(1.0f, 0.0f, 0.0f);
...
glEnd();

This approach doesn’t work in OpenGL ES 2. Instead of defining points through a bunch of method calls, we define an array. Let’s look at our array again:

final float[] triangle1VerticesData = {
        // X, Y, Z,
        // R, G, B, A
        -0.5f, -0.25f, 0.0f,
        1.0f, 0.0f, 0.0f, 1.0f,
        ...
};

The snippet above shows one vertex of the triangle. We’ve defined the first three numbers to represent the position (X, Y, Z) and the next four to represent the color (red, green, blue, alpha). You don’t have to worry too much about how this array is defined; just remember that when we want to draw something in OpenGL ES 2, we pass the data in blocks instead of one value at a time.

Understanding buffers

// Initialize the buffers
mTriangle1Vertices = ByteBuffer.allocateDirect(triangle1VerticesData.length * mBytePerFloat)
        .order(ByteOrder.nativeOrder()).asFloatBuffer();
...

On Android we code in Java, but the underlying implementation of OpenGL ES 2 is actually written in C. Before we can pass our data to OpenGL, we need to convert it into a form it can understand. Java and the native system may not store their bytes in the same order, so we use a special set of buffer classes: we create a ByteBuffer large enough to hold our data and tell it to store its data in native byte order. We then convert it into a FloatBuffer so that we can use it to hold floating-point data. Finally, we copy our array into the buffer.

This buffer stuff may look fiddly, but just remember that it is an extra step we need to take before passing our data to OpenGL. Our buffers are now ready to be used to pass data into OpenGL.
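If you find yourself converting several arrays this way (we have three triangles in this lesson), the steps can be collected into a small helper. The sketch below is just the same conversion factored out; it is my own illustration, not part of the original lesson code.

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public final class BufferHelper {
    private static final int BYTES_PER_FLOAT = 4;

    /** Copies a Java float array into a direct FloatBuffer with native byte order. */
    public static FloatBuffer toFloatBuffer(float[] data) {
        FloatBuffer buffer = ByteBuffer
                .allocateDirect(data.length * BYTES_PER_FLOAT) // enough room for every float
                .order(ByteOrder.nativeOrder())                // store bytes in the platform's native order
                .asFloatBuffer();                              // view the bytes as floats
        buffer.put(data).position(0);                          // copy the data and rewind to the start
        return buffer;
    }

    private BufferHelper() {}
}

With a helper like this, the constructor line above would reduce to mTriangle1Vertices = BufferHelper.toFloatBuffer(triangle1VerticesData);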

Note: float buffers are slow on Froyo and only somewhat faster on Gingerbread, so you probably don’t want to change them too often.

Understanding matrices

// New class definitions

/**
 * Stores the view matrix. Think of it as our camera: this matrix transforms world space
 * into eye space, positioning things relative to our eyes.
 */
private float[] mViewMatrix = new float[16];

@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
    // Set the background clear color to grey
    GLES20.glClearColor(0.5f, 0.5f, 0.5f, 0.5f);

    // Position the eye behind the origin
    final float eyeX = 0.0f;
    final float eyeY = 0.0f;
    final float eyeZ = 1.5f;

    // This is where our eyes are looking
    final float lookX = 0.0f;
    final float lookY = 0.0f;
    final float lookZ = -5.0f;

    // Set our up vector; this is where our head would point if we were holding the camera
    final float upX = 0.0f;
    final float upY = 1.0f;
    final float upZ = 0.0f;

    // Concrete analogy: the phone lying flat on the desk while we look down at the screen

    // Set the view matrix, which represents the camera position.
    // Note: OpenGL 1 uses a ModelView matrix, which is the model and view matrices combined.
    // In OpenGL ES 2, we choose to track these matrices separately.
    Matrix.setLookAtM(mViewMatrix, 0, eyeX, eyeY, eyeZ, lookX, lookY, lookZ, upX, upY, upZ);
    ...
}

Another interesting topic is matrices! These will be your best friends whenever you program in 3D. Therefore, you need to know them well.

When our surface is created, the first thing we do is set the clear color to grey. The alpha component is also set to 0.5, but we don’t use alpha blending in this lesson, so that value isn’t actually used. We only need to set the clear color once, since we won’t be changing it afterwards.

The second thing we do is set up our view matrix. We use a few different kinds of matrices, and they all do something important:

  1. The model matrix, which is used to place a model somewhere in the world. For example, if you have a model of a car and want it located 1,000 meters to the east, you use the model matrix to do this.
  2. The view matrix, which represents the camera. If we want to view our car located 1,000 meters to the east, we also have to move 1,000 meters to the east (another way to think about it: we stay put and the rest of the world moves 1,000 meters to the west). We use the view matrix to do this.
  3. The projection matrix. Since our screen is flat, we need a final transformation to “project” our view onto the screen and get that nice 3D perspective. That’s what the projection matrix is for.

A good explanation can be found in SongHo’s OpenGL tutorial. I suggest you read it a few times until you get the idea right; Don’t worry, I’ve read it several times too!

In OpenGL 1, the model and view matrices are combined, and the camera is assumed to be at (0, 0, 0) and facing down the negative Z axis.

Instead of building these matrices by hand, Android has a Matrix helper class that does the heavy lifting for us. Here, I create a view matrix for a camera positioned behind the origin and looking off into the distance.
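To make the relationship between these matrices concrete, here is a small illustrative sketch of my own (not part of the original lesson) that walks a single point through model, view, and projection on the CPU using the same android.opengl.Matrix helpers. It assumes the mViewMatrix set up above and the mProjectionMatrix we will create later in onSurfaceChanged(); in the lesson itself, the per-vertex multiplication happens in the vertex shader instead.

// Illustrative sketch only: carrying one point through model -> view -> projection.
float[] modelMatrix = new float[16];
float[] mvMatrix = new float[16];
float[] mvpMatrix = new float[16];

// Model matrix: place the model 1,000 units to the east (along +X).
Matrix.setIdentityM(modelMatrix, 0);
Matrix.translateM(modelMatrix, 0, 1000.0f, 0.0f, 0.0f);

// Combine: first view * model, then projection * (view * model).
Matrix.multiplyMM(mvMatrix, 0, mViewMatrix, 0, modelMatrix, 0);
Matrix.multiplyMM(mvpMatrix, 0, mProjectionMatrix, 0, mvMatrix, 0);

// Transform a point at the model's origin into clip space.
float[] point = {0.0f, 0.0f, 0.0f, 1.0f};
float[] clipSpacePoint = new float[4];
Matrix.multiplyMV(clipSpacePoint, 0, mvpMatrix, 0, point, 0);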

Define vertex and fragment shaders

final String vertexShader =
        "uniform mat4 u_MVPMatrix; \n" + // a constant representing the composite Model, View, and Projection matrices
        "attribute vec4 a_Position; \n" + // We will pass in the position information for each vertex
        "attribute vec4 a_Color; \n" + // We will pass in the color information for each vertex

        "varying vec4 v_Color; \n" + // It will be passed to the fragment shader

        "void main() \n" + // Vertex shader entry
        "{ \n" +
        " v_Color = a_Color; \n" + // Pass the color to the fragment shader
                                            // It will interpolate within the triangle
        " gl_Position = u_MVPMatrix \n" + // gl_Position is a special variable to store the final position
        " * a_Position \n" + // Multiply the vertices by the matrix to get the final point of the normalized screen coordinates
        "} \n";

In OpenGL ES 2, anything we want to display on the screen first has to go through a vertex shader and a fragment shader, and these shaders are not as complex as they seem. The vertex shader performs operations on each vertex, and the results of those operations are used in the fragment shader, which does additional calculations for every pixel.

Each shader basically consists of inputs, outputs, and a program. First we define a uniform, which is the combined matrix containing all of our transformations; it is a constant across all vertices and is used to project them onto the screen. Then we define two attributes for position and color; these will be read from the buffers we defined earlier and specify the position and color of each vertex. We then define a varying, which interpolates values across the triangle and passes them on to the fragment shader. When it reaches the fragment shader, it will hold an interpolated value for each pixel.

Let’s say we define a triangle with one red, one green, and one blue vertex, and we size it so that it takes up 10 pixels on the screen. When the fragment shader runs, it will receive a different varying color for each pixel: at one vertex the varying will be red, but halfway between the red and blue vertices it might be a more purplish color.

In addition to setting the color, we also tell OpenGL where the vertex’s final position on the screen should be. Then we define the fragment shader:

final String fragmentShader =
        "precision mediump float; \n" + // We set the default precision to medium, we don't need the high precision in the fragment shader
        "varying vec4 v_Color; \n" + // This is the vertex shader color interpolated from each segment of the triangle
        "void main() \n" + // Fragment shader entry
        "{ \n" +
        " gl_FragColor = v_Color; \n" + // Pass the color directly
        "} \n";

This is the fragment shader, which actually puts things on the screen. In this shader, we take the varying color passed in from the vertex shader and pass it straight on to OpenGL. The value has already been interpolated per pixel, since the fragment shader runs once for each pixel that will be drawn.

More information: OpenGL ES 2 API Quick Reference card

Load the shader into OpenGL

// Load in the vertex shader
int vertexShaderHandle = GLES20.glCreateShader(GLES20.GL_VERTEX_SHADER);

if (vertexShaderHandle != 0) {
    // Pass in the vertex shader source code
    GLES20.glShaderSource(vertexShaderHandle, vertexShader);

    // Compile the vertex shader
    GLES20.glCompileShader(vertexShaderHandle);

    // Get the compilation status
    final int[] compileStatus = new int[1];
    GLES20.glGetShaderiv(vertexShaderHandle, GLES20.GL_COMPILE_STATUS, compileStatus, 0);

    // If compilation failed, delete the shader
    if (compileStatus[0] == 0) {
        GLES20.glDeleteShader(vertexShaderHandle);
        vertexShaderHandle = 0;
    }
}

if (vertexShaderHandle == 0) {
    throw new RuntimeException("Error creating vertex shader.");
}

First, we create the shader object. If that succeeds, we get a reference to the object, which we then use to pass in the shader source and compile it. We can get the compile status from OpenGL to see whether compilation succeeded or failed; if it failed, we can use GLES20.glGetShaderInfoLog(shader) to find out why. We follow the same steps to load the fragment shader.
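Since the vertex and fragment shaders go through exactly the same load/compile/check steps, it is natural to factor the pattern into a helper. The sketch below is my own illustration of that pattern, not code from the lesson; it also logs the compiler output via GLES20.glGetShaderInfoLog() when compilation fails (which assumes android.util.Log is imported).

/** Compiles a shader of the given type and returns its handle (illustrative sketch). */
private int compileShader(final int shaderType, final String shaderSource) {
    int shaderHandle = GLES20.glCreateShader(shaderType);

    if (shaderHandle != 0) {
        // Pass in the shader source and compile it
        GLES20.glShaderSource(shaderHandle, shaderSource);
        GLES20.glCompileShader(shaderHandle);

        // Check the compile status
        final int[] compileStatus = new int[1];
        GLES20.glGetShaderiv(shaderHandle, GLES20.GL_COMPILE_STATUS, compileStatus, 0);

        if (compileStatus[0] == 0) {
            // Log why the compile failed, then delete the shader
            Log.e("LessonOneRenderer", "Shader compile error: "
                    + GLES20.glGetShaderInfoLog(shaderHandle));
            GLES20.glDeleteShader(shaderHandle);
            shaderHandle = 0;
        }
    }

    if (shaderHandle == 0) {
        throw new RuntimeException("Error creating shader.");
    }

    return shaderHandle;
}

The two shaders would then be loaded with compileShader(GLES20.GL_VERTEX_SHADER, vertexShader) and compileShader(GLES20.GL_FRAGMENT_SHADER, fragmentShader).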

Link vertex and fragment shaders into a program

// Create a program object and store a reference to it
int programHandle = GLES20.glCreateProgram();

if (programHandle != 0) {
    // Bind the vertex shader to the program object
    GLES20.glAttachShader(programHandle, vertexShaderHandle);

    // Bind the fragment shader to the program object
    GLES20.glAttachShader(programHandle, fragmentShaderHandle);

    // Bind attributes
    GLES20.glBindAttribLocation(programHandle, 0, "a_Position");
    GLES20.glBindAttribLocation(programHandle, 1, "a_Color");

    // Link the two shaders together into a program
    GLES20.glLinkProgram(programHandle);

    // Get the link status
    final int[] linkStatus = new int[1];
    GLES20.glGetProgramiv(programHandle, GLES20.GL_LINK_STATUS, linkStatus, 0);

    // If linking failed, delete the program
    if (linkStatus[0] == 0) {
        GLES20.glDeleteProgram(programHandle);
        programHandle = 0;
    }
}

if (programHandle == 0) {
    throw new RuntimeException("Error creating program.");
}

Before we can use our vertex and fragment shaders, we need to bind them together into a program, which links the outputs of the vertex shader to the inputs of the fragment shader. It’s also this program that we use when we pass in our input and draw things with our shaders.

We create a program object, and if that succeeds we attach our shaders to it. We want to pass in position and color as attributes, so we bind those attributes, and then we link the shaders together.
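The create/attach/bind/link/check sequence can likewise be wrapped in a helper. Again, the sketch below is my own variant rather than the lesson's code, and it additionally prints the linker output via GLES20.glGetProgramInfoLog() on failure (assuming android.util.Log is imported).

/** Creates a program, attaches the shaders, binds the given attributes and links (illustrative sketch). */
private int createAndLinkProgram(final int vertexShaderHandle, final int fragmentShaderHandle,
                                 final String[] attributes) {
    int programHandle = GLES20.glCreateProgram();

    if (programHandle != 0) {
        GLES20.glAttachShader(programHandle, vertexShaderHandle);
        GLES20.glAttachShader(programHandle, fragmentShaderHandle);

        // Bind each attribute to an index before linking
        for (int i = 0; i < attributes.length; i++) {
            GLES20.glBindAttribLocation(programHandle, i, attributes[i]);
        }

        GLES20.glLinkProgram(programHandle);

        // Check the link status
        final int[] linkStatus = new int[1];
        GLES20.glGetProgramiv(programHandle, GLES20.GL_LINK_STATUS, linkStatus, 0);

        if (linkStatus[0] == 0) {
            Log.e("LessonOneRenderer", "Program link error: "
                    + GLES20.glGetProgramInfoLog(programHandle));
            GLES20.glDeleteProgram(programHandle);
            programHandle = 0;
        }
    }

    if (programHandle == 0) {
        throw new RuntimeException("Error creating program.");
    }

    return programHandle;
}

With a helper like this, the code above would reduce to a single call: createAndLinkProgram(vertexShaderHandle, fragmentShaderHandle, new String[] {"a_Position", "a_Color"}).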

// New class members

/** This will be used to pass in the transformation matrix */
private int mMVPMatrixHandle;

/** This will be used to pass in model position information */
private int mPositionHandle;

/** This will be used to pass in model color information */
private int mColorHandle;

@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
    ...
    // Set the program handles; these will be used later when passing values to the program
    mMVPMatrixHandle = GLES20.glGetUniformLocation(programHandle, "u_MVPMatrix");
    mPositionHandle = GLES20.glGetAttribLocation(programHandle, "a_Position");
    mColorHandle = GLES20.glGetAttribLocation(programHandle, "a_Color");

    // Tell OpenGL to use this program when rendering
    GLES20.glUseProgram(programHandle);
}

After we’ve successfully linked our program, we still have a couple of tasks left before we can actually use it. The first is to get the handles, which we use to pass data into the program. Then we tell OpenGL to use this program when drawing. Since we only use one program in this lesson, we can put this in the onSurfaceCreated() method instead of onDrawFrame().

Set perspective projection

// New class member
/** Stores the projection matrix, which is used to project the scene onto a 2D viewport */
private float[] mProjectionMatrix = new float[16];

@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {
    // Set the OpenGL viewport to the same size as the surface
    GLES20.glViewport(0, 0, width, height);

    // Create a new perspective projection matrix. The height stays the same
    // while the width varies according to the aspect ratio.
    final float ratio = (float) width / height;
    final float left = -ratio;
    final float right = ratio;
    final float bottom = -1.0f;
    final float top = 1.0f;
    final float near = 1.0f;
    final float far = 10.0f;

    Matrix.frustumM(mProjectionMatrix, 0, left, right, bottom, top, near, far);
}

The onSurfaceChanged() method is called at least once, and then again whenever the surface changes. Since we need to reset our projection matrix whenever the screen we’re projecting onto changes, onSurfaceChanged() is the ideal place to do it.

Draw something to the screen!

// New class member
/** Stores the model matrix, which is used to move models from object space
    (where each model can be thought of as sitting at the center of the universe) into world space */
private float[] mModelMatrix = new float[16];

@Override
public void onDrawFrame(GL10 gl) {
    GLES20.glClear(GLES20.GL_DEPTH_BUFFER_BIT | GLES20.GL_COLOR_BUFFER_BIT);

    // Do a complete rotation every 10 seconds
    long time = SystemClock.uptimeMillis() % 10000L;
    float angleDegrees = (360.0f / 10000.0f) * ((int) time);

    // Draw the triangle
    Matrix.setIdentityM(mModelMatrix, 0);
    Matrix.rotateM(mModelMatrix, 0, angleDegrees, 0.0f, 0.0f, 1.0f);
    drawTriangle(mTriangle1Vertices);
    ...
}

This is where things actually get displayed on the screen. We clear the screen so we don’t get any weird hall-of-mirrors effects, and we want our triangles to animate smoothly across the screen, so we base the animation on time rather than on frame rate, which is usually better.

The actual drawing is done in the drawTriangle() method:

// New class members
/** Allocate storage for the final combined matrix. This will be passed into the shader program. */
private float[] mMVPMatrix = new float[16];

/** How many bytes make up one vertex; this is the stride we step by each time
    (each vertex has 7 elements: 3 for position and 4 for color, 7 * 4 = 28 bytes) */
private final int mStrideBytes = 7 * mBytePerFloat;

/** Offset of the position data */
private final int mPositionOffset = 0;

/** Size of the position data in elements */
private final int mPositionDataSize = 3;

/** Offset of the color data */
private final int mColorOffset = 3;

/** Size of the color data in elements */
private final int mColorDataSize = 4;

/**
 * Draws a triangle from the given vertex data.
 *
 * @param aTriangleBuffer The buffer containing the vertex data.
 */
private void drawTriangle(FloatBuffer aTriangleBuffer) {
    // Pass in the position information
    aTriangleBuffer.position(mPositionOffset);
    GLES20.glVertexAttribPointer(mPositionHandle, mPositionDataSize, GLES20.GL_FLOAT, false,
            mStrideBytes, aTriangleBuffer);

    GLES20.glEnableVertexAttribArray(mPositionHandle);

    // Pass in the color information
    aTriangleBuffer.position(mColorOffset);
    GLES20.glVertexAttribPointer(mColorHandle, mColorDataSize, GLES20.GL_FLOAT, false,
            mStrideBytes, aTriangleBuffer);

    GLES20.glEnableVertexAttribArray(mColorHandle);

    // Multiply the view matrix by the model matrix and store the result in the MVP matrix
    // (which now contains model * view)
    Matrix.multiplyMM(mMVPMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);

    // Multiply the result by the projection matrix and store it in the MVP matrix
    // (which now contains model * view * projection)
    Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);

    GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mMVPMatrix, 0);
    GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 3);
}

Remember those buffers we defined back when we created the renderer? We finally get to use them. We need to tell OpenGL how to use this data with GLES20.glVertexAttribPointer().

Let’s look at the first call:

aTriangleBuffer.position(mPositionOffset);
GLES20.glVertexAttribPointer(
        mPositionHandle, mPositionDataSize, GLES20.GL_FLOAT, false,
        mStrideBytes, aTriangleBuffer);
GLES20.glEnableVertexAttribArray(mPositionHandle);

We set our buffer position to the position offset, which is at the beginning of the buffer. We then tell OpenGL to take this data and feed it to the vertex shader, applying it to our position attribute (a_Position). We also need to tell OpenGL how many elements there are between each vertex, i.e. the stride.

Note: the stride needs to be defined in bytes. Although we have seven elements between each vertex (three for position and four for color), that is actually 28 bytes, because each float takes up four bytes. Forgetting this may not produce any error, but you’ll wonder why nothing shows up on your screen.

Finally, we enable the vertex attribute and move on to the next one. A bit further down, we build the combined matrix that projects the points onto the screen. We could also compute this in the vertex shader, but since it only needs to be done once per model we might as well cache the result. We pass the final matrix into the vertex shader with GLES20.glUniformMatrix4fv(), and GLES20.glDrawArrays() then turns our points into a triangle and draws it on the screen.

Conclusion

Whew! This was a big lesson, and congratulations if you made it all the way through. We learned how to create an OpenGL context, pass in shape data, load vertex and fragment shaders, set up our transformation matrices, and finally put it all together. If everything went well, you should see something like the screenshot below.

There was a lot to digest in this lesson, and you may need to go over the steps a few times before it all sinks in. OpenGL ES 2 requires more setup to get going, but once you’ve gone through the process a few times, you’ll remember it.

Launch on the Android market

When publishing an app, we don’t want it to show up in the Market for people whose devices can’t run it; otherwise we’re likely to get bad reviews and ratings when the app crashes on their devices. To prevent an OpenGL ES 2 app from appearing on devices that don’t support it, you can add this to the manifest file:

<uses-feature
    android:glEsVersion="0x00020000"
    android:required="true" />

This tells the market that your app needs to have OpenGL ES 2 support, and devices that don’t will hide your app.

Further exploration

Try changing the animation speed, the vertices, or the colors, and see what happens! The code for this lesson is available for download on GitHub, and the finished app can be downloaded from Google Play.

Tutorial directory

  • OpenGL Android Course 1: Getting Started
  • OpenGL Android Course 2: Ambient and Diffuse Light
  • OpenGL Android Course 3: Per-fragment Lighting
  • OpenGL Android Course 4: Introduction to texture basics
  • OpenGL Android Course 5: An Introduction to Blending
  • OpenGL Android Course 6: Introduction to texture filtering