Hi, my name is Kenney, and I'm a programmer. In my previous article, "OpenGL 3D Rendering Technology: glTF Basics", I introduced the glTF model format. In this article, we will render a glTF model.

The glTF format is quite complex. The previous article only covered its more commonly used fields, and rendering every effect a glTF model can describe is a big job, so this article walks through some basic rendering.

The sample glTF models can be downloaded here: github.com/KhronosGrou…

We'll take a simple model, BoxTextured, which has a simple effect: a textured cube that looks like this:

Let’s take a look at its configuration:

{
    "asset": { "generator": "COLLADA2GLTF", "version": "2.0" },
    "scene": 0,
    "scenes": [{ "nodes": [0] }],
    "nodes": [
        {
            "children": [1],
            "matrix": [
                1.0, 0.0, 0.0, 0.0,
                0.0, 0.0, -1.0, 0.0,
                0.0, 1.0, 0.0, 0.0,
                0.0, 0.0, 0.0, 1.0
            ]
        },
        { "mesh": 0 }
    ],
    "meshes": [
        {
            "primitives": [
                {
                    "attributes": { "NORMAL": 1, "POSITION": 2, "TEXCOORD_0": 3 },
                    "indices": 0,
                    "mode": 4,
                    "material": 0
                }
            ],
            "name": "Mesh"
        }
    ],
    "accessors": [
        {
            "bufferView": 0, "byteOffset": 0, "componentType": 5123, "count": 36,
            "max": [23], "min": [0], "type": "SCALAR"
        },
        {
            "bufferView": 1, "byteOffset": 0, "componentType": 5126, "count": 24,
            "max": [1.0, 1.0, 1.0], "min": [-1.0, -1.0, -1.0], "type": "VEC3"
        },
        {
            "bufferView": 1, "byteOffset": 288, "componentType": 5126, "count": 24,
            "max": [0.5, 0.5, 0.5], "min": [-0.5, -0.5, -0.5], "type": "VEC3"
        },
        {
            "bufferView": 2, "byteOffset": 0, "componentType": 5126, "count": 24,
            "max": [6.0, 1.0], "min": [0.0, 0.0], "type": "VEC2"
        }
    ],
    "materials": [
        {
            "pbrMetallicRoughness": {
                "baseColorTexture": { "index": 0 },
                "metallicFactor": 0.0
            },
            "name": "Texture"
        }
    ],
    "textures": [{ "sampler": 0, "source": 0 }],
    "images": [{ "uri": "CesiumLogoFlat.png" }],
    "samplers": [{ "magFilter": 9729, "minFilter": 9986, "wrapS": 10497, "wrapT": 10497 }],
    "bufferViews": [
        { "buffer": 0, "byteOffset": 768, "byteLength": 72, "target": 34963 },
        { "buffer": 0, "byteOffset": 0, "byteLength": 576, "byteStride": 12, "target": 34962 },
        { "buffer": 0, "byteOffset": 576, "byteLength": 192, "byteStride": 8, "target": 34962 }
    ],
    "buffers": [{ "byteLength": 840, "uri": "BoxTextured0.bin" }]
}

Let's create a small 3D engine to render it. The class names and responsibilities mirror the concepts in glTF: Engine, Scene, Node, Mesh, Primitive, and Material.

glTF model parsing

There are several open-source libraries for parsing glTF; here we use tinygltf, which is small and easy to integrate. Let's take a look at the parsing code:

void Engine::loadGLTF(const std::string &path) {
  tinygltf::TinyGLTF loader;
  std::string err;
  std::string warn;
  // LoadASCIIFromFile returns false on failure; err and warn carry the details.
  if (!loader.LoadASCIIFromFile(&model_, &err, &warn, path)) {
    // handle or log err / warn here
  }
}

The members of tinygltf::Model have the same names and hierarchy as the glTF fields, which makes it very friendly to use. It also reads the .bin data and the image data for you, so you get not just the parsed fields but the binary payloads as well.
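
As a quick illustration (this snippet is mine, not part of the engine, and assumes the tinygltf header is already included), a loaded model can be walked with the same names you see in the JSON:

#include <cstdio>

// Walk a loaded tinygltf::Model; field names mirror the glTF JSON.
void inspectModel(const tinygltf::Model &model) {
  // "meshes[0].primitives[0].attributes" in the JSON maps to the same path in C++.
  for (const auto &attribute : model.meshes[0].primitives[0].attributes) {
    printf("%s -> accessor %d\n", attribute.first.c_str(), attribute.second);
  }
  // The .bin payload has already been read into memory.
  printf("buffer 0: %zu bytes\n", model.buffers[0].data.size());
  // So has the image referenced by images[0].uri.
  printf("image 0: %d x %d\n", model.images[0].width, model.images[0].height);
}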

Loading the data

Once we have the Model, we create the corresponding GL resources from the data it contains.

Let’s start with buffer:

"buffers": [{"byteLength": 840."uri": "BoxTextured0.bin"}]Copy the code

This cube model has only one buffer. Let's load it into a GL buffer object, also known as a VBO:

std::shared_ptr<std::vector<GLuint>>
Engine::buildBuffers(const tinygltf::Model &model) {
  auto buffers = std::make_shared<std::vector<GLuint>>(model.buffers.size(), 0);
  GL_CHECK(glGenBuffers(buffers->size(), buffers->data()));
  for (auto i = 0; i < model.buffers.size(); ++i) {
    GL_CHECK(glBindBuffer(GL_ARRAY_BUFFER, buffers->at(i)));
    GL_CHECK(glBufferData(GL_ARRAY_BUFFER, model.buffers[i].data.size(),
                          model.buffers[i].data.data(), GL_STATIC_DRAW));
  }
  GL_CHECK(glBindBuffer(GL_ARRAY_BUFFER, 0));
  return buffers;
}

These VBOs contain all the binary data of the glTF model: vertices, texture coordinates, normals, indices, and so on.

Next we load the texture:

std::shared_ptr<std::vector<GLuint>>
Engine::buildTextures(const tinygltf::Model &model) {
  auto textures = std::make_shared<std::vector<GLuint>>(model.textures.size());
  GL_CHECK(glGenTextures(textures->size(), textures->data()));
  for (auto i = 0; i < textures->size(); ++i) {
    GL_CHECK(glBindTexture(GL_TEXTURE_2D, textures->at(i)));
    const auto &texture = model.textures[i];
    const auto &image = model.images[texture.source];
    auto minFilter =
        texture.sampler >= 0 && model.samplers[texture.sampler].minFilter != -1
            ? model.samplers[texture.sampler].minFilter
            : GL_LINEAR;
    auto magFilter =
        texture.sampler >= 0 && model.samplers[texture.sampler].magFilter != -1
            ? model.samplers[texture.sampler].magFilter
            : GL_LINEAR;
    auto wrapS = texture.sampler >= 0 ? model.samplers[texture.sampler].wrapS
                                      : GL_REPEAT;
    auto wrapT = texture.sampler >= 0 ? model.samplers[texture.sampler].wrapT
                                      : GL_REPEAT;
    GL_CHECK(glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, image.width, image.height,
                          0, GL_RGBA, image.pixel_type, image.image.data()));
    GL_CHECK(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, minFilter));
    GL_CHECK(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, magFilter));
    GL_CHECK(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, wrapS));
    GL_CHECK(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, wrapT));
    if (minFilter == GL_NEAREST_MIPMAP_NEAREST ||
        minFilter == GL_NEAREST_MIPMAP_LINEAR ||
        minFilter == GL_LINEAR_MIPMAP_NEAREST ||
        minFilter == GL_LINEAR_MIPMAP_LINEAR) {
      GL_CHECK(glGenerateMipmap(GL_TEXTURE_2D));
    }
  }
  GL_CHECK(glBindTexture(GL_TEXTURE_2D, 0));
  return textures;
}

Note that some fields in glTF are optional, and tinygltf does not fill in default values when they are absent. For example, minFilter:

Field      Type     Description            Required
magFilter  integer  Magnification filter.  No
minFilter  integer  Minification filter.   No
wrapS      integer  s wrapping mode.       No, default: 10497
wrapT      integer  t wrapping mode.       No, default: 10497

See also: github.com/KhronosGrou…

Tinygltf library notes:

// glTF 2.0 spec does not define default value for `minFilter` and
// `magFilter`. Set -1 in TinyGLTF(issue #186)
int minFilter =
    -1;  // optional. -1 = no filter defined. ["NEAREST", "LINEAR",
         // "NEAREST_MIPMAP_LINEAR", "LINEAR_MIPMAP_NEAREST",
         // "NEAREST_MIPMAP_LINEAR", "LINEAR_MIPMAP_LINEAR"]

So you need to handle the missing case and supply your own defaults, otherwise the texture may not render.

Our numeric data and texture data are now loaded.

Create a scene

Let's create a Scene. A Scene contains Nodes, and a Node can have a Mesh. A Mesh can consist of several parts, and each part is a Primitive.
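
The article focuses on how these objects get built, so here is only a minimal sketch of the class hierarchy, under the assumption that each class keeps just the members used later (the real classes also have draw(), traverse(), bind() and similar methods that appear further down):

#include <memory>
#include <vector>
#include <glm/glm.hpp>

class Primitive;  // one draw call: a VAO plus mode/count/material, built below

class Mesh {
 public:
  explicit Mesh(std::shared_ptr<std::vector<std::shared_ptr<Primitive>>> primitives)
      : primitives_(std::move(primitives)) {}
 private:
  std::shared_ptr<std::vector<std::shared_ptr<Primitive>>> primitives_;
};

class Node {
 public:
  explicit Node(std::shared_ptr<Node> parent = nullptr) : parent_(parent) {}
  void setMatrix(const glm::mat4 &matrix) { matrix_ = matrix; }
  void setMesh(std::shared_ptr<Mesh> mesh) { mesh_ = std::move(mesh); }
  void addChild(std::shared_ptr<Node> child) { children_.push_back(std::move(child)); }
 private:
  std::weak_ptr<Node> parent_;                  // weak to avoid ownership cycles
  glm::mat4 matrix_{1.0f};                      // local transform of this node
  std::shared_ptr<Mesh> mesh_;                  // optional; empty nodes have none
  std::vector<std::shared_ptr<Node>> children_;
};

class Scene {
 public:
  void addNode(std::shared_ptr<Node> node) { nodes_.push_back(std::move(node)); }
 private:
  std::vector<std::shared_ptr<Node>> nodes_;    // the scene's root nodes
};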

Create all the scenes:

void Engine::buildScenes() {
  auto buffers = buildBuffers(model_);
  auto textures = buildTextures(model_);
  scenes_.resize(model_.scenes.size());
  for (auto i = 0; i < model_.scenes.size(); ++i) {
    scenes_[i] = buildScene(model_, i, buffers, textures);
  }
}

std::shared_ptr<Scene>
Engine::buildScene(const tinygltf::Model &model, unsigned int sceneIndex,
                   const std::shared_ptr<std::vector<GLuint>> &buffers,
                   const std::shared_ptr<std::vector<GLuint>> &textures) {
  auto scene = std::make_shared<triangle::Scene>();
  for (auto i = 0; i < model.scenes[sceneIndex].nodes.size(); ++i) {
    scene->addNode(
        buildNode(model, model.scenes[sceneIndex].nodes[i], buffers, textures));
  }
  return scene;
}

Node creation:

std::shared_ptr<Node>
Engine::buildNode(const tinygltf::Model &model, unsigned int nodeIndex,
                  const std::shared_ptr<std::vector<GLuint>> &buffers,
                  const std::shared_ptr<std::vector<GLuint>> &textures,
                  std::shared_ptr<Node> parent) {
  auto node = std::make_shared<Node>(parent);
  auto nodeMatrix = model.nodes[nodeIndex].matrix;
  glm::mat4 matrix(1.0f);
  if (nodeMatrix.size() == 16) {
    matrix[0].x = nodeMatrix[0], matrix[0].y = nodeMatrix[1],
    matrix[0].z = nodeMatrix[2], matrix[0].w = nodeMatrix[3];
    matrix[1].x = nodeMatrix[4], matrix[1].y = nodeMatrix[5],
    matrix[1].z = nodeMatrix[6], matrix[1].w = nodeMatrix[7];
    matrix[2].x = nodeMatrix[8], matrix[2].y = nodeMatrix[9],
    matrix[2].z = nodeMatrix[10], matrix[2].w = nodeMatrix[11];
    matrix[3].x = nodeMatrix[12], matrix[3].y = nodeMatrix[13],
    matrix[3].z = nodeMatrix[14], matrix[3].w = nodeMatrix[15];
  } else {
    // glm::translate / glm::scale return a new matrix, so assign the result back.
    if (model.nodes[nodeIndex].translation.size() == 3) {
      matrix = glm::translate(
          matrix, glm::vec3(model.nodes[nodeIndex].translation[0],
                            model.nodes[nodeIndex].translation[1],
                            model.nodes[nodeIndex].translation[2]));
    }
    if (model.nodes[nodeIndex].rotation.size() == 4) {
      matrix *= glm::mat4_cast(glm::quat(model.nodes[nodeIndex].rotation[3],
                                         model.nodes[nodeIndex].rotation[0],
                                         model.nodes[nodeIndex].rotation[1],
                                         model.nodes[nodeIndex].rotation[2]));
    }
    if (model.nodes[nodeIndex].scale.size() == 3) {
      matrix = glm::scale(matrix, glm::vec3(model.nodes[nodeIndex].scale[0],
                                            model.nodes[nodeIndex].scale[1],
                                            model.nodes[nodeIndex].scale[2]));
    }
  }
  node->setMatrix(matrix);
  if (model.nodes[nodeIndex].mesh >= 0) {
    node->setMesh(
        buildMesh(model, model.nodes[nodeIndex].mesh, buffers, textures));
  }
  for (auto &childNodeIndex : model.nodes[nodeIndex].children) {
    node->addChild(buildNode(model, childNodeIndex, buffers, textures, node));
  }
  return node;
}

Note that a node's transformation can be given either as a matrix or as translation, rotation, and scale:

Any node can define a local space transformation either by supplying a matrix property, or any of translation, rotation, and scale properties (also known as TRS properties). translation and scale are FLOAT_VEC3 values in the local coordinate system. rotation is a FLOAT_VEC4 unit quaternion value, (x, y, z, w), in the local coordinate system.

See also: github.com/KhronosGrou…

Also note that when the transform is given as translation, rotation, and scale, the composition order is T * R * S. In the shader, the matrix multiplies the vertex from the left, which means the vertex is scaled first, then rotated, then translated.
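
As a small sketch of this (using glm, with hypothetical variable names), the composed matrix and its effect on a vertex look like this:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/quaternion.hpp>

// M = T * R * S, so a vertex v is transformed as M * v = T * (R * (S * v)):
// it is scaled first, then rotated, then translated.
glm::mat4 composeTRS(const glm::vec3 &translation, const glm::quat &rotation,
                     const glm::vec3 &scale) {
  const glm::mat4 T = glm::translate(glm::mat4(1.0f), translation);
  const glm::mat4 R = glm::mat4_cast(rotation);
  const glm::mat4 S = glm::scale(glm::mat4(1.0f), scale);
  return T * R * S;
}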

A node may reference a mesh, or it may just be an empty grouping node, so we check whether the mesh index is set before building it. We then recursively create the child nodes.

Here’s how to create a Mesh:

As we can see, a Mesh is made up of Primitives:

"meshes": [{"primitives": [{"attributes": {
          "NORMAL": 1."POSITION": 2."TEXCOORD_0": 3
        },
        "indices": 0."mode": 4."material": 0}]."name": "Mesh"}]Copy the code

What is this Primitive? Let’s take a look at the glTF documentation:

In glTF, meshes are defined as arrays of primitives. Primitives correspond to the data required for GPU draw calls. Primitives specify one or more attributes, corresponding to the vertex attributes used in the draw calls. Indexed primitives also define an indices property. Attributes and indices are defined as references to accessors containing corresponding data. Each primitive also specifies a material and a primitive type that corresponds to the GPU primitive type (e.g., triangle set).

See also: github.com/KhronosGrou…

A Primitive can be thought of as one component of a Mesh: a large Mesh can be split into several sub-parts, each of which is the unit of a single draw call, and each part can also be rendered on its own. In this model there is only one Primitive. A Primitive describes which attributes it has and where their data comes from: the values under attributes, like indices, are indexes into the accessors array.

std::shared_ptr<Mesh>
Engine::buildMesh(const tinygltf::Model &model, unsigned int meshIndex,
                  const std::shared_ptr<std::vector<GLuint>> &buffers,
                  const std::shared_ptr<std::vector<GLuint>> &textures) {
  auto meshPrimitives =
      std::make_shared<std::vector<std::shared_ptr<Primitive>>>();
  const auto &primitives = model.meshes[meshIndex].primitives;
  auto vaos = std::make_shared<std::vector<GLuint>>(primitives.size());
  GL_CHECK(glGenVertexArrays(vaos->size(), vaos->data()));
  for (auto i = 0; i < primitives.size(); ++i) {
    GL_CHECK(glBindVertexArray(vaos->at(i)));
    meshPrimitives->push_back(
        buildPrimitive(model, meshIndex, i, vaos, buffers, textures));
  }
  GL_CHECK(glBindVertexArray(0));
  return std::make_shared<Mesh>(meshPrimitives);
}

std::shared_ptr<Primitive>
Engine::buildPrimitive(const tinygltf::Model &model, unsigned int meshIndex,
                       unsigned int primitiveIndex,
                       const std::shared_ptr<std::vector<GLuint>> &vaos,
                       const std::shared_ptr<std::vector<GLuint>> &buffers,
                       const std::shared_ptr<std::vector<GLuint>> &textures) {
  const auto &primitive = model.meshes[meshIndex].primitives[primitiveIndex];
  for (auto &attribute : preDefinedAttributes) {
    const auto &attributeName = attribute.first;
    const auto &attributeLocation = attribute.second;
    const auto iterator = primitive.attributes.find(attributeName);
    if (iterator == primitive.attributes.end()) {
      continue;
    }
    const auto &accessor = model.accessors[(*iterator).second];
    const auto &bufferView = model.bufferViews[accessor.bufferView];
    const auto bufferIdx = bufferView.buffer;

    GL_CHECK(glEnableVertexAttribArray(attributeLocation));
    GL_CHECK(glBindBuffer(GL_ARRAY_BUFFER, buffers->at(bufferIdx)));

    const auto byteOffset = accessor.byteOffset + bufferView.byteOffset;
    GL_CHECK(glVertexAttribPointer(
        attributeLocation, accessor.type, accessor.componentType, GL_FALSE,
        bufferView.byteStride, (const GLvoid *)byteOffset));
  }
  std::shared_ptr<Primitive> meshPrimitive;
  if (primitive.indices >= 0) {
    const auto &accessor = model.accessors[primitive.indices];
    const auto &bufferView = model.bufferViews[accessor.bufferView];
    const auto bufferIndex = bufferView.buffer;
    GL_CHECK(glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buffers->at(bufferIndex)));
    meshPrimitive = std::make_shared<Primitive>(
        vaos->at(primitiveIndex), primitive.mode, accessor.count,
        accessor.componentType, accessor.byteOffset + bufferView.byteOffset);
  } else {
    const auto accessorIndex = (*begin(primitive.attributes)).second;
    const auto &accessor = model.accessors[accessorIndex];
    meshPrimitive =
        std::make_shared<Primitive>(vaos->at(primitiveIndex), primitive.mode,
                                    accessor.count, accessor.componentType);
  }
  meshPrimitive->setMaterial(
      buildMaterial(model, primitive.material, textures));
  return meshPrimitive;
}

An Accessor describes how to read a particular piece of data out of the buffer, and its bufferView pins down which byte range of which buffer that data lives in. For example:

// accessor 0
{
  "bufferView": 0,
  "byteOffset": 0,
  "componentType": 5123,
  "count": 36,
  "max": [23],
  "min": [0],
  "type": "SCALAR"
}

// buffer view 0
{
  "buffer": 0,
  "byteOffset": 768,
  "byteLength": 72,
  "target": 34963
}

Here, accessor 0 points to buffer view 0. The accessor specifies a byte offset of 0, and buffer view 0 specifies a byte offset of 768 into the buffer, so adding the two gives an offset of 768 bytes into the data block. The componentType 5123 corresponds to GL_UNSIGNED_SHORT, which is 2 bytes per element, and the count is 36; 36 elements of 2 bytes are exactly the 72 bytes given by the buffer view's byteLength, so everything lines up. The target value 34963 corresponds to GL_ELEMENT_ARRAY_BUFFER, i.e. vertex indices, so accessor 0 actually provides the array of vertex indices.
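
The same arithmetic can be written out in code. This illustrative helper (the function name is mine, not part of the engine) locates the index data of accessor 0, assuming a loaded tinygltf::Model and <cstdio>:

// Illustrative only: where do the indices of accessor 0 live inside the buffer?
void printIndexRange(const tinygltf::Model &model) {
  const auto &accessor   = model.accessors[0];
  const auto &bufferView = model.bufferViews[accessor.bufferView];
  const size_t start  = accessor.byteOffset + bufferView.byteOffset;  // 0 + 768 = 768
  const size_t stride = sizeof(unsigned short);   // componentType 5123 = GL_UNSIGNED_SHORT
  const size_t length = accessor.count * stride;  // 36 * 2 = 72 = bufferView.byteLength
  printf("indices: buffer %d, bytes [%zu, %zu)\n", bufferView.buffer, start,
         start + length);
}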

You can print it out to verify:

std::shared_ptr<std::vector<GLuint>> buildBuffers(const tinygltf::Model &model)
{
  auto buffers = std::make_shared<std::vector<GLuint>>(model.buffers.size(), 0);
  GL_CHECK(glGenBuffers(buffers->size(), buffers->data()));
  for (auto i = 0; i < model.buffers.size(); ++i) {
    GL_CHECK(glBindBuffer(GL_ARRAY_BUFFER, buffers->at(i)));
    GL_CHECK(glBufferData(GL_ARRAY_BUFFER, model.buffers[i].data.size(),
        model.buffers[i].data.data(), GL_STATIC_DRAW));
  }
  // Dump the 36 GL_UNSIGNED_SHORT indices stored at byte offset 768.
  for (int i = 768; i < 768 + 72; i += 2) {
    unsigned short index = 0;
    memcpy(&index, model.buffers[0].data.data() + i, 2);
    printf("index %d = %u\n", (i - 768) / 2, index);
  }
  GL_CHECK(glBindBuffer(GL_ARRAY_BUFFER, 0));
  return buffers;
}
index 0 = 0
index 1 = 1
index 2 = 2
index 3 = 3
index 4 = 2
index 5 = 1
index 6 = 4
index 7 = 5
index 8 = 6
index 9 = 7
index 10 = 6
index 11 = 5
index 12 = 8
index 13 = 9
index 14 = 10
index 15 = 11
index 16 = 10
index 17 = 9
index 18 = 12
index 19 = 13
index 20 = 14
index 21 = 15
index 22 = 14
index 23 = 13
index 24 = 16
index 25 = 17
index 26 = 18
index 27 = 19
index 28 = 18
index 29 = 17
index 30 = 20
index 31 = 21
index 32 = 22
index 33 = 23
index 34 = 22
index 35 = 21

As expected, there are 36 indices, with values ranging from 0 to 23, i.e. 24 distinct vertices. The cube has 6 faces, each face has 2 triangles, and each triangle has 3 vertex indices: 6 x 2 x 3 = 36.

Now let's look at the Primitive attributes:

"attributes": {
  "NORMAL": 1."POSITION": 2."TEXCOORD_0": 3
}

The attribute values here, like indices, are accessor indexes. For each attribute we look up its accessor to get the data type, stride, and offset, and pass them to glVertexAttribPointer. We also use a VAO, which records the glVertexAttribPointer configuration of all attributes in one object; otherwise every attribute would have to be reconfigured before each draw, which is more troublesome.

Creating the Primitive gives us the geometry. Next we create the Material, which holds the texture information:

std::shared_ptr<Material>
Engine::buildMaterial(const tinygltf::Model &model, unsigned int materialIndex,
                      const std::shared_ptr<std::vector<GLuint>> &textures) {
  auto baseColorIndex = model.materials[materialIndex]
                            .pbrMetallicRoughness.baseColorTexture.index;
  auto baseColorTexture =
      (baseColorIndex >= 0 ? textures->at(baseColorIndex)
                           : buildDefaultBaseColorTexture(model));
  const auto baseColorTextureLocation = GL_CHECK(
      glGetUniformLocation(program_->getProgram(), UNIFORM_BASE_COLOR_TEXTURE));
  return std::make_shared<Material>(baseColorTexture, baseColorTextureLocation);
}

Materials in glTF are more complex; see github.com/KhronosGrou…

The base color texture is also optional. If it is missing, we create a plain white texture as the default base color.
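
buildDefaultBaseColorTexture() is referenced in buildMaterial() above but not shown in the article; a minimal sketch of such a fallback, assuming it only has to return a GL texture name, could be a single white texel:

// Sketch only -- one possible implementation of the fallback referenced above:
// a 1x1 white texture, so sampling it leaves the fragment color unchanged.
GLuint Engine::buildDefaultBaseColorTexture(const tinygltf::Model & /*model*/) {
  const unsigned char white[4] = {255, 255, 255, 255};
  GLuint texture = 0;
  GL_CHECK(glGenTextures(1, &texture));
  GL_CHECK(glBindTexture(GL_TEXTURE_2D, texture));
  GL_CHECK(glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1, 1, 0, GL_RGBA,
                        GL_UNSIGNED_BYTE, white));
  GL_CHECK(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST));
  GL_CHECK(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST));
  GL_CHECK(glBindTexture(GL_TEXTURE_2D, 0));
  return texture;
}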

Now we have all the data we need before rendering.

Rendering

Let's start with the shaders:

// vertex shader
#version 300 es
precision mediump float;
layout(location = 0) in vec4 a_position;
layout(location = 1) in vec3 a_normal;
layout(location = 2) in vec2 a_texCoord0;
out vec2 v_texCoord0;
uniform mat4 u_modelViewProjectMatrix;
void main() {
    gl_Position = u_modelViewProjectMatrix * a_position;
    v_texCoord0 = a_texCoord0;
}

// fragment shader
#version 300 es
precision mediump float;
in vec2 v_texCoord0;
out vec4 outColor;
uniform sampler2D u_baseColorTexture;
void main() {
    outColor = texture(u_baseColorTexture, v_texCoord0);
}

The shaders are simple: the vertex shader transforms each vertex by the model-view-projection matrix and passes the texture coordinate through, and the fragment shader samples the base color texture with that coordinate.

A Primitive is the unit of a draw call, so we issue one draw call per Primitive:

void Primitive::draw() {
  material_->bind();
  GL_CHECK(glBindVertexArray(vao_));
  if (offset_ >= 0) {
    GL_CHECK(glDrawElements(mode_, count_, componentType_, (void *)offset_));
  } else {
    GL_CHECK(glDrawArrays(mode_, 0, count_));
  }
  GL_CHECK(glBindVertexArray(0));
}

Following the glTF configuration, glDrawElements is used if the primitive has vertex indices; otherwise glDrawArrays is used.

The index of the accessor that contains mesh indices. When this is not defined, the primitives should be rendered without indices using drawArrays().

See also: github.com/KhronosGrou…

To render, we traverse all the nodes, bind the program in the Engine, and enable depth testing.

The model-view-projection matrix is computed during node traversal. Note that a node's world matrix is the product of its own local matrix with those of all its ancestors, up to the root node.
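
getWorldMatrix() is used in drawFrame() below but its implementation is not shown in the article; assuming Node keeps a pointer to its parent and the local matrix set in buildNode(), a sketch could be:

// Sketch: the world matrix is the product of all ancestor matrices and the
// node's own local matrix (parent_ is assumed to be a std::weak_ptr<Node>).
glm::mat4 Node::getWorldMatrix() const {
  if (auto parent = parent_.lock()) {
    return parent->getWorldMatrix() * matrix_;
  }
  return matrix_;
}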

A Node's draw() eventually drills down to the Primitive draw calls.

void Engine::drawFrame() {
  if (!initialized_) {
    init();
    initialized_ = true;
  }
  program_->bind();
  GL_CHECK(glEnable(GL_DEPTH_TEST));
  GL_CHECK(glViewport(0, 0, width, height));
  GL_CHECK(glClearColor(0.0f, 0.0f, 0.0f, 0.0f));
  GL_CHECK(glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT));
  for (auto &scene : scenes_) {
    scene->traverse([&] (const std::shared_ptr<Node> &node) {
      auto modelViewProjectMatrix = camera_->getProjectMatrix() *
                                    camera_->getViewMatrix() *
                                    node->getWorldMatrix();
      auto location = GL_CHECK(glGetUniformLocation(
          program_->getProgram(), UNIFORM_MODEL_VIEW_PROJECT_MATRIX));
      GL_CHECK(glUniformMatrix4fv(location, 1, false,
                                  glm::value_ptr(modelViewProjectMatrix)));
      node->draw();
    });
  }
}

For simplicity we use our own Camera, although glTF can also define cameras. We move the camera in a circle around the look-at point (0, 0, 0), so we orbit around the glTF model:

extern "C" JNIEXPORT void
JNICALL Java_io_github_kenneycode_triangle_example_MainActivity_drawFrame(JNIEnv *env, jobject /* this */, jint width, jint height, jlong timestamp) {
    if (engine == nullptr) {
        engine = std::make_shared<triangle::Engine>(width, height);
        camera = std::make_shared<triangle::Camera>(glm::vec3(0.0 f.0.0 f, R), glm::vec3(0.0 f.0.0 f.0.0 f),
                                                    glm::vec3(0.0 f.1.0 f.0.0 f), 70.0 f.float(width) / height, 1.0 f.10.0 f);
        engine->setDefaultCamera(camera);
        engine->loadGLTF("/sdcard/Duck/glTF/Duck.gltf");
    }
    auto theta = -timestamp % 100000 / 1000.0 f;
    x = R * cos(theta);
    z = R * sin(theta);
    camera->setPosition(glm::vec3(x, 0.0 f, z));
    engine->drawFrame(a); }Copy the code

Here are some rendered results; the first is the model shown at the beginning of this article. Since only the base color texture is used, there is no PBR shading and no lighting, so the result is quite rough:

Thanks for reading! If you have any questions, feel free to share them in the comments section

The code is on my Github: github.com/kenneycode/…