Reading this article requires some basic knowledge of Cesium; its basic APIs will not be covered in depth. This article only introduces the drawing algorithm for anti-aliased pixel lines in Cesium.

Preface

Cesium provides quite a few types of line drawing, such as pixel-width lines, meter-width lines, dashed lines, arrow lines, etc.

The line types above can cover most business scenarios, but some scenarios cannot be met, for the following reasons:

  1. Their appearances are inconsistent, so they cannot be batched into a single Primitive, and rendering performance is not optimized
  2. Cesium's line segments are not anti-aliased, so the rendering quality is not ideal
  3. The arrow line style is not suitable for most map business scenarios
  4. Cesium's line segments do not support outlines
  5. In some business scenarios, the dash pattern of a dashed line should not shift as the map is dragged

As a result, you have to write custom Primitives for some map projects. Take the following business scenario as an example: the direction-arrow line in Mapbox:

To achieve the above effects, we first disassemble the requirements:

  1. A pixel-width line segment that supports anti-aliasing.
  2. An arrow texture mapped onto the line, with the arrow direction and spacing adapting as the zoom level changes.

In Mapbox, the “line” layer only renders the line segment itself, while text and icons are drawn in the “Symbol” layer. The position and orientation of individual symbols are recalculated in real time as the zoom level changes, and this effect is achieved by changing the attributes passed into the shader.

But in Cesium we need the arrows and line segments to share a single appearance, so that the geometries can be batched and merged into one Primitive for rendering, which improves performance many times over in real projects. Based on this premise, this article first implements the anti-aliased pixel line segment in Cesium; the next chapter explains the implementation of road arrows.

The following content requires some WebGL knowledge; if you are not familiar with shader syntax, it is recommended to learn it first. In addition, the shader snippets in this article only show the key parts of the algorithm.

Realization principle of pixel line segment algorithm

There are already many articles online about pixel line segment algorithms, and the open source library THREE.MeshLine also implements it very well. The algorithm in this article is consistent with it; here I only explain the implementation ideas in detail, which also serves as a review for myself.

The core idea of pixel line drawing is to use triangles to draw lines. A line with pixel width is regarded as the result of stitching multiple triangles, as shown below:

Given the path data P0, P1, and P2, we can split it into two segments: (P0, P1) and (P1, P2). From the unit vectors P0P1 and P1P2 we can get the upper and lower unit normal vectors of each segment. Multiplying a unit normal vector by a scalar representing the line width gives the expanded points: P0A, P0B, P1A, P1B for the first segment and P1C, P1D, P2A, P2B for the second, which together make up four triangles. However, four points are generated near P1, and if the four triangles are drawn as-is, the corner will clearly have a gap. Such a rendering result obviously does not meet business requirements, so the situation at P1 (the intermediate point) needs special treatment.
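The vertex layout described above can be sketched in plain JavaScript, independent of Cesium: every path point is emitted twice, once for each side of the line, and the quad between consecutive points is split into two triangles. The names `buildLineMesh` and `direction` here are illustrative, not Cesium API:

```javascript
// Build the per-vertex side flags and triangle indices for a polyline.
// Each path point i becomes vertices 2i (side A, direction +1) and
// 2i+1 (side B, direction -1); the actual offset along the normal is
// applied later in the vertex shader.
function buildLineMesh(points) {
  const directions = []; // +1 = offset up along the normal, -1 = down
  const indices = [];
  for (let i = 0; i < points.length; i++) {
    directions.push(1, -1);
  }
  for (let i = 0; i < points.length - 1; i++) {
    const a = 2 * i, b = 2 * i + 1, c = 2 * i + 2, d = 2 * i + 3;
    indices.push(a, b, c); // first triangle of the quad
    indices.push(c, b, d); // second triangle of the quad
  }
  return { directions, indices };
}
```

For a three-point path this yields six vertices and four triangles, matching the figure above.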

First, the unit vectors of the previous-to-current and current-to-next directions are obtained; adding them gives the Avg.Normal unit vector shown in the figure. The upper and lower unit normal vectors of this average vector are then easy to obtain, and offsetting along these two normal vectors gives the two points at the corner.

One thing to note here: the corner's unit normal vector is now known, but the scalar it must be multiplied by is not simply the line width. As the figure shows, the corner offset is longer than the line width. It can be conveniently calculated with a dot product, as shown in the figure below. If normal1 and normal2 are known, their dot product gives the cosine of the angle between them, that is, d / d2 = cos(angle). Since d is the line width, d2 is easily solved as d2 = d / cos(angle).
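The corner math above can be checked with a small worked example in plain JavaScript. Given the unit directions of the two segments meeting at the corner, we average them, rotate the average 90° to get the miter normal, and divide the half line width d by the cosine (obtained as the dot product of the miter normal and one segment normal) to get the corner offset d2. The function names here are illustrative:

```javascript
function normalize(v) {
  const len = Math.hypot(v[0], v[1]);
  return [v[0] / len, v[1] / len];
}

// dirPrev / dirNext: unit direction vectors of the segments meeting at the corner.
// halfWidth: the line width scalar d from the text.
function miterOffset(dirPrev, dirNext, halfWidth) {
  const normalPrev = [-dirPrev[1], dirPrev[0]]; // normal of the incoming segment
  const avg = normalize([dirPrev[0] + dirNext[0], dirPrev[1] + dirNext[1]]);
  const miterNormal = [-avg[1], avg[0]];        // corner (miter) normal
  // dot(miterNormal, normalPrev) = cos(angle), so d2 = d / cos(angle)
  const cosAngle = miterNormal[0] * normalPrev[0] + miterNormal[1] * normalPrev[1];
  return { miterNormal, length: halfWidth / cosAngle };
}

// A 90° corner: d2 = d / cos(45°) = d * sqrt(2)
const { length } = miterOffset([1, 0], [0, 1], 10);
console.log(length.toFixed(3)); // ≈ 14.142
```

For a straight-through point the cosine is 1 and the offset degenerates to the plain line width, as expected.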

With the principle of the pixel line algorithm explained, translating it into code is straightforward.

attribute vec3 position3DHigh;
attribute vec3 position3DLow;
attribute vec3 nextPosition3DHigh;
attribute vec3 nextPosition3DLow;
attribute vec3 prePosition3DHigh;
attribute vec3 prePosition3DLow;
attribute float direction; // +1.0 or -1.0: which side of the line this vertex sits on

uniform float width; // half line width in pixels

void main() {
  // Coordinates relative to the camera (czm_computePrePosition / czm_computeNextPosition
  // are custom helpers analogous to Cesium's built-in czm_computePosition)
  vec4 position = czm_computePosition();
  vec4 previous = czm_computePrePosition();
  vec4 next = czm_computeNextPosition();

  // Eye coordinates
  vec4 positionEC = czm_modelViewRelativeToEye * position;
  vec4 prevEC = czm_modelViewRelativeToEye * previous;
  vec4 nextEC = czm_modelViewRelativeToEye * next;

  // Screen coordinates
  vec4 positionWindow = czm_eyeToWindowCoordinates(positionEC);
  vec4 previousWindow = czm_eyeToWindowCoordinates(prevEC);
  vec4 nextWindow = czm_eyeToWindowCoordinates(nextEC);

  // Direction from the previous point
  vec2 directionToPrevWC = positionWindow.xy - previousWindow.xy;
  // Direction to the next point
  vec2 directionToNextWC = nextWindow.xy - positionWindow.xy;

  vec2 normalDirection = vec2(0.0, 0.0);
  float pixelScaler = width;
  if (abs(positionWindow.x - previousWindow.x) == 0.0 && abs(positionWindow.y - previousWindow.y) == 0.0) {
     // First point: there is no previous point, so use the next direction
     directionToNextWC = normalize(directionToNextWC); // unit direction vector
     normalDirection = vec2(-directionToNextWC.y, directionToNextWC.x); // its normal vector
  } else if (abs(positionWindow.x - nextWindow.x) == 0.0 && abs(positionWindow.y - nextWindow.y) == 0.0) {
     // Last point: there is no next point, so use the previous direction
     directionToPrevWC = normalize(directionToPrevWC);
     normalDirection = vec2(-directionToPrevWC.y, directionToPrevWC.x);
  } else {
     // Intermediate point
     directionToNextWC = normalize(directionToNextWC); // next direction vector
     directionToPrevWC = normalize(directionToPrevWC); // previous direction vector
     vec2 prevNormal = vec2(-directionToPrevWC.y, directionToPrevWC.x); // normal of the previous segment
     vec2 normalAvg = directionToPrevWC + directionToNextWC; // average direction vector
     normalDirection = normalize(vec2(-normalAvg.y, normalAvg.x)); // corner (miter) normal vector
     pixelScaler = width / dot(normalDirection, prevNormal); // corner offset length: d2 = d / cos(angle)
  }

  vec2 offset = normalDirection * direction * pixelScaler * czm_pixelRatio;
  vec2 windowCoords = positionWindow.xy + offset;
  gl_Position = czm_viewportOrthographic * vec4(windowCoords, -positionWindow.z, 1.0);
}
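The shader above reads each vertex's neighbors from the `prePosition` / `nextPosition` attributes, so those arrays have to be filled on the CPU. A hedged sketch in plain JavaScript (not Cesium API; the high/low splitting used for relative-to-eye precision is omitted): each duplicated vertex for point i stores point i-1 as "previous" and point i+1 as "next", and at the path ends the point itself is used, which is exactly what the shader's first/last-point checks detect:

```javascript
// Build flattened previous/next position arrays for a path of [x, y, z] points.
// Every path point is emitted twice (once per side of the line), so each
// neighbor position is pushed twice as well.
function buildAdjacencyAttributes(points) {
  const prev = [];
  const next = [];
  for (let i = 0; i < points.length; i++) {
    const p = points[Math.max(i - 1, 0)];                 // clamp at the first point
    const n = points[Math.min(i + 1, points.length - 1)]; // clamp at the last point
    for (let k = 0; k < 2; k++) {
      prev.push(...p);
      next.push(...n);
    }
  }
  return { prev, next };
}
```

These arrays would then be wrapped as Cesium GeometryAttributes alongside the position attribute.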

Thus pixel width line segments can be drawn, as shown below:

Implementing anti-aliasing for the pixel line segment

However, as the image above shows, the line's jagged edges are very noticeable and the rendering quality is not ideal, so the aliasing needs further treatment. When I first worked on this requirement, I referred to the solution in the blog post Draw a line in WebGL, whose core is the following shader code:

varying vec2 v_normal; // the normal vector for each vertex, interpolated per fragment

// Smooth the alpha from 1.0 down to 0.0 as the normal length goes from 0.98 to 1.0
// (the defect of this algorithm is obvious when the pixel line width is small)
float blur = 1.0 - smoothstep(0.98, 1.0, length(v_normal));
gl_FragColor = v_color;
gl_FragColor.a *= blur;

Let's see the result:

The effect is better than before, but there is still some aliasing, and the anti-aliasing is barely visible when the pixel width is very small, which is the defect of this algorithm, as shown in the figure below:

So this scheme was ruled out. After reading the source code of various excellent open source projects, I found that there are two main anti-aliasing schemes for line segments. The first is to increase the number of vertices, which is also mentioned in the blog above and was Mapbox's early approach. The second relies on the normal vector for blurring, which is Mapbox's current approach. After repeated comparison, Mapbox's current anti-aliasing scheme was chosen: first, it does not generate extra vertices, so it has less impact on performance; second, it is easier to retrofit onto the current code. This line segment anti-aliasing scheme is introduced next:

varying vec2 v_normal; // the normal vector for each vertex, interpolated per fragment

float dist = length(v_normal) * lineHalfWidth; // distance in pixels from the line center
// The closer a fragment is to the line center, the smaller dist is and the larger blur is;
// the closer it is to the edge, the larger dist is and the smaller blur is
float blur = clamp(lineHalfWidth - dist, 0.0, 1.0);
gl_FragColor = v_color;
gl_FragColor.a *= blur;
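Why the clamp-based formula holds up at small widths can be seen by evaluating both blur functions in plain JavaScript (`smoothstep` is re-implemented here to mirror the GLSL builtin; the function names are illustrative):

```javascript
// GLSL-style smoothstep: Hermite interpolation between edge0 and edge1.
function smoothstep(edge0, edge1, x) {
  const t = Math.min(Math.max((x - edge0) / (edge1 - edge0), 0), 1);
  return t * t * (3 - 2 * t);
}

// Scheme 1: fades only over the outer 2% of the half-width, so for a
// 1-pixel half-width the transition band is just 0.02 px, far below one pixel.
function blurSmoothstep(normalLength) {
  return 1 - smoothstep(0.98, 1.0, normalLength);
}

// Scheme 2 (Mapbox-style): the transition band is always one pixel wide
// in screen space, regardless of how thin the line is.
function blurClamp(normalLength, lineHalfWidth) {
  const dist = normalLength * lineHalfWidth;
  return Math.min(Math.max(lineHalfWidth - dist, 0), 1);
}
```

With a 4-pixel half-width, scheme 2 stays fully opaque until one pixel from the edge and then fades linearly to zero; with a sub-pixel half-width it fades the whole line proportionally, which is what keeps thin lines smooth.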

Rendering again with this anti-aliasing optimization, as shown below, the line has very good anti-aliasing regardless of its thickness.

Conclusion

So far, we can draw high-performance anti-aliased pixel-width line segments in Cesium. In fact, most seemingly complicated requirements in WebGL come down to a few mathematical formulas; learning math well truly gets twice the result with half the effort.

Later articles will cover the implementation of outlines, dashed lines, and direction-arrow line segments. With this knowledge, there is no need to fear any line segment requirement in WebGL.