Introduction

Global Illumination (GI) strictly refers to illumination under direct light plus complete indirect light. In practice, though, any illumination that captures indirect-light effects (even only partially) can be called GI. Although the algorithms in this article approach the problem from the perspective of environment lighting, they still fall into the category of GI methods.

GI methods are generally divided into two groups:

  • One group computes GI for arbitrary objects through a single standard rendering process. This is usually computationally expensive but more physically accurate; a typical example is offline ray tracing.
  • The other group treats GI effects as separate components, so that a combination of them can be chosen to fit different performance budgets (usually much cheaper, which matters especially for real-time rendering) while still delivering acceptable GI (though often not strictly physically correct). Typical examples are the approaches described in this post and other fully dynamic GI methods (such as RSM, SSAO, SSRT).

Image Based Lighting (IBL)

Image-Based Lighting (IBL), simply put, is a physically based object rendering method that stores the environment lighting around an object in an environment map.

  • IBL works for diffuse/glossy/specular rendering (which covers almost all objects), depending on the environment map: the higher the map's resolution, the more high-frequency information it carries, which is what rendering specular objects needs; the lower the resolution, the more it represents only low-frequency information, which is still adequate for diffuse objects and saves storage.
  • IBL can be used for dynamic environment lighting: the environment map is rendered in real time. This has some overhead and is generally used only for a small number of specular objects.
  • IBL can be used for static environment lighting: the environment map is pre-rendered. This requires the environment to be static.

Static environment lighting with IBL is similar to a light map. The difference is that IBL stores the static environment lighting itself, while a light map stores the shading result produced by that static lighting.

The Split Sum Approximation

How does IBL work with an environment map? Essentially, we are still solving the rendering equation at each shading point, where all of the incident radiance Li is provided by the environment map and occlusion (visibility) is not considered.
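
Concretely, the quantity being approximated is the environment-lighting form of the rendering equation (standard notation; visibility is dropped as just noted):

```latex
L_o(p,\omega_o) \approx \int_{\Omega^+} L_i(\omega_i)\, f_r(p,\omega_i,\omega_o)\, \max(0,\ n \cdot \omega_i)\, \mathrm{d}\omega_i
```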

The most common algorithm in IBL is based on the Split Sum Approximation. Consider the two typical BRDF lobe shapes (figure: glossy/specular BRDF lobe on the left, diffuse BRDF lobe on the right):

  • If the BRDF is glossy/specular, its lobe tends to be narrow and petal shaped, meaning that only a small solid angle of the environment contributes to the integral.
  • If the BRDF is diffuse, its lobe tends to cover the hemisphere uniformly, meaning that the output of fr varies little (it may even be constant) no matter which direction the ambient light comes from.

This reminds us of a classical approximation formula:

∫_Ω f(x) g(x) dx ≈ ( ∫_{Ω_G} f(x) dx / ∫_{Ω_G} dx ) · ∫_Ω g(x) dx

For this approximation to be accurate, at least one of the following two conditions needs to hold:

  • The integration domain Ω_G is small (corresponding to a small specular lobe)
  • g(x) is smooth, i.e. does not vary much (fr in the diffuse case)

Therefore, the rendering equation can be split into two parts (the environment lighting integral and the BRDF integral), and each part is then accelerated with precomputation, as described below.
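
Written out in the usual split-sum form (standard notation rather than a quote from the original figures):

```latex
L_o(\omega_o) \approx
\underbrace{\frac{\int_{\Omega_{f_r}} L_i(\omega_i)\,\mathrm{d}\omega_i}
                 {\int_{\Omega_{f_r}} \mathrm{d}\omega_i}}_{\text{lighting term: prefiltered environment map}}
\cdot
\underbrace{\int_{\Omega^+} f_r(\omega_i,\omega_o)\,\max(0,\ n \cdot \omega_i)\,\mathrm{d}\omega_i}_{\text{BRDF term: precomputed lookup table}}
```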

  • The result of the Split Sum method is almost indistinguishable from that of the original Monte Carlo approach
  • The performance overhead drops dramatically because the environment map no longer needs to be sampled many times per shading point

Prefiltering the environment map

Since we already have an environment map (rendered at run time or pre-rendered) that stores the lighting, computing the lighting integral would normally require many ray samples over the BRDF lobe's solid angle. However, there is an almost equivalent approach that avoids multiple sampling at run time: after pre-filtering (blurring) the environment map, we only need to sample the filtered map once, in the reflected light direction, to obtain the integral. Naturally, the sharper the BRDF lobe, the smaller the integration range and the less blurred the map needs to be; conversely, the wider the BRDF lobe, the larger the integration range and the blurrier the map we should use.

Using mipmaps with trilinear interpolation, we can obtain the filtered lighting for any blur level and any 2D position.
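
To make the prefiltering idea concrete, here is a small sketch (my own toy construction, not from the original post): it builds a chain of increasingly blurred copies of an equirectangular environment map and blends between the two nearest levels according to roughness, which is the role mipmapping with trilinear filtering plays on the GPU.

```python
import numpy as np

def build_prefiltered_mips(env, levels=5):
    """Build a chain of progressively blurrier copies of an equirectangular
    environment map (H x W x 3). A simple box filter stands in for the
    BRDF-shaped filter a real implementation would use."""
    mips = [env.astype(np.float64)]
    for _ in range(1, levels):
        prev = mips[-1]
        # 2x2 box-filter downsample: each mip is blurrier / lower frequency.
        h, w = prev.shape[0] // 2, prev.shape[1] // 2
        down = prev[:2 * h, :2 * w].reshape(h, 2, w, 2, 3).mean(axis=(1, 3))
        mips.append(down)
    return mips

def sample_mip(mip, u, v):
    """Nearest-texel lookup at normalized (u, v) in one mip level."""
    h, w, _ = mip.shape
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return mip[y, x]

def sample_prefiltered(mips, u, v, roughness):
    """Pick a blur level from roughness (assumed linear mapping) and blend
    linearly between the two nearest mip levels, i.e. the 'trilinear' idea
    along the mip axis."""
    level = roughness * (len(mips) - 1)
    lo = int(np.floor(level))
    hi = min(lo + 1, len(mips) - 1)
    t = level - lo
    return (1 - t) * sample_mip(mips[lo], u, v) + t * sample_mip(mips[hi], u, v)

# Tiny usage example with a random "environment map".
env = np.random.rand(64, 128, 3)
mips = build_prefiltered_mips(env)
print(sample_prefiltered(mips, u=0.25, v=0.5, roughness=0.7))
```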

Precomputing the BRDF

Let’s review the Microfacet BRDF components:

  • F(l, h): Fresnel equation, describing the proportion of light a surface reflects at different incidence angles.
  • G(l, v, h): Geometry function, describing the shadowing and masking of the microfacets, i.e. the fraction of microfacets with m = h that are not shadowed or masked.
  • D(h): Normal Distribution Function, describing the distribution of microfacet normals, i.e. the concentration (per unit surface area) of microfacets oriented exactly so that they reflect light from l to v.

As can be expected, the integral result of this BRDF depends on three parameters:

  • F0 (Fresnel term coefficient)
  • α (roughness)
  • θv (the angle between the viewing direction and the normal, which in turn determines θvh and θh)

We can factor the F0 term out and split the BRDF integral into two integrals, each of which depends on one parameter fewer. In this way, for a given material model (the same BRDF), we can precompute a two-dimensional lookup table over the remaining two parameters α and θv:
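
Below is a minimal numerical sketch of building one entry of such a table. It assumes a GGX normal distribution, a Schlick-Smith geometry term, and Schlick's Fresnel (common choices, not something stated in this post), and factors F0 out so the integral becomes F0·A + B; a real implementation would tabulate A and B over a grid of (α, θv) and usually use importance sampling instead of brute-force quadrature.

```python
import numpy as np

def d_ggx(n_dot_h, alpha):
    # GGX / Trowbridge-Reitz normal distribution function.
    a2 = alpha * alpha
    denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (np.pi * denom * denom)

def g_smith_schlick(n_dot_v, n_dot_l, alpha):
    # Schlick-Smith geometry term with the k commonly used for IBL.
    k = alpha * alpha / 2.0
    g1 = lambda x: x / (x * (1.0 - k) + k)
    return g1(n_dot_v) * g1(n_dot_l)

def integrate_brdf(alpha, n_dot_v, n_theta=64, n_phi=128):
    """Numerically integrate the microfacet BRDF * cos over the hemisphere,
    with the Fresnel F0 factored out:  full integral = F0 * A + B."""
    v = np.array([np.sqrt(1.0 - n_dot_v ** 2), 0.0, n_dot_v])  # view dir, n = +z
    A = B = 0.0
    dtheta, dphi = (np.pi / 2) / n_theta, (2 * np.pi) / n_phi
    for i in range(n_theta):
        theta = (i + 0.5) * dtheta
        for j in range(n_phi):
            phi = (j + 0.5) * dphi
            l = np.array([np.sin(theta) * np.cos(phi),
                          np.sin(theta) * np.sin(phi),
                          np.cos(theta)])                        # light dir
            h = l + v
            h /= np.linalg.norm(h)                               # half vector
            n_dot_l, n_dot_h, v_dot_h = l[2], h[2], float(v @ h)
            if n_dot_l <= 0.0:
                continue
            spec = (d_ggx(n_dot_h, alpha) *
                    g_smith_schlick(n_dot_v, n_dot_l, alpha)) / (4.0 * n_dot_l * n_dot_v + 1e-6)
            fc = (1.0 - v_dot_h) ** 5                            # Schlick factor
            w = spec * n_dot_l * np.sin(theta) * dtheta * dphi   # quadrature weight
            A += (1.0 - fc) * w
            B += fc * w
    return A, B

# One LUT entry; a full table would loop over a grid of (alpha, n_dot_v).
print(integrate_brdf(alpha=0.5, n_dot_v=0.7))
```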

Precomputed Radiance Transfer (PRT)

Precomputed Radiance Transfer (PRT) is a physically based object rendering method built around precomputing the radiance transfer. Radiance transfer can be understood as the light-transport information of the object itself: its self-shadowing/AO, the interreflection between its surfaces, and so on. PRT is therefore usually suited to static objects/materials under dynamic lighting.

  • PRT can handle static environment lighting: not only the transfer but also the lighting part (the radiance) is precomputed. This allows physically based rendering at run time with low overhead, although the dynamism of the lighting is limited (it need not be fully static).
  • PRT can handle dynamic environment lighting: only the transfer is precomputed.

It should be added that, before PRT, one of the major challenges of real-time rendering was shadows and interreflection, not to mention caustics. The main difficulty is that these phenomena require solving the integral of incident light over the hemisphere at every point (the rendering equation), which is very hard to do in real time. PRT has two core components:

  • Basis encoding of the environment lighting, for example spherical harmonic lighting;
  • Radiance transfer, which maps incident light to outgoing radiance including shadows, interreflections, etc.; it is precomputed, stored, and applied directly to the lighting during real-time rendering.

Spherical Harmonics (SH)

Any function can be expressed as a linear combination of a series of basis functions. The most typical example is the Fourier transform in signal processing, which expresses f(x) as a linear combination of another series of basis functions (sinusoidal harmonics of various frequencies). Spherical Harmonics (SH) are a series of 2D basis functions defined on the sphere, somewhat similar to a 2D Fourier series, and they are well suited to spherical functions f(ω) (functions whose argument is a unit direction vector).

The figure above shows the first few orders of spherical harmonics. All SH within one order share the same frequency; the higher the order, the higher the frequency and the more detail can be expressed. To reconstruct an arbitrary function exactly we would need SH of infinite order; if we only need an approximation of the function (in other words, only its low-frequency information), then the first few orders suffice (all basis functions of the orders l = 0, l = 1, …, l = n).

Recall what we learned in calculus: Taylor's formula approximates a function f(x) near x = x0 by an n-th degree polynomial in (x − x0) built from the first n derivatives at x0. The approximation of a spherical function by spherical harmonics can be thought of in the same spirit as a Taylor expansion of f(x).

The exact form of the SH basis functions is rather involved, so it will not be detailed here. Below are some advantages of using SH as the basis for spherical functions (a small numerical sketch of projection and reconstruction follows the list):

  • The basis functions are orthonormal.
  • The SH coefficients are obtained easily by projection: ci = ∫ f(ω)·Bi(ω) dω.
  • The spherical function is reconstructed easily as the dot product of the coefficient vector and the basis-function vector (the first n orders contain n² basis functions/coefficients).
  • Product projection: the product c(ω) = a(ω)·b(ω) can itself be projected onto the SH basis.
  • Interpolation is supported: interpolating SH coefficients is equivalent to interpolating the reconstructed function values.
  • Rotational invariance: rotating the function, i.e. evaluating f(R(ω)), is equivalent to applying a corresponding rotation R_SH to its SH coefficients.
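
As promised above, here is a small numerical sketch of projection and reconstruction, using only the first two SH bands and Monte Carlo integration over the sphere. The normalization constants are the standard real-SH values; everything else (the test function, sample counts) is a toy example of my own.

```python
import numpy as np

def sh_basis(d):
    """First two bands (l = 0, 1) of the real spherical harmonics, 4 values."""
    x, y, z = d
    return np.array([0.282095,            # Y_0^0
                     0.488603 * y,        # Y_1^-1
                     0.488603 * z,        # Y_1^0
                     0.488603 * x])       # Y_1^1

def random_directions(n, rng):
    """Uniformly distributed unit vectors on the sphere."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def project(f, dirs):
    """Projection: c_i = integral of f(w) * B_i(w) dw, estimated by Monte Carlo."""
    vals = np.array([f(d) * sh_basis(d) for d in dirs])
    return vals.mean(axis=0) * 4.0 * np.pi        # average times sphere area

def reconstruct(coeffs, d):
    """Reconstruction: f(w) ~= dot(coefficient vector, basis vector)."""
    return float(coeffs @ sh_basis(d))

rng = np.random.default_rng(0)
dirs = random_directions(20000, rng)

# A smooth (low-frequency) test function on the sphere: soft light from +z.
f = lambda d: 1.0 + 0.5 * d[2]
coeffs = project(f, dirs)
d = np.array([0.0, 0.0, 1.0])
print(f(d), reconstruct(coeffs, d))               # the two values should be close
```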

Spherical harmonic lighting

The rendering equation is split into two spherical functions (a lighting function and a transfer function), and each of these spherical functions is then represented with SH:

  • For the lighting part, we only need to store a few SH coefficients up front instead of the entire environment map.
  • For the transfer part, only a few SH coefficients (diffuse case) or an SH matrix (glossy case) are precomputed and stored; the precomputed transfer then provides self-shadowing and interreflection at minimal cost during real-time rendering.

However, SH lighting generally uses only order-3 SH, so the lighting and transfer information it represents is low frequency; it applies only to diffuse and glossy objects, not to specular objects.

Spherical harmonic lighting for diffuse objects

The idea for rendering a diffuse object is to evaluate

Lo = ∫_{Ω+} Le(ω) · ρ · V(ω) · max(0, n·ω) dω

where Le stands for the environment lighting, ρ is the BRDF term, and V is the visibility (how unoccluded a direction is, which typically shows up as self-shadowing).

  • Because the object is diffuse, its BRDF is a constant ρ (the same value regardless of viewing direction), so the result is independent of the view direction r and ρ can be pulled out of the integral.
  • The lighting part can be expressed with SH: Le(ω) ≈ ∑i li·Bi(ω)
  • The transfer part T(ω) = V(ω)·max(0, n·ω) can likewise be expressed with SH: T(ω) ≈ ∑j tj·Bj(ω)

With the SH coefficient vectors of the lighting part (li) and the transfer part (ti) in hand, the shading integral reduces, thanks to the orthonormality of the basis, to a simple dot product of the two coefficient vectors: Lo ≈ ρ · ∑i li·ti.

At this point we can see that the coefficients li carry the lighting information, while the coefficients ti carry the light-transfer information at a particular vertex. For the whole model, the precomputation therefore produces one vector of lighting coefficients plus one vector of transfer coefficients per model vertex.
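
Putting the diffuse case together, run-time shading reduces to one dot product per vertex between the lighting coefficients and the precomputed transfer coefficients. A schematic sketch follows; the array shapes and the way the albedo ρ is applied are my own assumptions, not code from the post.

```python
import numpy as np

def shade_diffuse_prt(light_coeffs, transfer_coeffs, albedo):
    """Diffuse PRT at run time: per-vertex color = albedo * dot(L, T).

    light_coeffs:    (n_sh,)            SH coefficients of the environment light
    transfer_coeffs: (n_verts, n_sh)    precomputed per-vertex transfer coefficients
    albedo:          (n_verts, 3)       diffuse albedo (rho) per vertex
    """
    scalar = transfer_coeffs @ light_coeffs          # (n_verts,) dot products
    return albedo * scalar[:, None]                  # (n_verts, 3) vertex colors

# Toy usage with 9 SH coefficients (3 bands) and 4 vertices.
rng = np.random.default_rng(1)
colors = shade_diffuse_prt(rng.random(9), rng.random((4, 9)), np.full((4, 3), 0.8))
print(colors.shape)   # (4, 3)
```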

Spherical harmonic lighting for glossy objects

The idea of spherical harmonic lighting for glossy objects: because a glossy BRDF depends not only on the incident direction but also on the viewing direction, the same surface point looks different from different viewpoints. The glossy BRDF is a four-dimensional function (its parameters include the view direction r as well as ω). If the BRDF is folded into the transfer term, the transfer function gains an extra two-dimensional parameter. Li and Tij below are the SH vector and SH matrix corresponding to the lighting and the transfer function respectively. The precomputation proceeds as follows:

  1. For the environment lighting, precompute the SH coefficient vector of the lighting L⃗, whose elements are Li = ∫_{Ω+} L(ω)·Bi(ω) dω.
  2. For each vertex, precompute the transfer matrix T, whose elements are Tij = ∫_{Ω+} Ti(r)·Bj(r) dr, where Ti(r) = ∫_{Ω+} T(r, ω)·Bi(ω) dω.

Because glossy objects need to store a matrix T per vertex, the storage overhead is noticeably higher, so this approach is not used very often.
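
For completeness, the run-time evaluation in this formulation (written in common notation, not quoted from the original) is a matrix-vector product followed by an SH evaluation in the view direction r:

```latex
L_o(r) \approx \sum_j \Big( \sum_i L_i\, T_{ij} \Big) B_j(r)
```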

Light Probe

Light Probes are a scene-level GI scheme. Several probe points are placed in the scene (they can be thought of as points that sense incoming light from all directions). After the environment lighting at each probe has been computed, at run time the probes around an object are interpolated to obtain the environment lighting the object receives.

In fact, PRT can also be viewed as a probe-based GI solution: every vertex of a PRT object is a probe, and the lighting information it receives is precomputed as SH coefficients. A shading point then obtains its own SH coefficients (i.e. its incoming lighting) by barycentric interpolation from the vertices of its triangle. The difference is that in PRT every triangle vertex is a probe, and together these probes produce the GI of a single object, whereas a typical Light Probe scheme places probes at intervals throughout the scene, and together they provide GI for every object in the scene.
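
As a rough illustration of the run-time side, here is a toy sketch of my own, assuming SH-encoded probes on a regular grid and plain trilinear interpolation with no visibility handling, which is exactly what causes the bleeding problem described next.

```python
import numpy as np

def sample_probe_grid(probes, origin, spacing, p):
    """Trilinearly interpolate per-probe SH coefficients at world position p.

    probes:  (nx, ny, nz, n_sh) SH coefficients stored at each grid probe
    origin:  world position of probe (0, 0, 0)
    spacing: distance between neighbouring probes
    """
    g = (np.asarray(p, dtype=float) - origin) / spacing     # continuous grid coords
    i0 = np.clip(np.floor(g).astype(int), 0, np.array(probes.shape[:3]) - 2)
    t = g - i0                                               # fractional part in [0, 1]
    out = np.zeros(probes.shape[3])
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((t[0] if dx else 1 - t[0]) *
                     (t[1] if dy else 1 - t[1]) *
                     (t[2] if dz else 1 - t[2]))             # trilinear weight
                out += w * probes[i0[0] + dx, i0[1] + dy, i0[2] + dz]
    return out

# Toy usage: a 4x4x4 grid of 9-coefficient probes, sampled at one point.
rng = np.random.default_rng(2)
probes = rng.random((4, 4, 4, 9))
print(sample_probe_grid(probes, origin=np.zeros(3), spacing=1.0, p=[1.3, 2.7, 0.5]))
```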

However, light probes often suffer from light bleeding: a point is affected by probes it should have no geometric light path to, which is common on the two sides of an occluding wall.

Light Map

Generally, a light map records the shading result produced by the static part of the lighting. The baking method is not restricted: ray tracing, radiosity, shadowing, Ambient Occlusion (AO), and other algorithms can all be used.

The light map approach in effect splits the lighting into a static part and a dynamic part, with the light map recording the shading result due to the static part. The objects involved in baking (position, shape, material) and the lights must therefore be static, which is a significant limitation.

Conclusion

Congge's summary is really good!!

  • IBL is a semi-dynamic GI method (the environment lighting may be dynamic or static, objects may be dynamic, and of the material only the roughness parameter is variable); it is typically used for rendering specular objects
  • PRT is a semi-dynamic GI method (the environment lighting may be dynamic or static, and even static lighting can be rotated; objects must not deform; materials are static); it is commonly used for rendering diffuse/glossy objects
  • Light Probes are a semi-dynamic GI method (static environment lighting, dynamic objects) that provides appropriate static lighting information to any object in the scene in real time
  • Light Maps are a fully static GI method (static lighting, static objects) that provides static objects in the scene with precomputed shading results from the static part of the lighting

References

  • Real-time Environment Lighting Based On Precomputation – KillerAery – cnblogs.com
  • GAMES202: High Quality Real-Time Rendering – bilibili
  • Introduction to the PRT algorithm – CSDN blog