1) Questions about checking sRGB for textures 2) Increased face count when lighting is enabled 3) Modifying material properties of a UGUI Image 4) How to display the UniWebView interface behind the Unity UI 5) High overhead of Internal_CreatePlayable in Timeline


This is the 254th installment of UWA's technical knowledge sharing. Today we have again selected a number of questions related to development and optimization. Estimated reading time: 10 minutes.

UWA Q&A Community: Answer.uwa4d.com UWA QQ Group 2:793972859 (The group is full)

Rendering

Q: I searched the UWA Q&A community and found this question: https://answer.uwa4d.com/question/5bd1724fae74300ab0497bed. Its conclusion covers the combinations Linear Space + Gamma Texture and Linear Space + non-Gamma Texture.

The Color Space setting in Player Settings indicates whether the imported image was created in Gamma space or in Linear space.

According to the discussion here: https://forum.unity.com/threads/confusion-about-gamma-vs-linear.496053, if the value returned by tex2D is Pow(Origin_Color_Value, 2.2), assuming a gamma of 2.2, then the R channel value in the fragment shader should be the uncorrected one. So the mid-value of u in the heat map should be around 0.5, as in the first heat map in the discussion post above, but what is actually shown is the second heat map.

If you want to use the color value as data rather than as a color, you cannot afford to get the numeric comparison wrong.

A1: Under Gamma Space, whether sRGB is checked makes no difference.

In Linear Space, Unity performs a Remove Gamma Correction if sRGB is checked. The gray level of the u=0.5 column in the figure then becomes 0.25 after the Remove Gamma Correction, as shown in the figure below:



The X axis represents the intensity of light, and the Y axis represents the gray value

The Y value of the blue line represents the gray value perceived by human eyes, i.e. the visual gray value; the red line represents the actual gray value in physical space, proportional to light intensity.

The gray value of the u=0.5 column in the gradient image is 0.5, which is the visual gray value. After the Remove Gamma Correction, the actual gray value of 0.25 is what participates in the Fragment Shader calculation, so the second heat map is what gets displayed.
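To make the numbers concrete, here is a minimal C# sketch (not part of the original answer) showing what Remove Gamma Correction does to the stored value 0.5: both a plain 2.2 power curve and Unity's built-in sRGB helper give a linear value of roughly 0.21–0.22, which is the "about 0.25" actual gray value that then feeds the Fragment Shader.

```csharp
using UnityEngine;

// Minimal sketch (not from the original answer): what "Remove Gamma Correction"
// does numerically to the stored value 0.5 of the u = 0.5 column.
public class GammaCorrectionExample : MonoBehaviour
{
    void Start()
    {
        float visualGray = 0.5f; // value stored in the sRGB (gamma-encoded) texture

        // Simple power approximation used in the discussion (gamma = 2.2).
        float approxLinear = Mathf.Pow(visualGray, 2.2f);        // ~0.218

        // Unity's built-in helper, which applies the exact sRGB curve.
        float exactLinear = Mathf.GammaToLinearSpace(visualGray); // ~0.214

        Debug.Log($"pow(0.5, 2.2) = {approxLinear}, GammaToLinearSpace(0.5) = {exactLinear}");
    }
}
```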

Thanks to June@UWA Q&A community for the answer.

A2: You can refer to this article: https://developer.nvidia.com/gpugems/gpugems3/part-iv-image-effects/chapter-24-importance-being-linear

Generally, the pictures we see are in “display space”: they look correct on a monitor because the image has already gone through Gamma Correction, i.e. the original values have been raised to the power of 1/2.2. Such an image can be understood as nonlinear. With Linear space selected in Unity, you need to check sRGB for such nonlinear images so that they render correctly. After sRGB is checked, the format in memory changes to ETC2_EAC_RGBA8_SRGB (assuming the image uses the ETC2 8-bit compressed format), as shown in the following figure:

With sRGB unchecked, the format is ETC2_RGB8_UNORM, as shown below:

For formats with the _SRGB suffix, the GPU automatically removes the Gamma Correction when sampling the texture, i.e. the value is raised to the power of 2.2, but the original data is not modified. In Gamma space, the format is ETC2_RGB8_UNORM whether sRGB is checked or not.

Back to the original question: in Gamma space the rendering result shows the four colors evenly distributed, because no Gamma Correction is removed in Gamma space. This means the data value of the “nonlinear image” in the texture is 0.5 at u=0.5 (u is the texture UV coordinate). When switching to Linear space, the data does not change: at u=0.5 the data is still 0.5. With sRGB checked, the GPU raises the value to the power of 2.2 during texture sampling, which comes out to roughly 0.25. According to the Shader’s calculation, everything with color.r < 0.25 is drawn red, that is, everything with u <= 0.5 is red, so half of the image is red.
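As a practical complement (not part of the original answer), the sRGB flag discussed above can also be toggled from an editor script via TextureImporter.sRGBTexture; the asset path and menu item below are hypothetical, and the resulting *_SRGB or *_UNORM runtime format can then be checked in the Inspector as in the figures above.

```csharp
using UnityEditor;
using UnityEngine;

// Editor-only sketch (place under an Editor folder): toggles the sRGB flag
// on a texture importer. "Assets/Textures/Gradient.png" is a hypothetical path.
public static class SrgbToggleExample
{
    [MenuItem("Tools/Toggle sRGB On Gradient Texture")]
    static void Toggle()
    {
        const string path = "Assets/Textures/Gradient.png"; // hypothetical asset
        var importer = AssetImporter.GetAtPath(path) as TextureImporter;
        if (importer == null)
        {
            Debug.LogWarning($"No texture importer found at {path}");
            return;
        }

        // With sRGB enabled and the project in Linear color space, the runtime
        // format gains the _SRGB suffix and the GPU removes gamma correction
        // automatically when sampling.
        importer.sRGBTexture = !importer.sRGBTexture;
        importer.SaveAndReimport();
        Debug.Log($"sRGB (Color Texture) is now {importer.sRGBTexture} for {path}");
    }
}
```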

Thanks to Xuan@UWA Q&A community for the answer.

Rendering

Q: The number of faces displayed in Unity should be determined by all the vertex data uploaded to the GPU in that frame, so why doesn't the GPU cache this part? When lighting is turned on, the face count increases, which is equivalent to uploading the model's vertex data multiple times. If this part were cached, wouldn't it improve performance a lot? If the GPU cannot do this caching, is it because the GPU cache is too small, or because the GPU cannot determine what to cache? If we modify the engine code, could we add such caching for our project?

A: Lighting itself does not affect the DrawCall or face count. Real-time shadows, however, require rendering a depth map (shadow map), which is equivalent to rendering the scene again from a different viewpoint, with data different from the current camera's. You can open the Frame Debugger to inspect the entire rendering process.
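If the extra faces contributed by the shadow pass are a concern, one common mitigation, offered here as a sketch rather than as part of the original answer, is to disable shadow casting on renderers that do not need it, so their triangles are not submitted a second time for the shadow-map pass.

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch: turn off shadow casting on child renderers that do not need it,
// so their geometry is not rendered again into the shadow (depth) map.
public class DisableShadowCasting : MonoBehaviour
{
    void Start()
    {
        foreach (var childRenderer in GetComponentsInChildren<Renderer>())
        {
            childRenderer.shadowCastingMode = ShadowCastingMode.Off;
            childRenderer.receiveShadows = false; // optional: also skip receiving shadows
        }
    }
}
```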

Thanks to StriteR@UWA Q&A community for the answer.

UGUI

Q: Can a UGUI Image's material properties be modified by setting a MaterialPropertyBlock? If so, how?

A: UGUI does not support MaterialPropertyBlock; please refer to:


https://forum.unity.com/threads/big-problem-with-lacking-materialpropertyblock-for-ui-image.506941
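Since MaterialPropertyBlock is ignored by UGUI graphics, a commonly used workaround (not part of the original answer) is to give the Image its own material instance and set properties on that instance; note that this breaks batching with other Images that share the original material. The property name _TintColor below is hypothetical.

```csharp
using UnityEngine;
using UnityEngine.UI;

// Workaround sketch: UGUI ignores MaterialPropertyBlock, so clone the material
// and set properties on the instance instead.
public class ImageMaterialTint : MonoBehaviour
{
    [SerializeField] Image image;
    Material materialInstance;

    void Awake()
    {
        // A cloned material gives this Image its own property values,
        // at the cost of breaking the draw-call batch with other Images.
        materialInstance = new Material(image.material);
        image.material = materialInstance;
    }

    public void SetTint(Color color)
    {
        materialInstance.SetColor("_TintColor", color); // hypothetical shader property
    }

    void OnDestroy()
    {
        Destroy(materialInstance); // avoid leaking the cloned material
    }
}
```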

Thanks to Fan Shiqing@UWA Q&A community for the answer.

UGUI

Q: The UniWebView plugin displays web pages on top of the Unity interface by default, blocking everything Unity renders. How can we make the Android native interface the background and the Unity UI the foreground, with the Unity background itself transparent?

A: According to the official documentation, it is not possible to place UniWebView behind the Unity interface.

https://docs.uniwebview.com/guide/faq.html

For Unity background transparency, a check box in Player Settings should do the trick.

Thanks to Xuan@UWA Q&A community for the answer.

Playable

Q: We use Timeline for our effects. In a recent performance test we found that the Internal_CreatePlayable method had a lot of overhead even though there is only one track, so we were wondering what this overhead is related to.

Loading the same object on a PC, Internal_CreatePlayable took 18.55 ms. Then I created a Cube on the PC and added a Timeline effect to it, and Internal_CreatePlayable took about 4 ms.

A: Timeline's first CreatePlayable is time-consuming. A simple Timeline asset on my side takes 500+ ms the first time.

However, you can avoid the stall by moving the CreatePlayable step into the scene-loading phase. After loading the Timeline asset, call PlayableDirector.RebuildGraph(), which triggers CreatePlayable and acts as a prewarm. After that, calling Play() will not trigger CreatePlayable again, and calling RebuildGraph() a second time is not very time-consuming because of the cache.

The figure below shows the elapsed time for the first call to RebuildGraph.

The figure below shows the elapsed time for the second call to RebuildGraph.
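A minimal sketch of the prewarm approach described above, assuming the Timeline asset has already been loaded and assigned to a PlayableDirector:

```csharp
using UnityEngine;
using UnityEngine.Playables;

// Prewarm sketch: pay the expensive CreatePlayable cost during scene load
// by rebuilding the PlayableGraph, so the later Play() call does not stall.
public class TimelinePrewarm : MonoBehaviour
{
    [SerializeField] PlayableDirector director; // Timeline asset already assigned

    void Start()
    {
        // RebuildGraph() creates the playables up front (the prewarm step).
        director.RebuildGraph();
    }

    public void PlayEffect()
    {
        // The graph already exists, so Play() no longer triggers CreatePlayable.
        director.Play();
    }
}
```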

Thanks to Prin@UWA Q&A community for the answer.

URP Learn is a project for learning the Unity Universal Render Pipeline: https://lab.uwa4d.com/lab/6010aca80f247485d9c39878

That's all for today's sharing. Of course, life is finite while knowledge is boundless. The questions you see here are just the tip of the iceberg in a long development cycle; we have already prepared many more technical topics for you to explore and share on the UWA Q&A website. Everyone who loves to make progress is welcome to join us. Perhaps your method can solve someone else's urgent problem, and "stones from other hills" may serve to polish your own "jade".

Website: www.uwa4d.com Official Technology Blog: blog.uwa4d.com Official Q&A Community: Answer.uwa4d.com Official Technology QQ Group: 793972859 (The original group is full)