1. RGB

White light can be decomposed into light of different colors, while red, green, and blue (RGB) light cannot be decomposed further, so they are called the three primary colors of light; superimposing equal amounts of the three primary colors produces white light.

In computers, R, G, and B are also called "primary color components". Each of R, G, and B can be represented by an 8-bit binary number, so a single component ranges from 0 to 255. Any color can then be represented by a triplet of RGB values, for example (50, 150, 250), or equivalently by a hexadecimal number: #3296FA.
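As a quick sketch of this equivalence (in Python; the helper names are mine, not from the original), each pair of hex digits is just one 8-bit component:

```python
def rgb_to_hex(r, g, b):
    """Pack an (R, G, B) triplet into the #RRGGBB hexadecimal form."""
    return "#{:02X}{:02X}{:02X}".format(r, g, b)

def hex_to_rgb(s):
    """Unpack a #RRGGBB string back into an (R, G, B) triplet."""
    s = s.lstrip("#")
    return tuple(int(s[i:i + 2], 16) for i in (0, 2, 4))

print(rgb_to_hex(50, 150, 250))  # #3296FA
print(hex_to_rgb("#3296FA"))     # (50, 150, 250)
```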

This RGB representation should be familiar to anyone doing UI development, and the (50, 150, 250) format looks very much like a point in a spatial rectangular coordinate system. We can therefore abstract the RGB model as a cube in space, where any color corresponds to a point in that RGB space.

The origin (0, 0, 0) is naturally black, and the vertex diagonally opposite it, (255, 255, 255), is white. Any point whose three components are equal must be gray, such as (50, 50, 50) or (125, 125, 125); the larger the value, the closer it is to white. In this coordinate system, the gray points all fall on the diagonal between the origin and the white vertex.

  1. BGR color: similar to RGB, but the B and R components are swapped in storage.
  2. RGB and grayscale

As we know, a gray RGB value usually has the same value in all three channels, such as (125, 125, 125). But if we know an image is a grayscale image, there is no need to store three channels of data; storing just 125 is enough, and rendering it as (125, 125, 125) reproduces the gray. What if we want to convert an RGB image to a grayscale image? A commonly used formula is the "30-59-11" weighting: GRAY = R × 0.30 + G × 0.59 + B × 0.11.
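The weighting above can be sketched in a few lines of Python (the function name is mine). Note that for a gray input all three weights sum to 1, so equal channels pass through unchanged:

```python
def rgb_to_gray(r, g, b):
    """Weighted average ('30-59-11' formula): GRAY = R*0.30 + G*0.59 + B*0.11."""
    return round(r * 0.30 + g * 0.59 + b * 0.11)

print(rgb_to_gray(125, 125, 125))  # 125: equal channels stay unchanged
print(rgb_to_gray(50, 150, 250))   # 131
```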

However, the human eye is not equally sensitive to the three color components: generally speaking, it is most sensitive to green and least sensitive to blue, which is why green gets the largest weight in the formula. If color similarity is measured directly by Euclidean distance in RGB space, the result can deviate considerably from human perception, and for a given color it is hard to work out precise component values that match what we see.

2. YUV

The human eye is more sensitive to luminance than to color. In video coding systems, to reduce bandwidth, more luminance information is kept while less chrominance information is stored, so YUV is effectively a color space biased toward luminance. It is used mainly in video and image processing. Compared with RGB, YUV requires much less bandwidth (RGB needs three independent primary-color signals transmitted simultaneously). In YUV, Y stands for luminance (brightness), also called the gray-scale value, while U and V stand for chrominance (hue and saturation); YUV is also known as YCbCr. With only the Y data, we get a black-and-white image.

Using the YUV format removes a great deal of redundant information. As mentioned above, the human eye is more sensitive to brightness, so compression algorithms tend to convert RGB data to YUV, compressing Y less (to preserve as much detail as possible) and UV more (color differences are harder to perceive), balancing image quality against compression ratio.

GRAY = R × 0.30 + G × 0.59 + B × 0.11

This is essentially the Y component. When converting RGB to YUV, we also compute the two chrominance components U and V, which gives a system of three linear equations in three unknowns, so the original RGB values can easily be recovered from YUV.

```
// RGB to YUV
Y =  0.299R + 0.587G + 0.114B
U = -0.147R - 0.289G + 0.436B
V =  0.615R - 0.515G - 0.100B

// YUV to RGB
R = Y + 1.140V
G = Y - 0.395U - 0.581V
B = Y + 2.032U
```
  1. The large number of floating-point operations can cause performance problems; they can be avoided with certain techniques (such as integer arithmetic) to speed up the conversion.
  2. The YUV color space is compatible with both black-and-white TV (which needs only Y) and color TV (Y and UV together).
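One common way to avoid the floating-point cost (a sketch of the general technique, not a method from the original) is to scale the weights to integers and replace the division with a bit shift, shown here for the gray/Y formula:

```python
def gray_float(r, g, b):
    """Reference floating-point version of the 30-59-11 formula."""
    return round(r * 0.30 + g * 0.59 + b * 0.11)

def gray_fixed(r, g, b):
    # Scale the weights by 2^8: 0.30*256 ~ 77, 0.59*256 ~ 151, 0.11*256 ~ 28
    # (77 + 151 + 28 = 256), so the final division is a cheap right shift.
    return (r * 77 + g * 151 + b * 28) >> 8

print(gray_fixed(125, 125, 125))  # 125, same as the float version
```

The two versions agree to within one gray level, which is below what the eye can distinguish.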

YUV is only a color space; the concrete standards split into different storage formats according to how the data is arranged. Android cameras generally output the NV21 format, while I420 is the default input and output format of most codecs.

In both storage layouts the Y data is identical, running from Y0 to Y15, as shown in the figure below. In NV21 the chroma bytes are interleaved as V/U pairs, while in I420 the 4 U values are stored first, followed by the 4 V values.

For a 16-pixel RGB image, storage requires three channels, 16 × 3 = 48 units of space in total. With YUV, such as an image subsampled so that U and V are each reduced by a factor of four, storage needs only 16 + 4 + 4 = 24 units: we have saved half the space.
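The arithmetic generalizes to any resolution; a small sketch (helper names are mine) of the buffer-size calculation:

```python
def rgb_size(w, h):
    """RGB888 buffer size: three bytes per pixel."""
    return w * h * 3

def yuv420_size(w, h):
    """YUV 4:2:0 buffer size: full-resolution Y plane
    plus quarter-resolution U and V planes."""
    return w * h + w * h // 4 + w * h // 4

print(rgb_size(4, 4))       # 48, the 16-pixel example above
print(yuv420_size(4, 4))    # 24, half the space
```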

Converting NV21 to the more general I420 is also easy. Record the width and height of the image (w and h for short). Since every pixel needs a Y value, the Y plane occupies w × h units (Y0 through Y15 in the figure above) and can be copied unchanged; then the interleaved VU data of NV21 is rearranged into separate U and V planes.
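The rearrangement can be sketched in Python on raw byte buffers (a minimal version assuming even width and height and tightly packed planes; the function name is mine):

```python
def nv21_to_i420(nv21, w, h):
    """Rearrange an NV21 buffer (Y plane + interleaved V/U pairs)
    into I420 layout (Y plane + U plane + V plane)."""
    y_size = w * h
    y = nv21[:y_size]      # Y plane is identical in both formats
    vu = nv21[y_size:]     # interleaved: V0 U0 V1 U1 ...
    v = vu[0::2]           # every even index is a V byte
    u = vu[1::2]           # every odd index is a U byte
    return y + u + v       # I420 order: Y, then U, then V

# Toy 2x2 image: 4 Y bytes, then one V/U pair.
nv21 = bytes([1, 2, 3, 4, 9, 8])          # Y = 1..4, V = 9, U = 8
assert nv21_to_i420(nv21, 2, 2) == bytes([1, 2, 3, 4, 8, 9])
```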
