1. Digital images

The pictures that a computer can display are digital images, which can be produced by drawing software, digital cameras, scanners, and so on.

The difference between a digital camera and a traditional camera:

Traditional cameras use a light-sensitive silver halide material called film, which is developed to produce a picture.

A digital camera converts the optical signal into a digital signal, which is stored in a memory unit.

2. Pixel (Picture Element)

A pixel is the smallest unit of representation in a digital image, occupying one or more bytes of computer memory. According to the number of bytes per pixel, that is, the number of colors each pixel can express, images can be classified as color, black-and-white, grayscale, and so on.
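
As a quick illustration, here is a minimal Python sketch of how the number of bits per pixel determines both the number of representable colors and the memory footprint of an image (the resolution is an arbitrary example):

```python
# How color depth determines the number of representable colors
# and the uncompressed size of an image.
width, height = 1920, 1080  # example resolution, chosen arbitrarily

for name, bits_per_pixel in [("black-and-white", 1),
                             ("grayscale", 8),
                             ("true-color RGB", 24)]:
    colors = 2 ** bits_per_pixel                  # each pixel picks one of these
    size = width * height * bits_per_pixel // 8   # bytes, uncompressed
    print(f"{name}: {colors} colors, {size / 2**20:.2f} MiB")
```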

3. Color space

A color space is an organization of colors. Together with measurements against physical devices, a color space yields fixed analog and digital representations of color. A color space can be defined simply by picking a set of colors as samples: the Pantone system, for example, takes a specific set of colors as samples and gives each one a name and a code. A color space can also be based on a rigorous mathematical definition, as with Adobe RGB and sRGB.

Common color spaces: grayscale (black and white), RGB (red, green, blue), CMYK (cyan, magenta, yellow, black), HSV (hue, saturation, value).
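
The sketch below converts one sample RGB color into the other spaces listed above. The grayscale weights are the common ITU-R BT.601 luma coefficients, and the CMYK formula is the naive device-independent one, not a profiled print conversion:

```python
import colorsys

r, g, b = 0.2, 0.6, 0.4  # RGB components in [0, 1]

# HSV via the standard library.
h, s, v = colorsys.rgb_to_hsv(r, g, b)

# Grayscale via the common ITU-R BT.601 luma weights.
gray = 0.299 * r + 0.587 * g + 0.114 * b

# Naive CMYK conversion (would need a special case for pure black, k == 1).
k = 1 - max(r, g, b)
c, m, y = ((1 - x - k) / (1 - k) for x in (r, g, b))

print(f"HSV  = ({h:.3f}, {s:.3f}, {v:.3f})")
print(f"gray = {gray:.3f}")
print(f"CMYK = ({c:.3f}, {m:.3f}, {y:.3f}, {k:.3f})")
```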

4. Monitors

A cathode ray tube (CRT) display consists mainly of five parts: the electron gun, the deflection coils, the shadow mask, the phosphor layer, and the glass envelope. A CRT has advantages that LCD displays find hard to surpass: a wide viewing angle, no dead pixels, high color fidelity, uniform chroma, support for multiple resolutions, and a very short response time. It presents the required color at each pixel by using three electron beams to excite a triad of red, green, and blue phosphors at the correct intensities, mixing their light.

A liquid crystal display (LCD) contains a large number of liquid crystal particles, regularly arranged in a fixed pattern, whose faces show different colors: red, green, and blue. These three primary colors can reproduce any other color. When the display receives display data, it controls each liquid crystal particle to rotate toward the appropriate colored face, thereby combining different colors into an image. For the same reason, the drawbacks of LCD displays are colors that are not bright enough and a viewing angle that is not wide.

A light-emitting diode (LED) display is made from compounds of gallium (Ga), arsenic (As), phosphorus (P), nitrogen (N), and so on: gallium arsenide diodes emit red light, gallium phosphide diodes emit green light, silicon carbide diodes emit yellow light, and gallium nitride diodes emit blue light.

5. Bitmap

The binary sequence in memory that an image is converted into is called a bitmap; it serves as the interface between hardware and software. For example, the bitmap of a black-and-white image needs only the bit values 0 and 1, while a color image needs a larger bitmap for storage.
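
A minimal sketch of the idea, assuming a tiny black-and-white image whose width is a multiple of 8: each group of 8 pixels is packed into one byte, and any pixel can be recovered from the packed bits:

```python
image = [
    [0, 1, 1, 0, 0, 1, 0, 1],
    [1, 0, 0, 1, 1, 0, 1, 0],
]  # an 8x2 black-and-white image, one bit per pixel

def pack_row(row):
    """Pack 8 pixels (0 or 1) into a single byte, MSB first."""
    byte = 0
    for bit in row:
        byte = (byte << 1) | bit
    return byte

bitmap = bytes(pack_row(row) for row in image)
print(bitmap.hex())  # '659a' -> 0b01100101, 0b10011010

def get_pixel(bitmap, width, x, y):
    """Recover pixel (x, y) from the packed bitmap."""
    index = y * width + x
    return (bitmap[index // 8] >> (7 - index % 8)) & 1

print(get_pixel(bitmap, 8, 1, 0))  # 1, the second pixel of the first row
```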

6. Image classification

Raster images such as JPG/JPEG, PNG, and GIF are stored directly in the form of pixels rather than drawing instructions; they are suitable for complex static images.

Vector images such as SVG, EPS, and PDF consist of instructions for drawing lines, curves, and shapes instead of pixels. Every part of a vector graphic is editable, and the graphic can be resized without loss of quality. Vector graphics are well suited to charts and diagrams; they generally cannot be used to store photographic images.

7. Image compression algorithms

Lossless compression

  • Dynamic programming compression

Denote the pixels by Pn and split the pixel sequence {P1, P2, … Pn} into segments by choosing breakpoints. The segmentation looks for breakpoints such that the maximum gray value within each segment is small, so that the pixels of a segment (which would otherwise need 8 bits each) can be represented with fewer bits (say 7), thereby reducing storage space.
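
A hedged Python sketch of this idea: the 16-bit segment header (bit width plus segment length) is an illustrative assumption rather than a real file format, and the dynamic program simply finds the breakpoints that minimize the total number of bits:

```python
import math

HEADER_BITS = 16  # assumed per-segment header: bit width + length

def bits_needed(max_value):
    """Bits required to store gray values in [0, max_value]."""
    return max(1, max_value.bit_length())

def best_segmentation_cost(pixels):
    """cost[i] = minimal bits to encode the prefix pixels[:i]."""
    n = len(pixels)
    cost = [0] + [math.inf] * n
    for i in range(1, n + 1):
        seg_max = 0
        for j in range(i - 1, -1, -1):  # try every possible last segment pixels[j:i]
            seg_max = max(seg_max, pixels[j])
            candidate = cost[j] + HEADER_BITS + (i - j) * bits_needed(seg_max)
            cost[i] = min(cost[i], candidate)
    return cost[n]

pixels = [2, 3, 1, 0, 3, 2, 1, 1] * 3 + [250, 255, 240, 251] + [1, 2, 0, 3] * 3
print(best_segmentation_cost(pixels), "bits vs", 8 * len(pixels), "bits raw")
```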

  • Run-length encoding

A run of consecutive symbols with the same value is replaced by a single symbol value plus the length of the run (such a run of identical symbols is called a stroke, hence the alternative name stroke coding), making the encoded data shorter than the original.
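
A minimal run-length encoder and decoder over a byte sequence:

```python
def rle_encode(data: bytes) -> list:
    """Collapse each run of identical bytes into a (value, length) pair."""
    runs = []
    for b in data:
        if runs and runs[-1][0] == b:
            runs[-1] = (b, runs[-1][1] + 1)
        else:
            runs.append((b, 1))
    return runs

def rle_decode(runs) -> bytes:
    return b"".join(bytes([value]) * length for value, length in runs)

data = b"aaaaabbbcccccccccd"
encoded = rle_encode(data)
print(encoded)  # [(97, 5), (98, 3), (99, 9), (100, 1)]
assert rle_decode(encoded) == data
```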

  • LZ77

Compression is achieved by encoding long strings (also called phrases) as short tokens: phrases already present in a sliding-window dictionary are replaced by small tokens that refer back into the dictionary.
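
A simplified LZ77 encoder in the same spirit: each token is (offset back into the window, match length, next character). The window and lookahead sizes are illustrative assumptions:

```python
WINDOW = 255    # how far back a token may reference
LOOKAHEAD = 15  # longest match a token may cover

def lz77_encode(data: str) -> list:
    tokens, i = [], 0
    while i < len(data):
        best_off, best_len = 0, 0
        for j in range(max(0, i - WINDOW), i):  # scan the dictionary window
            length = 0
            while (length < LOOKAHEAD and i + length < len(data) - 1
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_off, best_len = i - j, length
        tokens.append((best_off, best_len, data[i + best_len]))
        i += best_len + 1
    return tokens

def lz77_decode(tokens) -> str:
    out = []
    for off, length, nxt in tokens:
        for _ in range(length):       # copy from earlier output,
            out.append(out[-off])     # one char at a time (overlap-safe)
        out.append(nxt)
    return "".join(out)

text = "abcabcabcabcx"
tokens = lz77_encode(text)
print(tokens)  # [(0, 0, 'a'), (0, 0, 'b'), (0, 0, 'c'), (3, 9, 'x')]
assert lz77_decode(tokens) == text
```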

  • Huffman coding

Construct a prefix-free code with the shortest average code-word length according to the probability of each character's occurrence.
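
A compact sketch of the standard heap-based construction, which returns a prefix-free code table for the symbols of the input text:

```python
import heapq
from collections import Counter

def huffman_code(text: str) -> dict:
    # Each heap entry: (subtree frequency, tiebreaker, {symbol: code so far}).
    heap = [(freq, i, {sym: ""})
            for i, (sym, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)  # merge the two least
        f2, _, c2 = heapq.heappop(heap)  # frequent subtrees
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

text = "this is an example of a huffman tree"
code = huffman_code(text)
encoded = "".join(code[ch] for ch in text)
print(f"{len(encoded)} bits encoded vs {8 * len(text)} bits raw")
```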

Lossy compression

  • JPEG — discrete cosine transform + quantization + RLE + Huffman coding
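
The sketch below runs the first two of those stages on a single 8×8 block: a 2-D discrete cosine transform followed by quantization. The flat quantization step of 16 is an illustrative placeholder, not the standard JPEG quantization table:

```python
import math

N = 8

def dct_2d(block):
    """Orthonormal type-II 2-D DCT of an NxN block."""
    def c(k):
        return math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                    for x in range(N) for y in range(N))
            out[u][v] = c(u) * c(v) * s
    return out

# A smooth gradient block, level-shifted from [0, 255] to [-128, 127]
# as JPEG does before the transform.
block = [[x * 8 + y * 4 - 128 for y in range(N)] for x in range(N)]

coeffs = dct_2d(block)
quantized = [[round(coeffs[u][v] / 16) for v in range(N)] for u in range(N)]

# Smooth content concentrates energy in a few low-frequency coefficients;
# the many zeros are what the later RLE and Huffman stages exploit.
print(sum(row.count(0) for row in quantized), "of 64 coefficients are zero")
```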

8. Graphics card and GPU

Functions of the graphics card: it converts the computer's digital signals into analog signals for the display, and it also provides image-processing capability, relieving the CPU and improving overall speed.

Hardware acceleration: a technique that offloads a particular type of heavy computation to specialized hardware, such as the graphics processing unit (GPU), in order to lighten the load on the central processing unit.

The CPU is mainly responsible for multitask scheduling, miscellaneous computation, and directing the work of the other hardware; the GPU is mainly used for large volumes of repetitive computation (such as mining) and is itself part of the graphics card.

As can be seen, the GPU has a regular overall structure with a large number of small computing units, which allows it to perform a large number of specific operations in parallel.

9. How a computer displays an image

Generally, the CPU computes the display content and submits it to the GPU; the CPU may also command the GPU to process the display content directly (hardware acceleration). The graphics card then performs digital-to-analog conversion of the display content into an image signal, which travels along the signal cable to the display. The display receives the signal and, through the video amplifier circuit, drives the picture tube's electron guns to project the image onto the screen.

To be specific:

  1. The CPU transfers data to the GPU
  2. The GPU performs the necessary rasterization
  3. The GPU writes the rasterized data to video memory
  4. The graphics card performs digital-to-analog conversion on the data and sends the result to the display
  5. The display renders it according to its own operating principle; for example, a cathode-ray tube display aims three electron beams at the same spot to produce each color

10. Rasterization

Rasterization is the process of converting the geometric data of vector graphics into pixels after a series of transformations and presenting them on display devices.

A common representation of digital 3D models is polygons. Before rasterization, polygons are decomposed into triangles, so the typical problem to be solved in 3D rasterization is the rasterization of triangles. A property typically required of a triangle rasterization algorithm is that rasterizing two adjacent triangles (i.e. triangles sharing an edge) leaves no holes between them and does not rasterize the shared pixels twice; a sketch follows.
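
Below is a naive Python sketch of triangle rasterization using edge functions: a pixel center is covered when it lies on the same side of all three edges. Note that the >= 0 test shades pixels lying exactly on a shared edge in both triangles; production rasterizers replace it with a fill convention (such as the top-left rule) to obtain the no-holes, no-double-shading property described above:

```python
def edge(ax, ay, bx, by, px, py):
    """Signed area of triangle (a, b, p); the sign tells which side p is on."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize(tri, width, height):
    (x0, y0), (x1, y1), (x2, y2) = tri
    area = edge(x0, y0, x1, y1, x2, y2)  # assumed non-degenerate
    covered = []
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5  # sample at the pixel center
            w0 = edge(x1, y1, x2, y2, px, py)
            w1 = edge(x2, y2, x0, y0, px, py)
            w2 = edge(x0, y0, x1, y1, px, py)
            # same sign as the triangle's area for all edges -> inside
            if all(w * area >= 0 for w in (w0, w1, w2)):
                covered.append((x, y))
    return covered

print(len(rasterize([(0, 0), (8, 0), (0, 8)], 8, 8)), "pixels covered")  # 36
```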

11. OpenGL

OpenGL is a set of interface specifications. It is cross-platform and cross-language: it directly specifies the interface that hardware drivers must satisfy, defining a software-layer API for operating on images and graphics. The concrete implementation must be developed by each GPU vendor as a driver that meets the specification; once a vendor has implemented such a driver and it is installed on the machine, the corresponding graphics card driver can be called through the OpenGL interface.

The interface consists of nearly 350 different function calls, used to draw everything from simple graphics primitives to complex 3D scenes. Another programming interface system, Direct3D, is for Microsoft Windows only. OpenGL is commonly used in CAD, virtual reality, scientific visualization programs, and video game development.

12. OpenGL ES

OpenGL ES (OpenGL for Embedded Systems) is a subset of the OpenGL 3D graphics API, designed for embedded devices such as mobile phones, PDAs, and game consoles.

OpenGL ES is tailored down from OpenGL, removing glBegin/glEnd, quadrilaterals (GL_QUADS), polygons (GL_POLYGON), and other complex primitives that are not strictly necessary. After years of development there are two main versions: OpenGL ES 1.x for fixed-pipeline hardware and OpenGL ES 2.x for programmable-pipeline hardware. OpenGL ES 1.0 is based on the OpenGL 1.3 specification and OpenGL ES 1.1 on the OpenGL 1.5 specification; both support the Common and Common Lite profiles. The Common Lite profile supports only fixed-point numbers, while the Common profile supports both fixed-point and floating-point numbers. OpenGL ES 2.0 is defined with reference to the OpenGL 2.0 specification; its Common profile was released in August 2005 and introduced support for the programmable pipeline.

13. WebGL

WebGL is a cross-platform, royalty-free Web standard for a low-level 3D graphics API based on OpenGL ES, exposed to ECMAScript through the HTML5 Canvas element.

WebGL 1.0 exposes the OpenGL ES 2.0 feature set; WebGL 2.0 exposes the OpenGL ES 3.0 API

14. GLSL (OpenGL Shading Language)

GLSL (OpenGL Shading Language) is the language used for shader programming in OpenGL: short custom programs, written by developers, that execute on the graphics card's GPU (graphics processing unit) instead of a fixed part of the rendering pipeline, making the different stages of the pipeline programmable. Examples include the view transformation and the projection transformation. GLSL shader code is divided into two parts, the vertex shader and the fragment shader, sometimes supplemented by a geometry shader. The vertex shader runs once for each vertex. A shader can read the current OpenGL state, which is passed in through GLSL built-in variables. GLSL uses C as the basis of its high-level shading language, avoiding the complexity of using assembly language or hardware-specific languages.
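
As a small illustration, here is a minimal vertex/fragment shader pair of the kind described, held in Python strings the way an application keeps them before handing them to the driver for compilation (via glShaderSource and glCompileShader):

```python
# Minimal GLSL ES sources; compiling them requires an OpenGL context.
VERTEX_SHADER = """
attribute vec4 a_position;   // per-vertex input supplied by the application
void main() {
    // runs once per vertex; writes the built-in output gl_Position
    gl_Position = a_position;
}
"""

FRAGMENT_SHADER = """
precision mediump float;
void main() {
    // runs once per covered pixel; outputs solid red via gl_FragColor
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
"""

print(VERTEX_SHADER.strip(), FRAGMENT_SHADER.strip(), sep="\n\n")
```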