Today, the mobile phone camera is one of those tools we simply can’t live without.

 

Besides the processor, the camera is one of the key selling points every time a new phone is released. But do you really know how it works?

 

At this point someone will ask: mate, I just use the camera, do I really need to know how it works? I don’t know how to build a car, and that doesn’t stop me from driving a Ferrari one-handed, does it?

 

Well, you’re right. There’s really no need to know. But I’m going to tell you anyway, because that’s all I’ve been doing this week. ☺

 

First of all, let’s meet the camera types we’re all familiar with. There are two main kinds of image sensor:

1. CCD Sensor

2. CMOS Sensor

 

CMOS sensor cameras are by far the most widely used in our daily lives. They’re also what makes it possible for today’s phones to cram in so many highly integrated cameras.

However, let’s set the CMOS sensor aside for a moment.

Let’s talk about the CCD sensor first. Why, you ask? After all, the first camera in the world used a CCD sensor.

 

CCD Sensor

Full name: Charge-Coupled Device. Well, that concludes our introduction to the CCD. After all, people who work on mobile phone Camera HAL and ISP algorithms don’t need to know that much about CCDs. Even if I learned it all, I’d never get to use it in this lifetime~

Just kidding. We’ll get to the differences between it and CMOS in a moment.

 

CMOS Sensor

Full name: Complementary Metal-Oxide-Semiconductor

 

After introducing their names, let’s talk about their differences:

1. Differences in internal structure:

① In a CCD, the imaging points are arranged in an X-Y matrix, and each imaging point consists of a photodiode and an adjacent charge-storage region controlled by it. This complex structure drives up power consumption and cost. In simple terms, the charge is shifted out line by line under the control of a synchronous clock.

② CMOS imaging happens all at once: every point on the sensor receives photons and converts them into an electrical signal at the same time, and each photosite can be read out directly. As a result, readout is much faster.
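To make this concrete, here is a tiny Python/NumPy sketch contrasting the two readout styles on a made-up 4×4 “sensor”. All names and numbers are purely illustrative; real readout happens in analog hardware, not in a loop.

```python
import numpy as np

# A made-up 4x4 "sensor" holding the accumulated charge at each photosite.
charge = np.arange(16, dtype=float).reshape(4, 4)

def ccd_style_readout(sensor):
    """CCD-like readout: charge packets are shifted out one row at a time
    under a common synchronous clock, and every value passes through the
    same single output stage."""
    out = []
    for row in sensor:            # shift one line into the output register
        for value in row:         # clock each charge packet out in turn
            out.append(value)
    return np.array(out).reshape(sensor.shape)

def cmos_style_readout(sensor, y, x):
    """CMOS-like readout: each photosite has its own amplifier and can be
    addressed directly, without shifting the whole frame out serially."""
    return sensor[y, x]

print(ccd_style_readout(charge))         # whole frame, read line by line
print(cmos_style_readout(charge, 2, 3))  # one pixel, addressed directly: 11.0
```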

 

Of course, that is not the main reason CCDs became obsolete. CCDs were pushed aside because CMOS can be integrated far more tightly. We all know that inside a phone every square millimetre is prime real estate: wherever space can be freed up, another piece of “black technology”, another selling point, can go in, and whoever can’t shrink things falls behind. So while phone makers were racking their brains to squeeze out space, CMOS naturally became the mainstream.

 

2. Differences in external structure (how the sensor sits in the product):

① The electrical signal output by a CCD still needs an external chain of components: an address decoder, an analog-to-digital converter and an image signal processor, plus three different supply voltages and a synchronous clock-control circuit. Its level of integration is very low.

② A CMOS sensor can integrate everything onto a single chip: the photosensitive elements, image signal amplifiers, readout circuitry, analog-to-digital converter, image signal processor and controller can all live together, and DRAM can even be stacked on as a bonus.

To put it simply, CMOS offers a much higher degree of integration, can absorb more components, and lends itself to the downstream image processing, including a whole series of algorithms such as denoising and brightening.

 

So does the CCD have no advantages at all?

Not quite. As we all know, the thing that hurts phone photography the most is noise. Although the ISP does a great deal of noise-reduction work, noise is still unavoidable on low-end, low-cost CMOS sensor cameras, especially when the light is dim. CCD sensors perform well in low light and suffer far less from the kind of noise that plagues CMOS sensors.

 

Example of a mobile phone Camera

 

 

That’s a lot of talk, and I still haven’t explained how the camera actually works.

Let’s take a look at how a camera converts light from the real world into digital information.

 

There are two components that play the biggest role in this process:

1. The Color Filter Array (CFA)

2. The photosites

 

In order for the sensor to capture color images, a color filter array (CFA) is required.

The most common is the Bayer filter, which consists of alternating filters in the three primary colors: R, G and B.

Usually half of the filters are green, a quarter are red and a quarter are blue, because the human eye is more sensitive to green light.

This is what’s commonly known as an RGGB camera.
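As a rough illustration (not any camera SDK’s actual API), here is how one could lay out an RGGB Bayer mask in Python/NumPy, matching the half-green, quarter-red, quarter-blue ratio just described:

```python
import numpy as np

def bayer_rggb_mask(height, width):
    """Color labels for an RGGB Bayer CFA: even rows alternate R, G;
    odd rows alternate G, B (height and width assumed even)."""
    mask = np.empty((height, width), dtype='<U1')
    mask[0::2, 0::2] = 'R'
    mask[0::2, 1::2] = 'G'
    mask[1::2, 0::2] = 'G'
    mask[1::2, 1::2] = 'B'
    return mask

cfa = bayer_rggb_mask(4, 4)
print(cfa)
print((cfa == 'G').mean())   # 0.5 -- half of all filters really are green
```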

 

Other filter layouts exist too, such as the RYYB camera on the Huawei P30 series, which replaces the green filters with yellow ones in order to let more light through in dark scenes and improve the photos. This is the night mode that Huawei advertises.

Admittedly, there are also plenty of night-scene AI algorithms at work (more on that later), but the initial brightness boost from RYYB is clearly huge.

However, the brightness gain comes at a price; the world runs on equivalent exchange, after all. If you remember what I said earlier, CMOS is affected by noise, so letting in more light also brings in more noise, which makes the later image processing harder.
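Continuing the toy mask above, an RYYB layout is literally the same geometry with yellow in place of green; a yellow filter passes both red and green light, so each photosite collects more photons, at the cost of a trickier color reconstruction later. Again, this is only an illustration:

```python
import numpy as np

def ryyb_mask(height, width):
    """RYYB CFA: same 2x2 geometry as RGGB, with yellow replacing green."""
    base = np.array([['R', 'Y'],
                     ['Y', 'B']])
    return np.tile(base, (height // 2, width // 2))

print(ryyb_mask(4, 4))
```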

 

There is also the QCFA (Quad Bayer filter). Today’s 64MP cameras are built on it: each color filter of the original Bayer pattern now covers a 2×2 group of photosites, so one “big” pixel is effectively split into four.
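Here is a toy model of the Quad Bayer idea under the same assumptions: each color covers a 2×2 block of photosites, and in low light the four values in a block can be binned (averaged) into one “big” pixel, which is how a 64MP sensor typically outputs 16MP photos. This is only a sketch of the layout, not any vendor’s implementation:

```python
import numpy as np

def quad_bayer_mask(height, width):
    """Quad Bayer CFA: an RGGB pattern in which every color filter covers
    a 2x2 group of photosites (height and width assumed multiples of 4)."""
    base = np.array([['R', 'G'],
                     ['G', 'B']])
    block = base.repeat(2, axis=0).repeat(2, axis=1)
    return np.tile(block, (height // 4, width // 4))

def bin_2x2(raw):
    """4-in-1 pixel binning: average each 2x2 block that shares one filter,
    trading resolution for light gathered per output pixel."""
    h, w = raw.shape
    return raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

print(quad_bayer_mask(8, 8)[:4, :4])
raw = np.random.poisson(50, size=(8, 8)).astype(float)   # toy raw frame
print(bin_2x2(raw).shape)                                 # (4, 4)
```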

 

 

Now let’s talk about how the photosites do their job:

Each sensor has millions or even tens of millions of photosites. (You can loosely equate a photosite with a pixel; that’s not strictly accurate, but let’s not dwell on it.) When the shutter opens and the sensor is exposed, these photosites capture as much light as possible. The photons captured by each photosite are then converted into an electrical signal whose strength depends on how many photons were caught.

The easiest way to picture this is to think of each photosite/pixel as a bucket collecting rainwater. The rain represents the light that enters the camera and is caught by the photosite. If the bucket fills right up to the top, the camera’s processor decides it is a white pixel; if the bucket is empty, a black pixel; anything in between becomes some shade of gray.
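The bucket analogy maps straight onto a few lines of code. Below is a minimal sketch with an assumed full-well capacity and an assumed 8-bit output; all numbers are invented just to show how a photon count at one photosite becomes a digital value.

```python
FULL_WELL = 10_000   # assumed bucket capacity in electrons (illustrative)
BITS = 8             # assumed output bit depth

def photosite_to_code(photons):
    """Turn the photons caught by one photosite into a digital value:
    a full bucket saturates to white, an empty one reads black, and
    anything in between becomes a shade of gray."""
    electrons = min(photons, FULL_WELL)   # the bucket can overflow but not over-fill
    return round(electrons / FULL_WELL * (2 ** BITS - 1))

print(photosite_to_code(0))        # 0   -> black
print(photosite_to_code(5_000))    # 128 -> mid gray
print(photosite_to_code(50_000))   # 255 -> saturated white
```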

 

So at this point, someone might ask: black, white, gray? Isn’t the photo supposed to be in color? Congratulations, you’ve spotted the catch.

That’s right. The photosites alone only give you black, white and gray information.

Only by taking these black-white-gray pixel values and running an interpolation algorithm over them, namely demosaicing (removing the “mosaic”), can we recover the colors of the image.

 

So at this point, you might say: hey, we finally have the picture!

Well, not yet; that is only the input to the algorithms.

 

Our camera still has a bit more processing to do.

The initial image captured by the sensor is handed to the image signal processor (ISP). The ISP’s first step is demosaicing: based on the RGGB information above, it runs the pixel-interpolation algorithm to obtain a full color value for every pixel.

Demosaic algorithm processing
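To get a feel for what demosaicing does, here is a deliberately naive bilinear sketch in Python/NumPy: each missing color at a pixel is filled in from the measured samples in its 3×3 neighbourhood, using the RGGB layout from earlier. Real ISPs use far more sophisticated, edge-aware algorithms; this only shows the idea.

```python
import numpy as np

def bilinear_demosaic(raw):
    """Very naive demosaic of an RGGB Bayer frame.
    raw: 2D float array of sensor values; returns an (H, W, 3) RGB image."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    known = np.zeros((h, w, 3))
    # Scatter the measured samples into their color planes (RGGB layout).
    for dy, dx, c in [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 2)]:
        rgb[dy::2, dx::2, c] = raw[dy::2, dx::2]
        known[dy::2, dx::2, c] = 1.0
    # Fill every pixel of every plane with the average of the measured
    # samples in its 3x3 neighbourhood (a crude bilinear interpolation).
    vp = np.pad(rgb, ((1, 1), (1, 1), (0, 0)))
    kp = np.pad(known, ((1, 1), (1, 1), (0, 0)))
    num = np.zeros_like(rgb)
    den = np.zeros_like(rgb)
    for dy in range(3):
        for dx in range(3):
            num += vp[dy:dy + h, dx:dx + w]
            den += kp[dy:dy + h, dx:dx + w]
    return num / np.maximum(den, 1.0)

raw = np.random.rand(6, 6)            # toy Bayer raw frame
print(bilinear_demosaic(raw).shape)   # (6, 6, 3)
```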

Yes, at this point we can say we’ve got an image. But it’s not yet the picture we finally see.

As I said, the ISP does a lot of algorithmic work before the image reaches our eyes. This includes:

Denoising, lens shading correction, defect pixel correction, and so on.
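To show how these stages chain together, here is a minimal, purely illustrative pipeline skeleton in Python/NumPy. The individual steps are crude stand-ins (a neighbourhood check for defect pixels, a radial gain map for lens shading, a box blur as the “denoiser”); real ISP blocks are dedicated hardware running much more sophisticated algorithms.

```python
import numpy as np

def _box3(img):
    """Mean over each pixel's 3x3 neighbourhood (edges padded by replication)."""
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    return sum(p[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3)) / 9.0

def defect_pixel_correction(raw, threshold=200.0):
    """Toy dead/hot pixel fix: replace pixels that differ wildly from
    their local neighbourhood mean."""
    local = _box3(raw)
    return np.where(np.abs(raw - local) > threshold, local, raw)

def lens_shading_correction(raw, strength=0.5):
    """Boost the corners with a radial gain map to undo lens vignetting."""
    h, w = raw.shape
    y, x = np.mgrid[0:h, 0:w]
    r = np.hypot(y - h / 2, x - w / 2) / np.hypot(h / 2, w / 2)
    return raw * (1.0 + strength * r ** 2)

def denoise(raw):
    """Toy denoiser: a simple 3x3 box blur."""
    return _box3(raw)

def isp_preprocess(raw):
    """Chain the calibration-style stages in a typical order."""
    return denoise(lens_shading_correction(defect_pixel_correction(raw)))

raw = np.random.poisson(100, size=(8, 8)).astype(float)   # toy raw frame
print(isp_preprocess(raw).shape)                           # (8, 8)
```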

 

Of course, all of this is preliminary calibration work. Once it’s done, the ISP outputs the “raw” data. The ISP is also responsible for the 3A algorithms (auto white balance, auto focus, auto exposure) and their parameters, and HDR, night mode, EIS, image compression and other algorithms are all completed inside the ISP as well. We’ll talk about those later.

 

 

Welcome to follow my personal WeChat official account [Image Processing These Things]; there’s plenty more good stuff over there~