

0 Preface

If you have used a camera, you probably know the bokeh effect: it keeps the subject in sharp focus while blurring the background, like the background-blur effect achieved at the end of this article.



What is the principle behind camera blur effects? What does it have to do with computer vision? This article takes you through these questions.

1 Pinhole imaging

As we learned in elementary school, there can be no image without light: to form an image, a scene needs one or more light sources, direct or indirect.

As shown in the figure, there are three main types of illumination:

  • Scattering
  • Direct light
  • Diffuse reflection

Given a light source, light travels from the object to the detection plane.

Since countless scattered rays from any given point A on the object reach the detection plane, A's image A' is spread across the entire imaging plane, and the same holds for every other point. The detection plane therefore receives an aliased superposition of countless object images, producing a blurred image or no recognizable image at all.

You can't see your face on a sheet of white paper, not because no light from your face reaches the paper, but because light from different parts of your face overlaps on it.

So how do you image it on white paper?

In fact, it is very simple: use the pinhole imaging we tried in primary school.

Essentially, the hole acts as a filter: it passes only a narrow pencil of rays from each object point, so each point maps to (almost) a single image point and a clear image can form.

2 Optical imaging

The drawback of pinhole imaging is that it admits very little light, so the image is dim. To gather more light while still avoiding the overlap caused by scattered image points on the detection plane, a converging lens is introduced. Lens imaging and pinhole imaging share the same essence, preventing the aliasing caused by scattered rays: the former by concentrating light, the latter by filtering it.

Modern cameras use lens imaging, but both the lens model and the pinhole model are basic models and assumptions in computer vision research, underlying topics such as perspective geometry, the camera intrinsic matrix, and distortion correction. This section is therefore helpful for building a machine-vision mindset.

3 Blur effect

With the basics covered, we can finally get to the principle behind the image blur effect!

An ideal lens focuses the rays from an object point to a single point, the focal point, where no aliasing occurs and the image is sharpest. In front of and behind the focal point, the rays begin to spread, forming overlapping regions of varying size known as circles of confusion. To the human eye, blur within a certain range is imperceptible; a circle of confusion small enough to be imperceptible is called the permissible circle of confusion.

Because a permissible circle of confusion exists, the image of the subject plane remains acceptably sharp over a range of image-plane positions around exact focus, a range known as the depth of focus. Adjusting the distance between the lens and the imaging plane so that the plane falls within the depth of focus, rendering the subject sharply, is called focusing.

Similarly, on the subject side, the range of depths in front of and behind the focal plane over which objects are imaged acceptably sharply is called the depth of field. The image blur effect is governed by this depth of field!

  • The smaller the depth of field, the narrower the zone of sharpness in front of and behind the subject, and the stronger the blur effect
  • The larger the depth of field, the more of the scene in front of and behind the subject is sharp, and no blur effect appears

How do you adjust the depth of field? Remember one sentence: the larger the aperture, the smaller the depth of field. That is why "large aperture" mode on a phone camera means background blur!

So the next time you get the chance to photograph a girl, be sure to ask:

“Do you prefer small or large depth of field?”

4 Code Practice

In image processing, the camera's background-blur effect can be achieved with a guided filter; the source code is as follows.

// Guided filter
Mat guidedFilter(Mat& srcMat, Mat& guidedMat, int radius, double eps)
{
    srcMat.convertTo(srcMat, CV_64FC1);
    guidedMat.convertTo(guidedMat, CV_64FC1);
    // Compute the means
    Mat mean_p, mean_I, mean_Ip, mean_II;
    boxFilter(srcMat, mean_p, CV_64FC1, Size(radius, radius));                      // mean_p: mean of the image to be filtered
    boxFilter(guidedMat, mean_I, CV_64FC1, Size(radius, radius));                   // mean_I: mean of the guidance image
    boxFilter(srcMat.mul(guidedMat), mean_Ip, CV_64FC1, Size(radius, radius));      // mean_Ip: cross-correlation mean
    boxFilter(guidedMat.mul(guidedMat), mean_II, CV_64FC1, Size(radius, radius));   // mean_II: autocorrelation mean of the guidance image
    // Compute the covariance of (I, p) and the variance of I
    Mat cov_Ip = mean_Ip - mean_I.mul(mean_p);
    Mat var_I = mean_II - mean_I.mul(mean_I);
    // Compute the coefficients a and b
    Mat a = cov_Ip / (var_I + eps);
    Mat b = mean_p - a.mul(mean_I);
    // Compute the means of a and b
    Mat mean_a, mean_b;
    boxFilter(a, mean_a, CV_64FC1, Size(radius, radius));
    boxFilter(b, mean_b, CV_64FC1, Size(radius, radius));
    // Generate the output matrix
    Mat dstImage = mean_a.mul(srcMat) + mean_b;
    return dstImage;
}

The principles of the guided filter will be explained in a separate article.

Call the filter in the main function; the effect is shown at the beginning of the article.

int main()
{
    Mat resultMat;
    Mat vSrcImage[3], vResultImage[3];
    Mat srcImage = imread("1.jpg");
    imshow("Source image", srcImage);
    // Split the source image into channels and guided-filter each channel
    split(srcImage, vSrcImage);
    for (int i = 0; i < 3; i++)
    {
        Mat tempImage;
        vSrcImage[i].convertTo(tempImage, CV_64FC1, 1.0 / 255.0);
        Mat cloneImage = tempImage.clone();
        Mat resultImage = guidedFilter(tempImage, cloneImage, 5, 0.3);
        vResultImage[i] = resultImage;
    }
    // Merge the filtered channels
    merge(vResultImage, 3, resultMat);
    imshow("Background Blur effect", resultMat);
    waitKey(0);
    return 0;
}

Behind a small image blur effect lie a range of optical imaging principles that form the foundations of computer vision models. Small as it may seem, it rests on solid ground.

Follow me for the complete project documentation ~


Computer vision basics tutorial outline

0. Color space and digital imaging
1. Fundamentals of computer geometry
2. Image enhancement, filtering, pyramid
3. Image feature extraction
4. Image feature description
5. Image feature matching
6. Stereo vision
7. Practical projects

Welcome to my AI channel “AI Technology Club”.