Recently I have been working on an image-processing task: given an overexposed picture, recover the details that are "washed out" by the bright light.

Note: I am working with medical images, so some details may differ from the usual case (for example, RGB is not considered; all images are grayscale).

Environment: Win10, Python 3.6.6, Jupyter Notebook, OpenCV 3.4.2

The relevant code has been uploaded to my personal GitHub.

1. Histogram Equalization

A classic algorithm is histogram equalization, which I touched on in a previous post. First of all, a histogram is a graphical representation of the pixel intensity distribution of an image; it intuitively shows us how a picture's pixel values are distributed.

Here’s an example:


Often a histogram has one or more spikes of very different heights. Equalization stretches the range of the pixel intensity distribution to enhance the local contrast of the image.

In other words, it reassigns pixel values. We need to build a mapping from the pixel distribution of the original image to another distribution, and for equalization the mapping function should be the cumulative distribution function (CDF).
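To make the mapping idea concrete, here is a tiny toy sketch of CDF-based remapping (the array values and the 4-level range are made up purely for illustration):

import numpy as np

# A hypothetical 3x3 image with gray levels 0..3, crowded at the dark end
img = np.array([[0, 0, 0],
                [0, 1, 1],
                [1, 2, 3]])
hist, _ = np.histogram(img.flatten(), bins=4, range=(0, 4))  # [4, 3, 1, 1]
cdf = hist.cumsum()                                          # [4, 7, 8, 9]
# Scale the CDF to [0, 3] and use it as a lookup table
lut = np.round((cdf - cdf.min()) / (cdf.max() - cdf.min()) * 3).astype(img.dtype)
res = lut[img]  # levels 0 and 1 map to 0 and 2: the crowded dark end is stretched apart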

We don't really need to derive this function ourselves, because OpenCV has already wrapped it for us. Let me just describe what I found.

OpenCV and NumPy each have their own implementation, but the results are similar.

1. OpenCV

equ = cv2.equalizeHist(tmp)

Just call this function and it’s all wrapped up.
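For completeness, a minimal end-to-end sketch (assuming a grayscale file named test.jpg, as in the later snippets):

import cv2

img = cv2.imread("test.jpg", 0)   # read as grayscale
equ = cv2.equalizeHist(img)       # equalized result, same shape and dtype
cv2.imwrite("test_equ.jpg", equ)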

2. NumPy implementation

Here is the code for doing the equalization manually:

img = cv2.imread("test.jpg",0)
cv2.normalize(img,img, 0, 255, cv2.NORM_MINMAX)
# normalization
lut = np.zeros(256, dtype = img.dtype )
Create an empty lookup tableHist, bins = np. Histogram (img. Flatten (), 256, [0256])# Compute the histogram of the original image

Calculate the cumulative histogram
cdf = hist.cumsum()
cdf_m = np.ma.masked_equal(cdf,0) # Remove the 0 value in the histogram
cdf_m = (cdf_m - cdf_m.min())*255/(cdf_m.max()-cdf_m.min())
# is equivalent to the formula lut[I] = int(255.0 *p[I]) to reassign pixels

cdf = np.ma.filled(cdf_m,0).astype('uint8') 
The element removed from the mask is added to 0

res = cdf[img] # is essentially this line right here
Copy the code

The result is as follows:

2. Gamma and Laplace Transforms

1. The Gamma transform is very common. As I recall, Lightroom has dedicated Gamma processing, and display screen calibration is also related to the Gamma transform.

Formula: s = A · r^γ, where A is a constant (usually 1) and r is the normalized input gray value.

Specifically, it corrects images whose gray levels are too high or too low and enhances contrast. The transform raises each (normalized) pixel value of the original image to the power γ.

When γ > 1, the transform stretches the high-gray-value end of the distribution (and compresses the low end, so the image darkens overall); when γ < 1, it stretches the low-gray-value end (brightening dark regions).

This makes it suitable for correcting low contrast and camera over-exposure.
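A small sketch of what this does to a few sample values (the values and the two γ choices are just for illustration):

import numpy as np

r = np.array([0.1, 0.5, 0.9])  # normalized gray values
print(np.power(r, 2.2))        # γ > 1: [0.0063 0.2177 0.7932] - darkens, bright end spread apart
print(np.power(r, 0.45))       # γ < 1: [0.3548 0.7321 0.9537] - brightens, dark end spread apart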

2. The Laplace transform (more precisely, the Laplacian operator), I think, mainly plays a sharpening role.

In OpenCV, the Laplacian is more often used for edge detection. Simply put, a convolution kernel is constructed first, and then the image is filtered with it block by block. A detailed mathematical explanation is presented here.
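As a sketch of the edge-detection use (assuming the same grayscale test.jpg; cv2.Laplacian builds the kernel for you):

import cv2

img = cv2.imread("test.jpg", 0)
lap = cv2.Laplacian(img, cv2.CV_64F)  # second derivative; the output is signed, so use a wider depth
edges = cv2.convertScaleAbs(lap)      # back to uint8 for display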

3. Code examples

Gamma transform:

# Mainly used for image correction:
# correcting images whose gray levels are too high or too low, enhancing contrast
img = cv2.imread("test.jpg",0)
cv2.normalize(img,img, 0, 255, cv2.NORM_MINMAX)
# normalization
img_t = cv2.convertScaleAbs(img)
# uint16 to uint8
fgamma = 1.4  # gamma value; > 1 darkens, < 1 brightens
img_gamma = np.power((img_t / 255.0), fgamma) * 255.0

The effect is as follows:

Laplace transform

# Sharpen the image using its second derivative
# Honestly, for such a bright picture this is not very useful

cv2.normalize(crop_1, crop_1, 0, 255, cv2.NORM_MINMAX)  # crop_1 is a cropped region of the source image
img_a = cv2.convertScaleAbs(crop_1)

# Sharpening kernel: identity plus the (negative) Laplacian
kernel = np.array([[ 0, -1,  0],
                   [-1,  5, -1],
                   [ 0, -1,  0]])
img_lap = cv2.filter2D(img_a, -1, kernel)  # ddepth=-1 keeps the source depth

The result is as follows (and yes, as the comment says, it barely helps here):

3. CLAHE

I've already covered CLAHE in a previous article, so I won't go into detail here. CLAHE works well and is easy to call, so let's just look at the call:

img = cv2.imread("test.jpg",0) clahe = cv2.createclahe (clipLimit=600.0, tileGridSize=(1,1)) img_clahe = clahe.apply(img)Copy the code

clipLimit sets the contrast limit, and tileGridSize controls the size of each local region that is processed.
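For comparison, a minimal sketch of a gentler, more typical parameter pair next to the aggressive one used above (clipLimit=2.0 with tileGridSize=(8, 8) is just a common starting point, not a value from this experiment):

import cv2

img = cv2.imread("test.jpg", 0)
mild = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(img)      # gentle, local tiles
strong = cv2.createCLAHE(clipLimit=600.0, tileGridSize=(1, 1)).apply(img)  # one global tile, heavy clipping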

The result:

In fact, if you look closely, you may notice a fourth spot in this image (in the middle) that the earlier histogram equalization and Gamma transform did not reveal.

I think this may be because the earlier algorithms are just simple formula-based transforms, relatively crude, whereas with adaptive histogram equalization, although the black-and-white contrast may not be as striking as in the earlier results, we did not lose the important details.

4. Retinex

This is an image-processing algorithm used officially by NASA. I looked at some examples, and the results are surprisingly good (after all, satellite image processing is very demanding).

Retinex is a commonly used image enhancement method grounded in scientific experiments and analysis. It was proposed by Edwin H. Land in 1963. The name Retinex is a blend of two words: retina and cortex. Land's Retinex model is based on three assumptions:

1. The real world is colorless; the color we perceive is the result of the interaction between light and matter. The water we see is colorless, but a thin water film such as a soap film is colorful, and that is the result of light interference at the film surface.
2. Each color region is composed of the red, green, and blue primary colors of given wavelengths.
3. The three primary colors determine the color of each unit region.

The principle is very much an optics one. Simply put, by analogy with how a camera forms an image: when incident light hits an object, part of it is absorbed, and the rest is reflected at some angle before entering the human eye.

According to Retinex theory, the brightness perceived by the human eye depends on the illumination of the environment and the reflection of that illumination off the surface of the object. Its mathematical expression is:

S(x, y) = R(x, y) · L(x, y)    (1.1)

where (x, y) are the coordinates on the two-dimensional image plane.

Taking the log of both sides of equation (1.1) gives:

log S(x, y) = log R(x, y) + log L(x, y)

One consideration for taking logarithms is to reduce multiplication to addition, which makes it easier to calculate.

Generally speaking, R(x, y) represents the reflectance properties of the object, that is, the intrinsic nature of the image; it carries most of the image information and should be preserved as much as possible. S(x, y) is the image data we actually observe, and L(x, y) is the illumination component of the incident light.

So in order to get R, we have to estimate L, and that is the whole point of this algorithm. Retinex theory states that L(x, y) can be obtained by applying a Gaussian blur to S(x, y).
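The MSRCR code below calls an SSR helper that is not shown in this post; a minimal sketch of single-scale Retinex along the lines just described might look like this (the +1.0 offset to avoid log(0) is my own assumption):

import cv2
import numpy as np

def SSR(img, sigma):
    # log R(x,y) = log S(x,y) - log L(x,y), with L estimated by Gaussian-blurring S
    s = img.astype(np.float64) + 1.0            # shift to avoid log(0)
    illum = cv2.GaussianBlur(s, (0, 0), sigma)  # illumination estimate L(x,y)
    return np.log(s) - np.log(illum)            # log-domain reflectance log R(x,y)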

As this algorithm developed, many people proposed improvements: the earlier single-scale retinal enhancement SSR, the multi-scale retinal enhancement algorithm MSR, and the later multi-scale retinal enhancement with color restoration, MSRCR, among others. Various papers have put forward their own improvements; here I referred to this article to write an MSRCR function.
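For reference, MSR is essentially a weighted sum of SSR results at several scales; a sketch with equal weights (the three sigma values are common choices in the literature, not taken from this post):

import numpy as np

def MSR(img, sigmas=(15, 80, 250)):
    # Average the single-scale Retinex outputs over several blur radii
    return np.mean([SSR(img, s) for s in sigmas], axis=0)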

At first the effect was not good; in fact the result was as blank as the Laplace one. But after looking at how some well-known practitioners use it and at the research papers, I realized that the Retinex algorithm is suited to dark images and has little effect on very bright ones.

In addition, the Retinex algorithm is prone to halo artifacts, which may affect observation. In my case, though, the interference from the halo does not matter much.

The effect is as follows:

You can see that all four spots show up very clearly. The visual result is a little better than the CLAHE result above, and of course you can also see the halo.

The relevant call code is as follows:

def MSRCR(img, sigma, dynamic):
    img_msrcr = np.zeros_like(img * 1.0)
    img = SSR(img, sigma)  ## log[R(x,y)]

    img_arr = img
    mean = np.mean(img_arr)
    sum1 = img_arr.sum()
    img_arr2 = img_arr * img_arr
    sum2 = img_arr2.sum()
    # Sum of squared deviations; 1024*1024 is the pixel count (this assumes a 1024x1024 image)
    var_squ = sum2 - 2*mean*sum1 + 1024*1024*mean*mean
    var = np.sqrt(var_squ)

    Min = mean - dynamic*var
    Max = mean + dynamic*var

    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            img_msrcr[i][j] = (img[i][j] - Min) / (Max - Min) * 255
            ## Overflow clamping
            if img_msrcr[i][j] > 255:
                img_msrcr[i][j] = 255
            if img_msrcr[i][j] < 0:
                img_msrcr[i][j] = 0

    return img_msrcr
    
sigma = 30
## Specify the scale (blur radius)
dy = 2
## The smaller the dynamic value, the higher the contrast of the image.
## In general, dynamic values between 2 and 3 give a noticeable enhancement.

retinex_msrcr = MSRCR(img_rev, sigma, dy)
cv2.normalize(retinex_msrcr, retinex_msrcr, 0, 255, cv2.NORM_MINMAX)
img_c = cv2.convertScaleAbs(retinex_msrcr)
fgamma = 1.4
img_retinex = np.power(img_c / 255.0, fgamma) * 255.0  ## Gamma correction reduces the halo a bit

