
What is a color space

To process images more effectively, we sometimes work in different color spaces. A color space is an abstract mathematical model. Color itself is the sensation our eyes produce in response to light of different frequencies; to represent colors more conveniently, people have established a variety of color models that describe colors with one-, two-, or three-dimensional coordinate systems. The range of colors that such a coordinate system can define is its color space.

Fundamentals of Color Space

This section introduces the basics of the color spaces commonly used with OpenCV: RGB, CIE L*a*b*, HSL, HSV, and YCbCr.

OpenCV provides more than 150 color space conversion methods to perform the transformations a user may require. In the following example, I will demonstrate how to convert an image loaded in the BGR color space (the channel order used by cv2.imread()) to other color spaces such as HSV, HLS, or YCbCr.
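To get an idea of how many conversions are available in your own OpenCV build, a simple (unofficial) sketch is to list the COLOR_ constants exposed by the cv2 module:

import cv2

# List every color conversion flag exposed by this OpenCV build
conversion_flags = [flag for flag in dir(cv2) if flag.startswith('COLOR_')]
print(len(conversion_flags))   # well over 150 in recent OpenCV versions
print(conversion_flags[:5])    # first few flags, e.g. Bayer-pattern conversions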

Displaying color spaces

The commonly used color spaces are described below:

RGB: An additive color space in which a specific color is represented by its red, green, and blue component values. It works in a way similar to human vision, which makes it well suited for displaying images and graphics on computers.
CIELAB: Also written CIE L*a*b*, or simply LAB. Represents a color as three values, where L* is the lightness, a* is the green-red component, and b* is the blue-yellow component. Commonly used in some image processing algorithms.
HSV: A transformation of the RGB color space in which a color is represented by three components: hue, saturation, and value.
HSL: Also known as HLS or HSI (I for intensity). Very similar to HSV, except that it uses lightness instead of value.
YCbCr: A family of color spaces used in video and digital photography systems, which represent a color by a luma component (Y) and two chroma components (Cb and Cr). Popular in image segmentation.

In the following example, the image is loaded in the BGR color space and converted to the color spaces described above. The key function in this script is cv2.cvtColor(), which converts an input image from one color space to another.

import cv2
import matplotlib.pyplot as plt


def show_with_matplotlib(color_img, title, pos):
    # Convert the BGR image to RGB for display with matplotlib
    img_RGB = color_img[:, :, ::-1]
    ax = plt.subplot(3, 6, pos)
    plt.imshow(img_RGB)
    plt.title(title, fontsize=8)
    plt.axis('off')


# Load the image (cv2.imread() returns it in BGR order)
image = cv2.imread('example.png')

plt.figure(figsize=(12, 5))
plt.suptitle("Color spaces in OpenCV", fontsize=14, fontweight='bold')

# Convert to grayscale
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Split the BGR image into its three channels
(bgr_b, bgr_g, bgr_r) = cv2.split(image)

# Convert to HSV
hsv_image = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
(hsv_h, hsv_s, hsv_v) = cv2.split(hsv_image)

# Convert to HLS
hls_image = cv2.cvtColor(image, cv2.COLOR_BGR2HLS)
(hls_h, hls_l, hls_s) = cv2.split(hls_image)

# Convert to YCrCb
ycrcb_image = cv2.cvtColor(image, cv2.COLOR_BGR2YCrCb)
(ycrcb_y, ycrcb_cr, ycrcb_cb) = cv2.split(ycrcb_image)

# Show the original BGR image
show_with_matplotlib(image, "BGR - image", 1)

# Show the grayscale image
show_with_matplotlib(cv2.cvtColor(gray_image, cv2.COLOR_GRAY2BGR), "gray image", 1 + 6)

# Display the BGR component channels
show_with_matplotlib(cv2.cvtColor(bgr_b, cv2.COLOR_GRAY2BGR), "BGR - B comp", 2)
show_with_matplotlib(cv2.cvtColor(bgr_g, cv2.COLOR_GRAY2BGR), "BGR - G comp", 2 + 6)
show_with_matplotlib(cv2.cvtColor(bgr_r, cv2.COLOR_GRAY2BGR), "BGR - R comp", 2 + 6 * 2)

# Show the other color space component channels in the same way
# ...

The order of channels (BGR or RGB) should be explicitly specified when performing color space transformations:

# Load the image in the BGR color space
image = cv2.imread('color_spaces.png')
# Convert it to the HSV color space
hsv_image = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

Note that cv2.COLOR_BGR2HSV is used rather than cv2.COLOR_RGB2HSV, because cv2.imread() loads the image in BGR order.
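As a quick illustration of why the channel order matters (a minimal sketch, assuming an image file named 'color_spaces.png' is available), displaying a BGR image directly with matplotlib swaps red and blue, while converting with cv2.COLOR_BGR2RGB first gives the correct colors:

import cv2
import matplotlib.pyplot as plt

# cv2.imread() returns pixels in BGR order
img_bgr = cv2.imread('color_spaces.png')

# Convert to RGB, which is what matplotlib expects
img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)

plt.subplot(1, 2, 1)
plt.imshow(img_bgr)   # interpreted as RGB: red and blue appear swapped
plt.title("BGR shown directly")
plt.axis('off')

plt.subplot(1, 2, 2)
plt.imshow(img_rgb)   # correct colors
plt.title("converted to RGB")
plt.axis('off')

plt.show()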

The effect of different color spaces on skin segmentation

The above color spaces can be used for different image processing tasks and techniques. Taking skin segmentation as an example, we will examine how skin segmentation performs when different algorithms are applied in different color spaces.

In addition to the cv2.cvtColor() function described above, the key function in this example is cv2.inRange(), which checks, element by element, whether an array lies between the corresponding elements of two other arrays (a lower-bound array and an upper-bound array).

Therefore, we use the cv2.inRange() function to detect the colors corresponding to skin. The values defined in these two arrays (the lower and upper bounds) play a crucial role in the performance of the segmentation algorithm, and you can experiment with the bounds to find the values that work best.
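The following minimal sketch (the pixel values are made up purely for illustration) shows how cv2.inRange() behaves: pixels whose values fall inside the bounds become 255 in the output mask, and everything else becomes 0.

import numpy as np
import cv2

# A tiny 1 x 3 "image" in HSV, one pixel per column (illustrative values only)
hsv_pixels = np.array([[[10, 100, 200], [5, 30, 90], [19, 60, 120]]], dtype=np.uint8)

lower = np.array([0, 48, 80], dtype=np.uint8)
upper = np.array([20, 255, 255], dtype=np.uint8)

mask = cv2.inRange(hsv_pixels, lower, upper)
print(mask)   # [[255   0 255]] - only the pixels within the bounds are kept

The complete skin segmentation script is shown below: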

import numpy as np
import cv2
import matplotlib.pyplot as plt
import os

# Name and path of the images to load:
image_names = ['1.png', '2.png', '3.png', '4.png', '5.png', '6.png']
path = 'skin_test_imgs'


# Load all test images building the relative path using 'os.path.join'
def load_all_test_images():
    """Loads all the test images and returns the created array containing the loaded images"""

    skin_images = []
    for index_image, name_image in enumerate(image_names):
        # Build the relative path where the current image is:
        image_path = os.path.join(path, name_image)
        # print("image_path: '{}'".format(image_path))
        # Read the image and add it (append) to the structure 'skin_images'
        img = cv2.imread(image_path)
        skin_images.append(img)
    # Return all the loaded test images:
    return skin_images


# visualization
def show_images(array_img, title, pos):
    for index_image, image in enumerate(array_img):
        show_with_matplotlib(image, title + "_" + str(index_image + 1), pos + index_image)


# Visualize an image at the given position of the grid
def show_with_matplotlib(color_img, title, pos):
    # Convert the BGR image to RGB
    img_RGB = color_img[:, :, ::-1]

    ax = plt.subplot(5, 6, pos)
    plt.imshow(img_RGB)
    plt.title(title, fontsize=8)
    plt.axis('off')

# Lower and upper bound arrays for skin tones in the HSV color space
lower_hsv = np.array([0, 48, 80], dtype='uint8')
upper_hsv = np.array([20, 255, 255], dtype='uint8')

# HSV color space based skin detection
def skin_detector_hsv(bgr_image):
    hsv_image = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Look for regions with skin tones in the HSV color space
    skin_region = cv2.inRange(hsv_image, lower_hsv, upper_hsv)
    return skin_region

lower_hsv_2 = np.array([0, 50, 0], dtype="uint8")
upper_hsv_2 = np.array([120, 150, 255], dtype="uint8")


# HSV color space based skin detection
def skin_detector_hsv_2(bgr_image):
    # Convert the image from the BGR color space to HSV
    hsv_image = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)

    # Look for regions with skin tones in the HSV color space
    skin_region = cv2.inRange(hsv_image, lower_hsv_2, upper_hsv_2)
    return skin_region

lower_ycrcb = np.array([0, 133, 77], dtype="uint8")
upper_ycrcb = np.array([255, 173, 127], dtype="uint8")

# Skin detection based on YCrCb color space
def skin_detector_ycrcb(bgr_image):
    ycrcb_image = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCR_CB)
    skin_region = cv2.inRange(ycrcb_image, lower_ycrcb, upper_ycrcb)
    return skin_region

# Threshold setting for skin detection based on BGR color space
def bgr_skin(b, g, r):
    # value based on the paper "RGB-H-CBCR Skin Colour Model for Human Face Detection"
    e1 = bool((r > 95) and (g > 40) and (b > 20) and ((max(r, max(g, b)) - min(r, min(g, b))) > 15) and (abs(int(r) - int(g)) > 15) and (r > g) and (r > b))
    e2 = bool((r > 220) and (g > 210) and (b > 170) and (abs(int(r) - int(g)) <= 15) and (r > b) and (g > b))
    return e1 or e2

# Skin detection based on BGR color space
def skin_detector_bgr(bgr_image):
    h = bgr_image.shape[0]
    w = bgr_image.shape[1]

    res = np.zeros((h, w, 1), dtype='uint8')

    for y in range(0, h):
        for x in range(0, w):
            (b, g, r) = bgr_image[y, x]
            if bgr_skin(b, g, r):
                res[y, x] = 255
    
    return res

skin_detectors = {
    'ycrcb': skin_detector_ycrcb,
    'hsv': skin_detector_hsv,
    'hsv_2': skin_detector_hsv_2,
    'bgr': skin_detector_bgr
}

def apply_skin_detector(array_img, skin_detector):
    skin_detector_result = []
    for index_image, image in enumerate(array_img):
        detected_skin = skin_detectors[skin_detector](image)
        bgr = cv2.cvtColor(detected_skin, cv2.COLOR_GRAY2BGR)
        skin_detector_result.append(bgr)
    return skin_detector_result

plt.figure(figsize=(15, 8))
plt.suptitle("Skin segmentation using different color spaces", fontsize=14, fontweight='bold')

# load image
test_images = load_all_test_images()

# Draw the original image
show_images(test_images, "test img", 1)

# Apply the skin detection function to each image
for i, key in enumerate(skin_detectors.keys()):
    show_images(apply_skin_detector(test_images, key), key, 7 + i * 6)

plt.show()

The skin_detectors dictionary is built so that every skin segmentation algorithm can be applied to the test images. In the above example, four skin detectors are defined. A specific skin segmentation function (such as skin_detector_ycrcb) can be invoked as follows:

detected_skin = skin_detectors['ycrcb'](image)

The segmentation results produced by the program are as follows:

You can use multiple test images to compare the different skin segmentation algorithms and see how they behave under different conditions.
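If you want to look at the detected regions more closely, one possible approach (an illustrative sketch that reuses the skin_detectors dictionary defined above; the file name is just a placeholder) is to combine the returned mask with cv2.bitwise_and() so that only the skin pixels of the original image are kept:

import cv2

# Placeholder path - replace with one of your own test images
image = cv2.imread('skin_test_imgs/1.png')

# Apply one of the detectors defined above to obtain a binary mask
mask = skin_detectors['ycrcb'](image)

# Keep only the pixels marked as skin; everything else becomes black
skin_only = cv2.bitwise_and(image, image, mask=mask)

cv2.imshow("detected skin", skin_only)
cv2.waitKey(0)
cv2.destroyAllWindows()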