Preface

In the last article, we built a complementary pair of applications: putting masks on and taking them off. In this article, we'll look at a batch line-drawing video that has been making the rounds on Douyin (TikTok). Once we grasp its core principle, we can reproduce the effect even faster with OpenCV.

Extracting the line draft with Photoshop

Recently I saw a video on Douyin demonstrating batch line-drawing extraction in Photoshop.

Implementation principle

To turn an image into a line drawing, the following steps are required:

  • Convert the color image to grayscale
  • Invert the grayscale image
  • Apply a Gaussian blur
  • Blend the result back into the grayscale image with color dodge

Extracting the line draft with OpenCV

We'll use a Jupyter notebook this time so that it's easy to view the images as we go.

1. Import the libraries

import cv2
from matplotlib import pyplot as plt
%matplotlib inline

2. Display the original image

input_img = cv2.imread("image.jpg")  # OpenCV reads images as BGR
plt.figure(figsize=(10, 7))
plt.imshow(cv2.cvtColor(input_img, cv2.COLOR_BGR2RGB))  # convert to RGB for matplotlib

3. Convert to grayscale

gray_img = cv2.cvtColor(input_img, cv2.COLOR_BGR2GRAY)
plt.figure(figsize=(10, 7))
plt.imshow(gray_img, cmap='gray')  # single-channel image, so display with a gray colormap

4. Invert the grayscale image

inv_gray_img = 255 - gray_img  # invert: dark becomes light and vice versa
plt.figure(figsize=(10, 7))
plt.imshow(inv_gray_img, cmap='gray')

5. Gaussian blur

ksize = 21  # kernel size must be odd; larger values give bolder lines
sigma = 0   # 0 lets OpenCV derive sigma from the kernel size
blur_img = cv2.GaussianBlur(inv_gray_img, ksize=(ksize, ksize), sigmaX=sigma, sigmaY=sigma)
plt.figure(figsize=(10, 7))
plt.imshow(blur_img, cmap='gray')

6. Color dodge blending

sketch_img = cv2.divide(gray_img, 255 - blur_img, scale=256)  # color dodge blend
plt.figure(figsize=(15, 10))
plt.imshow(sketch_img, cmap='gray')
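What cv2.divide is doing here is exactly the color dodge formula: result = base × 256 / (255 − blend), saturated at 255. A minimal NumPy equivalent, for illustration only (the cv2.divide call above is the faster way to compute it):

import numpy as np

def color_dodge(base, blend):
    # result = base * 256 / (255 - blend), clipped to [0, 255]
    out = base.astype(np.float32) * 256.0 / (255.0 - blend.astype(np.float32) + 1e-6)
    return np.clip(out, 0, 255).astype(np.uint8)

In flat regions the blurred inverse is roughly 255 minus the pixel value, so the quotient saturates to pure white; only near edges, where the blur makes the two diverge, do dark lines survive.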

Just a few lines of code, and Python + OpenCV holds its own. (Don't hit me, Photoshop designers!)
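For reuse, the whole pipeline fits in one small function. This is just a sketch consolidating the notebook cells above, with the same parameter values:

import cv2

def photo_to_sketch(path, ksize=21):
    """Convert a photo to a line drawing via invert + blur + color dodge."""
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(255 - gray, (ksize, ksize), 0)  # blur the inverted image
    return cv2.divide(gray, 255 - blur, scale=256)          # color dodge blend

cv2.imwrite("sketch.jpg", photo_to_sketch("image.jpg"))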

Extracting the line draft with sketchKeras

The OpenCV method above is simple, but look closely: the lines around the hair bun and the cuffs are not crisp enough. So let's try again with a neural network.

1. Download the source code

Clone the repository:

git clone https://github.com/lllyasviel/sketchKeras.git

Download the weight file mod.h5 and place it in the project directory.

2. Analyze the network structure

sketchKeras is a U-Net-style network. The author did not publish the architecture, but we can load the Keras model file and inspect the network structure through TensorBoard.

First, convert the Keras model to a TensorFlow .pb file:

python keras_to_tensorflow.py --input_model="mod.h5" --output_model="mod.pb"

Then export the computation graph for TensorBoard:

mkdir logs
python3 tensorboard_graph.py
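The article doesn't show tensorboard_graph.py itself; a minimal TensorFlow 1.x-style sketch of what such a script could do — load the frozen mod.pb and dump its graph into logs — might look like this (the file names come from the steps above, everything else is an assumption):

import tensorflow as tf  # assumes TensorFlow 1.x APIs

# Load the frozen graph produced by keras_to_tensorflow.py
with tf.gfile.GFile("mod.pb", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

# Import it into a fresh graph and write it out for TensorBoard
with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")
tf.summary.FileWriter("logs", graph).close()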

This creates a logs directory under the project containing the exported computation graph. Start TensorBoard to view it:

tensorboard --logdir=logs --host=127.0.0.1

We can see that it is a typical U-Net architecture: the input of shape [3, 512, 512, 1] is downsampled step by step to [3, 32, 32, 512], then upsampled back again.

You can also install Netron to browse the network more comfortably.
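Alternatively, since mod.h5 is a regular Keras model, Keras itself can print a layer-by-layer summary (assuming the weights load cleanly; see the h5py tip below):

from keras.models import load_model

mod = load_model('mod.h5')
mod.summary()  # prints each layer with its output shape and parameter count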

3. Preprocessing

Take the original image and resize it so that the longer side becomes 512 (here 384×512), convert it to grayscale, apply a Gaussian blur, and take the difference between the two (a high-pass "light map"). Then normalize the result into a [3, 512, 512, 1] tensor, and the preprocessing is done.

from_mat = from_mat.transpose((2, 0, 1))  # HWC -> CHW: treat each channel as one sample
light_map = np.zeros(from_mat.shape, dtype=np.float32)
for channel in range(3):
    light_map[channel] = get_light_map_single(from_mat[channel])
light_map = normalize_pic(light_map)
light_map = resize_img_512_3d(light_map)  # pad into a (3, 512, 512, 1) tensor
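These helpers come from the sketchKeras repo. Conceptually, get_light_map_single is the blur-and-subtract step described above; a hedged re-implementation (the blur kernel, sign convention, and scaling are assumptions, not the repo's exact code) could look like:

import cv2
import numpy as np

def get_light_map_single_sketch(gray):
    # High-pass filter: the difference between a channel and its blurred copy
    # responds strongly at edges, which is where the lines are.
    blur = cv2.blur(gray.astype(np.float32), (5, 5))  # kernel size is an assumption
    return (gray.astype(np.float32) - blur) / 128.0   # scaling is an assumption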

4. Inference output

As analyzed above, the network's output has shape [3, 512, 512, 1]. It needs to be cropped back to the original size, and after denoising we get the line drawing we want.

# Model inference -> (3, 512, 512, 1)
line_mat = mod.predict(light_map, batch_size=1)
# Move the batch axis last and drop it -> (512, 512, 3)
line_mat = line_mat.transpose((3, 1, 2, 0))[0]
# Crop back to the original size -> (512, 384, 3)
line_mat = line_mat[0:int(new_height), 0:int(new_width), :]
show_active_img_and_save('sketchKeras_colored', line_mat, 'sketchKeras_colored.jpg')
# Collapse the three channels by taking the per-pixel maximum
line_mat = np.amax(line_mat, 2)
# Denoise at several strengths (helpers from the sketchKeras repo)
show_active_img_and_save_denoise_filter2('sketchKeras_enhanced', line_mat, 'sketchKeras_enhanced.jpg')
show_active_img_and_save_denoise_filter('sketchKeras_pured', line_mat, 'sketchKeras_pured.jpg')
show_active_img_and_save_denoise('sketchKeras', line_mat, 'sketchKeras.jpg')

Perfect!

Tip: if load_model fails with an h5py error, just install h5py:

sudo apt-get install libhdf5-dev
pip install h5py

As you can see, the line strokes extracted by sketchKeras are clearer, and sketchKeras_colored retains some color information, which will come in handy later for image colorization.

Download the source code

The files for this article can be downloaded from the official account "Deep Awakening" by replying "RPI14" in the background.

Next up

Now that we can extract a line draft from an image,

it's only natural to spar in both directions:

in the next article we'll go the other way, using the line drawing

to "restore" the color image.

Stay tuned…