Compiled from GitHub by Heart of the Machine; contributors: Siyuan, Lu Xue.

Image restoration has many appealing applications. Designers often need Photoshop to fill gaps in an image, a slow and painstaking process, so there have long been attempts to automate it with machine learning models. This article introduces DeepCreamPy, a project that automatically repairs gaps and mosaics in comic images. The project is based on the partial-convolution method Nvidia proposed a few months ago for repairing irregular holes in images.

This article gives a brief overview of that research and of the DeepCreamPy implementation. Readers can download the project code or the pre-built binaries and try repairing comic images or mosaics themselves. Inference runs directly on the CPU, and Windows users can run the pre-built executable to repair images without even setting up an environment.

Project address: github.com/deeppomf/De…

Image inpainting serves a variety of applications, for example image editing: removing unwanted content and filling the resulting hole with plausible imagery. Previous deep learning methods focused on a rectangular region in the center of the image and often relied on expensive post-processing. The method DeepCreamPy builds on proposes a new inpainting model that robustly produces meaningful predictions for irregular hole patterns (Figure 1), blending seamlessly with the rest of the image without any additional post-processing or blending operation.

Figure 1: original images and the corresponding restoration results produced by the partial-convolution-based network proposed in this study.

Earlier image restoration methods that do not use deep learning fill holes using statistics from the rest of the image. PatchMatch [3], one of the best such methods, iteratively searches for the most suitable image patches to fill the holes. Although its results are generally smooth, it is limited by the available image statistics and has no concept of visual semantics. For example, in Figure 2(b), PatchMatch smoothly fills the missing part of the painting with patches from the surrounding shadow and wall, whereas a semantics-aware method would use patches from the painting itself.

Deep neural networks learn semantic priors and meaningful hidden representations end to end, and have been used in recent inpainting work. These networks apply convolutional filters to images in which the missing content has been replaced with fixed placeholder values. As a result, the methods depend on those initial hole values, which typically shows up as textureless regions, obvious color contrast in the hole area, or artificial edge responses around the hole. Figures 2(e) and 2(f) show an example of a U-Net architecture with typical convolutional layers, initialized with different hole values (training and testing use the same initialization scheme).



Figure 2: Effects of different image restoration methods.

Another drawback of many recent methods is that they focus only on rectangular holes, usually in the center of the image. This study finds that such constraints may lead to overfitting on rectangular holes and ultimately limit the practical usability of these models. Pathak and Yang et al. assume a 64×64 square hole at the center of a 128×128 image, while Iizuka et al. remove this central-hole assumption to handle irregularly shaped holes, but perform quantitative analysis on only a small set of images with irregular masks (51 test images in [8]). To address irregular masks, which are far more common in practice, the Nvidia study collected a large number of images with irregular masks of various sizes and analyzed the effect of mask size and of how masks relate to the image boundary.

To handle irregular masks properly, the Nvidia study proposes a partial convolutional layer, consisting of a masked and re-normalized convolution operation followed by a mask-update step. The masked, re-normalized convolution is related to the segmentation-aware convolution used in image segmentation [9], but that work does not modify the input mask. Here, partial convolution means that, given a binary mask, the convolution result at each layer depends only on the non-hole regions. The main extension of this study is the automatic mask-update step, which removes the mask wherever the partial convolution was able to operate on unmasked values. Given enough layers of successive updates, even the largest hole will eventually shrink away, leaving only valid responses in the feature map. The partial convolutional layer ultimately allows the model to ignore the placeholder hole values.
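The mechanics of masked convolution, re-normalization, and mask update can be sketched in a few lines of NumPy. This is an illustrative single-channel, loop-based version of the idea described in the paper, not the project's actual implementation:

```python
import numpy as np

def partial_conv2d(x, mask, kernel, bias=0.0):
    """Single-channel partial convolution with 'valid' padding (illustrative).
    x: (H, W) image; mask: (H, W) binary, 1 = valid pixel; kernel: (k, k)."""
    k = kernel.shape[0]
    out_h, out_w = x.shape[0] - k + 1, x.shape[1] - k + 1
    out = np.zeros((out_h, out_w))
    new_mask = np.zeros((out_h, out_w))
    window_size = k * k
    for i in range(out_h):
        for j in range(out_w):
            m = mask[i:i + k, j:j + k]
            valid = m.sum()
            if valid > 0:
                # zero out hole pixels, then re-normalize by the
                # fraction of valid pixels inside the window
                patch = x[i:i + k, j:j + k] * m
                out[i, j] = (kernel * patch).sum() * (window_size / valid) + bias
                # mask update: any valid input pixel makes the output valid
                new_mask[i, j] = 1.0
            # windows with no valid pixels output 0 and stay masked
    return out, new_mask
```

Because each output becomes valid as soon as its window touches any valid pixel, stacking such layers shrinks the hole by roughly one kernel radius per layer, which is exactly why "even the largest hole can eventually be eliminated."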

DeepCreamPy Image restoration project

Recently, deeppomf open-sourced DeepCreamPy, an implementation of "Image Inpainting for Irregular Holes Using Partial Convolutions" that uses a deep fully convolutional network to repair comic images. DeepCreamPy reconstructs occluded comic images into plausible complete ones and, unlike typical inpainting work, uses irregular masks.

The user must first color the regions to be repaired green, using a simple drawing tool or Photoshop. A "corrupted image" with a green mask and the image reconstructed by DeepCreamPy are shown below.

Many excellent image restoration projects have been open-sourced before, such as DeepFillv1 and DeepFillv2 by Jiahui Yu and colleagues, though the DeepFillv2 code has not been released. Heart of the Machine also tried DeepFillv1: it worked very well on the provided test images but only moderately well on images we supplied ourselves.

Figure 3: top row, the results shown by the DeepFillv1 project; bottom row, our reconstruction results.

According to the project page, DeepCreamPy focuses on repairing comic images of any size with masks of any shape, as well as mosaics in comics, though mosaic repair is still somewhat unstable. The author also says he is working on a visual interface, which users may later be able to use to test the model's restoration ability.

So far, the author has released pre-built binaries; Windows users only need to download them to run the tool directly. Users on other systems can run the pre-trained model from the project source, or simply retrain the model.

  • Pre-built binaries can be downloaded at github.com/deeppomf/De…

  • Pre-trained model address: drive.google.com/open?id=1by…

Whether using the pre-trained model or retraining it, the project requires the following tools in the computing environment:

  • Python 3.6

  • TensorFlow 1.10

  • Keras 2.2.4

  • Pillow

  • h5py

Importantly, running inference alone to repair images does not require a GPU; the project has been tested on Ubuntu 16.04 and 64-bit Windows. The TensorFlow 1.10 build used by the project was compiled for Python 3.6, so it is not compatible with Python 2 or Python 3.7. Readers who want to try it can run the following command to install the required libraries:

$ pip install -r requirements.txt

DeepCreamPy usage method

1. Repairing bar-censored areas

For each image you want to repair, use image editing software (such as Photoshop or GIMP) to color the areas to be repaired pure green (0, 255, 0). The pencil tool is strongly recommended over the brush; if you do not use the pencil, make sure anti-aliasing is turned off in whatever tool you use.

The author himself uses the Magic Wand selection tool (with anti-aliasing turned off) to select the censored areas, then slightly expands the selection and fills it with green (0, 255, 0) using the paint bucket tool.

To expand a selection in Photoshop, go to Select > Modify > Expand. To expand a selection in GIMP, go to Select > Grow. Save these images as PNG into the decensor_input folder.
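The manual coloring step can also be done programmatically with Pillow (already in the project's requirements). The sketch below is a hypothetical helper, not part of DeepCreamPy: it paints a rectangular region in the exact green the tool looks for, with hard edges (`Image.paste` does no blending, so there is no anti-aliasing to worry about):

```python
from PIL import Image

GREEN = (0, 255, 0)  # the exact mask color described above

def paint_green_box(in_path, out_path, box):
    """Hypothetical helper replacing the manual Photoshop/GIMP step:
    paint a hard-edged pure-green rectangle over the region to repair.
    box = (left, upper, right, lower) in pixels; output saved as PNG."""
    img = Image.open(in_path).convert("RGB")
    img.paste(GREEN, box)  # solid fill, no anti-aliasing or blending
    img.save(out_path, "PNG")
```

For non-rectangular regions you would still use an editor, but for simple bar censors a box is often enough.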

A. Using the pre-built binary (Windows)

Double-click the decensor executable to repair the images.

B. Running from source

Run the following command to repair the images:

$ python decensor.py

After repair, the images are saved to the decensor_output folder. Each image takes a few minutes to repair.

2. Repairing mosaic-censored areas

Perform the same coloring steps as for bar censors and place the colored images in the decensor_input folder. Also place the original, uncolored images in the decensor_input_original folder, making sure each original has the same base name as its colored version.

For example, if the original image is named meride.jpg, put it in the decensor_input_original folder; the colored image, named meride.png, goes in the decensor_input folder.
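The pairing rule above (same base name, extension may differ) is easy to get wrong across many files. A small hypothetical sanity-check script, assuming the two folder names used by the project, could verify it before running the tool:

```python
import os

def check_mosaic_pairs(input_dir, original_dir):
    """Hypothetical pre-flight check: every colored image in decensor_input
    must have an original with the same base name (any extension) in
    decensor_input_original. Returns the base names missing an original."""
    colored = {os.path.splitext(f)[0] for f in os.listdir(input_dir)}
    originals = {os.path.splitext(f)[0] for f in os.listdir(original_dir)}
    return sorted(colored - originals)  # empty list means all paired
```

Running `check_mosaic_pairs("decensor_input", "decensor_input_original")` before the mosaic step would catch, for instance, a colored meride.png whose meride.jpg original was forgotten.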

A. Using the pre-built binary (Windows)

Double-click decensor_mosaic to repair the images.

B. Running from source

Run the following command to repair the images:

$ python decensor.py --is_mosaic=True

After repair, the images are saved to the decensor_output folder. Each image takes a few minutes to repair.

Troubleshooting

If your decensor output looks like the following, the repair area was not colored properly.

Here are some examples of good and bad colored images.
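The most common coloring mistake is anti-aliased edges: pixels that are almost but not exactly (0, 255, 0), which the tool will not treat as mask. A hypothetical Pillow diagnostic, assuming only that the mask color must match exactly, can flag such images before running the repair:

```python
from PIL import Image

TARGET = (0, 255, 0)  # the exact mask color DeepCreamPy expects

def count_off_green(path, tol=40):
    """Hypothetical diagnostic: count pixels that are *near* pure green but
    not exactly (0, 255, 0) -- a telltale sign of anti-aliased brush edges.
    A nonzero count suggests the mask was painted with anti-aliasing on."""
    img = Image.open(path).convert("RGB")
    off = 0
    for px in img.getdata():
        near = all(abs(c - t) <= tol for c, t in zip(px, TARGET))
        if near and px != TARGET:
            off += 1
    return off
```

A result of 0 does not guarantee a good mask, but a large count almost certainly means the pencil-tool / anti-aliasing advice above was not followed.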

Paper: Image Inpainting for Irregular Holes Using Partial Convolutions


Paper link: arxiv.org/pdf/1804.07…

Abstract: Existing deep learning based image inpainting methods use a standard convolutional network over the corrupted image, with convolutional filter responses conditioned on both valid pixels and the substitute values in the masked holes (typically the mean value). This often leads to artifacts such as color discrepancy and blurriness. Post-processing is usually applied to reduce such artifacts, but it is expensive and may fail. We propose partial convolutions, where the convolution is masked and re-normalized to be conditioned on valid pixels only. We further include a mechanism that automatically generates an updated mask for the next layer as part of the forward pass. Our model outperforms other methods on irregular masks, which we validate through qualitative and quantitative comparisons.