For people who know Photoshop, image matting is a very simple operation; sometimes a subject can be cut out in a few seconds. For more complicated images, however, it can take quite a bit of time. Today I will show a quick and easy way to use Python to batch-extract people from images.

The results

At first, I did not believe in automatic matting either; I always felt it would not be accurate enough to produce a satisfactory cutout. So let me just show you what the results look like.

Let’s take a look at the original image:

The background of this image is a solid color, so cutting it out in Photoshop would be relatively simple, and it is no problem for our program either. Here is the result:

Because the result is a PNG with a transparent background and the original image has a white background, you can hardly tell the difference. To show the effect, I placed both the original and the edited image on a yellow background:
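In case you want to reproduce this comparison yourself, here is a minimal sketch of compositing a transparent PNG onto a yellow background with Pillow; the file names are hypothetical and not part of the tutorial's code:

from PIL import Image

# Hypothetical file names for illustration
cutout = Image.open('person_cutout.png').convert('RGBA')
# Create a solid yellow canvas the same size as the cutout
background = Image.new('RGBA', cutout.size, (255, 255, 0, 255))
# Paste the cutout using its own alpha channel as the mask
background.paste(cutout, (0, 0), cutout)
background.convert('RGB').save('person_on_yellow.jpg')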

Viewed this way, the effect is much more obvious, and the matting quality looks very good. But such a simple picture is not very convincing, so let's look at a more complicated one:

The background of this image is a bit more complicated than before, and it has a gradient. Let's see how it looks after matting.

Since the background of the original image is not white, I did not bother with a yellow background this time. I find this result quite satisfactory.

So what about images with multiple people? Take a look at the following image:

Here there are three people; let's see if the program can handle it:

It’s a little flawed, but it’s still pretty good.

Let’s look at the last example:

This one is much more complicated than the previous images, so how well does it work? Let's take a look:

Ha ha, not only did it identify the person, it also identified the torch and cut it out as well. In general, it has no problem completing a character cutout.

How is this achieved?

After seeing the results, you must be wondering how this is achieved. That's where PaddlePaddle comes in. PaddlePaddle is an open source deep learning platform that lets you do transfer learning with just a dozen lines of code.

Before using it, we first install PaddlePaddle. You can go to the official website and follow the instructions for a quick installation:

https://www.paddlepaddle.org.cn/install/quick

For convenience, the CPU version is installed here directly with pip. We execute the following command:

python -m pip install paddlepaddle -i https://mirror.baidu.com/pypi/simple

Once the installation is complete, you can test whether it succeeded in your environment. Here I open a command-line window and run python.exe first (assuming you have configured the environment variables):

C:\Users\zaxwz>python

Then run the following code in the interpreter:

import paddle.fluid
paddle.fluid.install_check.run_check()

If the console displays "Your Paddle is installed successfully! Let's start deep Learning with Paddle Now", it means the installation was successful. We also need to install PaddleHub:

pip install -i https://mirror.baidu.com/pypi/simple paddlehub

Now we can start writing code.

Start the cutout

The matting code itself is very simple, roughly divided into the following steps:

  1. Import modules

  2. Load the model

  3. Get the file list

  4. Cut out

To make the code easier to read, I will write it out step by step:

1. Import modules

import os
import paddlehub as hub

2. Load the model

humanseg = hub.Module(name='deeplabv3p_xception65_humanseg')
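The first time this line runs, PaddleHub downloads the model automatically (you can see this in the log output later). If you prefer, you can also install it ahead of time from the command line; a minimal sketch, assuming the hub command that ships with PaddleHub is on your PATH:

hub install deeplabv3p_xception65_humanseg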

3. Get the file list

# Image file directory
path = 'D:/CodeField/Workplace/PythonWorkplace/PillowTest/11_yellow/img/'
# Get the files in the directory
files = os.listdir(path)
# List of image paths
imgs = []
# Build the full path of each image
for i in files:
   imgs.append(path + i)
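Note that os.listdir picks up every file in the folder, so anything that is not an image would end up in the batch. A minimal sketch of a safer listing, assuming you only want a few common formats (the extension list is my own choice, not something the module requires):

# Only keep files with common image extensions (assumed list)
exts = ('.jpg', '.jpeg', '.png', '.bmp')
imgs = [path + i for i in os.listdir(path) if i.lower().endswith(exts)]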

4. Cut out

results = humanseg.segmentation(data={'image': imgs})

Let’s run this on the console:

python cutout.py

Output:

[2020-03-10 21:42:34.587] [INFO] - Installing deeplabv3p_xception65_humanseg module

[2020-03-10 21:42:34.605] [INFO] - Module deeplabv3p_xception65_humanseg already installed in C:\Users\zaxwz\.paddlehub\modules\deeplabv3p_xception65_humanseg

[2020-03-10 21:42:35.472] [INFO] - 0 Pretrained Paramaters Loaded by PaddleHub

After the run completes, we can see a humanseg_output directory under the project, where the cutout images are stored. Of course, we can also simplify the file-listing part of the code above:

import os, paddlehub as hub
humanseg = hub.Module(name='deeplabv3p_xception65_humanseg')        # Load the model
path = 'D:/CodeField/Workplace/PythonWorkplace/PillowTest/11_yellow/img/'    # File directory
files = [path + i for i in os.listdir(path)]    # Get the file list
results = humanseg.segmentation(data={'image':files})    # Cut out
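To confirm everything worked, you can quickly list the output folder afterwards; a minimal sketch, assuming the default humanseg_output directory mentioned above:

import os

out_dir = 'humanseg_output'   # default output folder created by the module
if os.path.isdir(out_dir):
    for name in os.listdir(out_dir):
        print(name)
else:
    print('No output directory found - check the paths above')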

At this point, we have completed batch matting in five lines of code. Interested developers should give it a try!