
Welcome to search for "Python Researcher" on WeChat!


No more hiding, cards on the table! Brother Chen has built a portrait cartoonization feature, and it is absolutely addictive to play with!

Today we're going to do something interesting: portrait cartoonization. The cartoonization in this article does not rely on any third-party API. Brother Chen knows that Baidu offers an interface that solves this in a dozen lines of code, but the number of calls is limited. So instead, Brother Chen builds a neural network model, trains it on a dataset, and ends up with a model of his own.

This way, the portrait cartoonization feature can be used as often as you like, and it can also be improved by raising the quality of the dataset or tuning the parameters, so that the generated cartoon portraits look even more lifelike!

Here’s the effect:

Isn't the effect amazing after seeing it? Brother Chen will tell you: it is actually very simple. After reading this article, you can create your own favorite cartoon avatar too.

01. Build the environment

Here Brother Chen uses open-source code from GitHub, which includes the complete model structure, model files, dataset, and so on. The project address is below.

The project address is as follows: https://github.com/minivision-ai/photo2cartoon
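To get the source code locally, you can clone the repository as sketched below (or simply download the ZIP from the GitHub page instead):

git clone https://github.com/minivision-ai/photo2cartoon.git
cd photo2cartoon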

Readers of this article may not have any deep learning background, but that's fine: Brother Chen will show you, step by step, how to get this project running and generate your own cartoon avatar!

1. Install the library

After downloading the source code, set up the runtime environment before running anything.

The screenshot above lists the libraries the project needs. In practice, the following four commands are enough to install them (the project lists tensorflow-gpu, implying it should run on a GPU, but it actually runs fine on an ordinary laptop or desktop).

pip install onnxruntime
pip install face-alignment
pip install torch          # note: the PyPI package is named torch, not pytorch
pip install tensorflow==1.15


Note that TensorFlow version 1 is installed here, not version 2; otherwise the error below is reported (versions 1 and 2 are largely incompatible).
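If you want to confirm that everything installed correctly before moving on, a quick sanity check like the sketch below (not part of the project, just an optional check) prints the versions of the installed libraries:

import torch
import tensorflow as tf
import onnxruntime
import face_alignment  # imported only to confirm the package is available

# Print versions to confirm the environment matches what the project expects.
print("torch:", torch.__version__)
print("tensorflow:", tf.__version__)        # should start with 1.15
print("onnxruntime:", onnxruntime.__version__)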

2. Download the model and data set

After downloading the code from GitHub, the directory structure looks like this:
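A rough sketch of the layout, reconstructed from the folders and files described below (the actual repository contains a few more files):

photo2cartoon/
├── dataset/            # training dataset
├── images/             # test photos and generated results
├── models/             # trained model files
├── utils/              # image processing, model structure, etc.
├── data_process.py     # dataset preprocessing
├── train.py            # train the model
└── test.py             # generate a cartoon portrait from a photo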

Here is a brief introduction to what the related folders and files are for.

Folders:

dataset: stores the training dataset

images: stores the test images (the input photos used to check the model's output)

models: the trained models are saved in this directory

utils: contains the .py files for image processing, the model structure, and so on

.py files:

train.py: trains the model

test.py: tests the model (generates a cartoon portrait from a photo)

There are two main .py files to understand: how to train the model (train.py) and how to use it to generate a cartoon portrait (test.py).

The project provides download links for the pre-trained models and datasets.

These files (the pre-trained models, datasets, and so on) are not included in the repository itself, so they need to be downloaded separately. After downloading them from the addresses above, put them into the corresponding folders.
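For example, the pre-trained generator weights used by the commands later in this article are expected at the path below (the file name comes from those commands; the other downloads go into models/ and dataset/ in the same way):

models/photo2cartoon_weights.pt    # pre-trained weights referenced by train.py's --pretrained_weights option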

With that, the environment for this project is set up!

02. Generate the cartoon portrait

1. Train the model

(A trained model is already available for download. If you just want to use it directly, you can skip this step and go straight to generating the cartoon portrait.)

Once the runtime environment is set up, training can begin.

First, the data set is preprocessed:

python data_process.py --data_path YourPhotoFolderPath --save_path YourSaveFolderPath
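For example, if your own photos are in a folder called photos and you want the processed results written to dataset (both paths are hypothetical, used only for illustration), the call might look like this:

python data_process.py --data_path ./photos --save_path ./dataset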

Then start training:

python train.py --dataset photo2cartoon

If you have downloaded the pre-trained model, you can also continue training on top of it:

python train.py --dataset photo2cartoon --pretrained_weights models/photo2cartoon_weights.pt

After training, the models are saved in the models folder.

2. Test the model and generate the cartoon portrait

Place the original photo you want to cartoonize in the images folder:

Execute the generate command:

python test.py --photo_path ./images/lx.jpg --save_path ./images/cartoon_lx.png

The project provides two models, so the original post shows two versions of this generate command (pick one to run); the one above uses test.py. The original photo is lx.jpg, and the generated cartoon portrait is saved as cartoon_lx.png.
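If you want to compare the input photo and the generated result side by side, a small sketch like the one below works (it assumes OpenCV is installed via pip install opencv-python; the file names follow the command above):

import cv2

# Load the original photo and the generated cartoon portrait.
photo = cv2.imread("./images/lx.jpg")
cartoon = cv2.imread("./images/cartoon_lx.png")

# Resize the cartoon to the photo's height so the two can be stacked side by side.
h = photo.shape[0]
cartoon = cv2.resize(cartoon, (int(cartoon.shape[1] * h / cartoon.shape[0]), h))

# Save a side-by-side comparison image next to the originals.
cv2.imwrite("./images/compare_lx.png", cv2.hconcat([photo, cartoon]))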

The running results are as follows:

As you can see, a cartoon portrait was successfully generated from the real photo.

The whole process is fairly simple, and even readers who have never studied deep learning can get the code running!

03. Summary

This article showed you how to generate a cartoon portrait from a photo of a real person, and explained in detail how to set up the environment and get the code running.

Since many readers may not work in deep learning, this article does not go into the details of the code; it just shows you how to run it. Interested readers should give it a try!

Do try it! Do try it! Do try it!