This post introduces a GitHub project that visualizes the GAN training process. The project link is:

Github.com/EvgenyKashi…

The following is a brief overview of the code, its basic functionality, and the results.

Preface

This is a simple implementation for training and visualizing 2D GANs. After spending dozens of hours training StyleGAN, the author wanted a way to build intuition about certain hyperparameters through rapid iteration (each run takes about 30 s), though it is not certain that these intuitions transfer to larger GAN models. The project was primarily inspired by poloclub.github.io/ganlab/, but with the hope that people could run the code themselves in Colab.

Visualization

The visualization of the training dynamics includes:

  • The distribution of the real data (black dots)
  • Fake data generated by the G network from fixed input noise
  • The D network's decision boundary over the entire input space, with colors representing its output probability (red for a high probability of "real" data, blue for low)
  • Green arrows at each generated data point, showing the direction that maximizes the D network's output
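The decision-boundary coloring described above amounts to evaluating D on a dense 2D grid. Here is a minimal, self-contained sketch of that idea; the `discriminator_prob` function below is an analytic stand-in (the real project uses a trained MLP for D), and all names are illustrative:

```python
import numpy as np

def discriminator_prob(points):
    """Toy stand-in for D: probability that each 2D point is 'real'.

    Pretends the real data lies on a unit circle, so the probability
    peaks near radius 1 and falls off elsewhere.
    """
    r = np.linalg.norm(points, axis=-1)
    return 1.0 / (1.0 + np.exp(4.0 * (r - 1.0) ** 2 - 1.0))

def decision_grid(n=100, lim=2.0):
    """Evaluate D on an n x n grid covering [-lim, lim]^2."""
    xs = np.linspace(-lim, lim, n)
    xx, yy = np.meshgrid(xs, xs)
    pts = np.stack([xx.ravel(), yy.ravel()], axis=-1)
    probs = discriminator_prob(pts).reshape(n, n)
    return xx, yy, probs

xx, yy, probs = decision_grid()
# The probs array can then be drawn as the red/blue background, e.g. with
# matplotlib: plt.pcolormesh(xx, yy, probs, cmap='RdBu_r').
```

The green arrows correspond to the gradient of D's output with respect to each generated point, which can be obtained the same way via autograd or finite differences on this grid.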

Visual results

Here are some visualizations:

Training G and D without batch-norm

Training G and D with batch-norm

Visualization of evaluation metrics

The first row shows the training process (with fixed input noise) and various evaluation metrics (the gradient norms of G and D, the losses, and D's outputs on real and fake data). The second row shows the input noise and the activations of the G network's middle layer (mapped to 2 dimensions).
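The gradient-norm metric mentioned above is easy to track in a training loop. Below is a minimal sketch assuming PyTorch; the network sizes and the `grad_norm` helper are illustrative, not the project's actual code:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Tiny 2D generator and discriminator (sizes are illustrative).
G = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))
D = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))

def grad_norm(model):
    """Total L2 norm over all parameter gradients -- one tracked metric."""
    return torch.sqrt(sum((p.grad ** 2).sum()
                          for p in model.parameters()
                          if p.grad is not None)).item()

bce = nn.BCEWithLogitsLoss()
real = torch.randn(64, 2)    # stand-in for a batch of real 2D data
noise = torch.randn(64, 2)   # fixed input noise

# One discriminator step: push D(real) toward 1 and D(fake) toward 0.
d_loss = bce(D(real), torch.ones(64, 1)) + \
         bce(D(G(noise).detach()), torch.zeros(64, 1))
D.zero_grad()
d_loss.backward()
d_norm = grad_norm(D)

# One generator step: push D(G(noise)) toward 1.
g_loss = bce(D(G(noise)), torch.ones(64, 1))
G.zero_grad()
g_loss.backward()
g_norm = grad_norm(G)

print(f"D grad norm: {d_norm:.4f}, G grad norm: {g_norm:.4f}")
```

Logging these two numbers (plus the losses and D's mean outputs on real vs. fake batches) at every iteration is enough to reproduce the curves in the first row.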

The G network's transformation of the input noise

Tunable options

  • Distribution of the input data
  • Batch size and number of training epochs
  • Learning rates for D and G (probably the most important)
  • Optimizers for D and G
  • Distribution of the input noise
  • Number of neurons and activation functions
  • Loss function (BCE, L2)
  • Weight initialization
  • Regularization (batch-norm, dropout, weight decay)
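Most of these knobs map directly onto how the two small networks and their optimizers are built. As a hedged sketch (assuming PyTorch; `make_mlp` and all its parameters are illustrative names, not the project's API):

```python
import torch
import torch.nn as nn

def make_mlp(in_dim, out_dim, hidden=16, activation=nn.ReLU,
             batch_norm=False, dropout=0.0):
    """Build a small MLP whose arguments mirror the tunable options:
    neuron count, activation function, batch-norm, and dropout."""
    layers = [nn.Linear(in_dim, hidden)]
    if batch_norm:
        layers.append(nn.BatchNorm1d(hidden))
    layers.append(activation())
    if dropout > 0:
        layers.append(nn.Dropout(dropout))
    layers.append(nn.Linear(hidden, out_dim))
    return nn.Sequential(*layers)

# 2D noise in, 2D fake data out; D maps a 2D point to a single logit.
G = make_mlp(2, 2, batch_norm=True)
D = make_mlp(2, 1, dropout=0.3)

# Separate learning rates for D and G (noted above as probably the most
# important option); weight decay on D as one regularization choice.
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4, weight_decay=1e-5)

loss_fn = nn.BCEWithLogitsLoss()  # swap in nn.MSELoss() for the L2 loss
```

Because every run finishes in seconds, it is practical to sweep one option at a time and watch how the decision boundary and the generated points react.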

Only the CPU is used, since it is already fast enough for these visualization experiments.

Future work

  • Add more loss functions
  • Add more regularization techniques

You can visit GitHub directly to view the project code, or reply "play_gans" in the backend of my official account to obtain it.

You are welcome to follow my WeChat official account, "Growth of an Algorithm Ape," or scan the QR code below, so we can communicate, learn, and improve together!