
Genetic algorithms are widely used in machine learning today. They can not only replace gradient descent for optimizing parameters, but can also search a neural network's parameter space directly. This article looks at how to use a genetic algorithm to optimize the weight parameters of a PyTorch model during training.

Summary of steps

  • Create the PyTorch model
  • Instantiate a pygad.torchga.TorchGA class
  • Prepare the training data
  • Define the evaluation function
  • Instantiate a pygad.GA class
  • Run the genetic algorithm

Create the PyTorch model

import torch

# Define a neural network that contains a hidden layer and then an output layer
input_layer = torch.nn.Linear(3, 5)
relu_layer = torch.nn.ReLU()
output_layer = torch.nn.Linear(5, 1)

model = torch.nn.Sequential(input_layer,
                            relu_layer,
                            output_layer)

pygad.torchga.TorchGA Class

The pygad.torchga module has a class called TorchGA that is used to create an initial population for a genetic algorithm from a PyTorch model. The constructor, methods, and attributes of this class are described below.

The pygad.torchga.TorchGA class constructor takes the following arguments:

  • model: an instance of a PyTorch model
  • num_solutions: the number of solutions in the population; each solution holds a different set of model parameters

Instance attributes

All parameters passed to the pygad.torchga.TorchGA class constructor are stored as instance attributes, and a new attribute named population_weights is added.

Here is a list of all instance attributes:

  • model
  • num_solutions
  • population_weights: a nested list holding the weights of all solutions in the population

Methods in the TorchGA class

Let's look at the methods and helper functions provided by the pygad.torchga module and its TorchGA class.

create_population()

The create_population() method creates the initial population of the genetic algorithm as a list of solutions, where each solution holds a different set of model parameters. The nested list is assigned to the instance's population_weights attribute.
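As a minimal sketch (using the model defined above; the comments describe the expected sizes based on the class description, not captured library output):

import pygad.torchga

torch_ga = pygad.torchga.TorchGA(model=model, num_solutions=10)

# One sub-list per solution; each holds that solution's model parameters in flattened form
print(len(torch_ga.population_weights))     # 10 solutions
print(len(torch_ga.population_weights[0]))  # number of trainable parameters in the model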

pygad.torchga.model_weights_as_vector()

The model_weights_as_vector() function takes a single argument, model, which is a PyTorch model. It returns a vector containing all of the model's weights. The reason for representing the model weights as a vector is that the genetic algorithm expects all parameters of a solution to be in the form of a one-dimensional vector.

The function's parameter:

  • model: the PyTorch model

It returns a one-dimensional vector containing the model's weights.
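For example, a minimal sketch using the model defined above (the comment about the vector length is an inference from the model architecture, not library output):

weights_vector = pygad.torchga.model_weights_as_vector(model=model)
print(len(weights_vector))  # one entry per trainable parameter (26 for the 3-5-1 model above)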

pygad.torchga.model_weights_as_dict()

The model_weights_as_dict() function takes the following arguments:

  • model: the PyTorch model
  • weights_vector: the model parameters in vector form

The function returns a dictionary with the same structure as the one returned by the model's state_dict() method, holding the model's weights. The returned dictionary can be passed to the load_state_dict() method to load the parameters into the PyTorch model.
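As a minimal sketch (here solution is assumed to be a weight vector produced by the genetic algorithm):

weights_dict = pygad.torchga.model_weights_as_dict(model=model,
                                                   weights_vector=solution)
model.load_state_dict(weights_dict)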

pygad.torchga.predict()

The predict() function makes predictions using a given solution. It accepts the following arguments:

  • model: the PyTorch model
  • solution: an evolved solution (a set of model parameters)
  • data: the input data to predict on

It returns the predictions for the data samples.
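A minimal usage sketch (solution is assumed to be an individual taken from the genetic algorithm's population, and data_inputs is the training data prepared below):

predictions = pygad.torchga.predict(model=model,
                                    solution=solution,
                                    data=data_inputs)
print(predictions.detach().numpy())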

With these APIs in mind, let's walk through the code step by step, following the general steps listed at the beginning of this article for training a PyTorch model whose parameters are updated by a genetic algorithm.

Create the Pytorch model

The first step is to create a PyTorch model. The following code uses the PyTorch API to build a simple model with two layers: a hidden layer with 5 neurons and an output layer with a single neuron.

import torch

input_layer = torch.nn.Linear(3, 5)
relu_layer = torch.nn.ReLU()
output_layer = torch.nn.Linear(5, 1)

model = torch.nn.Sequential(input_layer,
                            relu_layer,
                            output_layer)

Instantiate the pygad.torchga.TorchGA class

The second step is to create an instance of the pygad.torchga.TorchGA class, building a population of 10 solutions, where each solution holds a complete set of weight parameters for the neural network above.

import pygad.torchga

torch_ga = pygad.torchga.TorchGA(model=model,
                                 num_solutions=10)

The TorchGA constructor takes two arguments: the PyTorch model and num_solutions, the number of solutions in the population. The length of each solution vector equals the number of trainable parameters in the model.
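To see how long each solution vector will be for this model, we can count its trainable parameters with the standard PyTorch API (a small sketch; the value 26 follows from the 3-5-1 architecture above):

num_params = sum(p.numel() for p in model.parameters())
print(num_params)  # 26 = (3*5 + 5) + (5*1 + 1)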

Prepare training data sets

The third step is to prepare the inputs (sample features) and outputs (sample labels) of the training data. In this example there are 4 samples; each sample has 3 features as input and 1 value as its label.

import torch

# Data inputs
data_inputs = torch.tensor([[0.02, 0.1, 0.15],
                            [0.7, 0.6, 0.8],
                            [1.5, 1.2, 1.7],
                            [3.2, 2.9, 3.1]])

# Data outputs
data_outputs = torch.tensor([[0.1],
                             [0.6],
                             [1.3],
                             [2.5]])


Define the evaluation function

The fourth step is to define the evaluation (fitness) function, which must accept two parameters: the individual (solution) and the individual's index within the population. The evaluation function below computes the mean absolute error (MAE) of the PyTorch model using the parameters held in the solution, and returns the reciprocal of the MAE as the fitness value.

loss_function = torch.nn.L1Loss()

def fitness_func(solution, sol_idx):
    global data_inputs, data_outputs, torch_ga, model, loss_function

    predictions = pygad.torchga.predict(model=model,
                                        solution=solution,
                                        data=data_inputs)

    abs_error = loss_function(predictions, data_outputs).detach().numpy() + 0.00000001  # small epsilon to avoid division by zero

    solution_fitness = 1.0 / abs_error

    return solution_fitness

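The fifth step is to instantiate the pygad.GA class, passing the number of generations, the number of parents selected for mating, the initial population (the weights created by TorchGA), and the fitness function. This is a minimal sketch mirroring the complete code at the end of this article (the complete code also passes an on_generation callback to print progress after each generation):

num_generations = 250       # number of generations to evolve
num_parents_mating = 5      # number of parents selected for mating in each generation
initial_population = torch_ga.population_weights  # initial population created by TorchGA

ga_instance = pygad.GA(num_generations=num_generations,
                       num_parents_mating=num_parents_mating,
                       initial_population=initial_population,
                       fitness_func=fitness_func)

The sixth and final step is to run the genetic algorithm by calling the run() method.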
ga_instance.run()

After PyGAD finishes running, the training progress can be visualized by calling the plot_fitness() method, which plots fitness against generation.

ga_instance.plot_fitness(title="PyGAD & PyTorch - Iteration vs. Fitness", linewidth=4)

The complete code

import torch
import pygad.torchga
import pygad

def fitness_func(solution, sol_idx):
    global data_inputs, data_outputs, torch_ga, model, loss_function

    # solution: an individual from the genetic algorithm's population,
    # i.e. one complete set of model weights in vector form
    predictions = pygad.torchga.predict(model=model,
                                        solution=solution,
                                        data=data_inputs)

    # Compute the error (a small epsilon avoids division by zero)
    abs_error = loss_function(predictions, data_outputs).detach().numpy() + 0.00000001

    # The smaller the error, the higher the fitness
    solution_fitness = 1.0 / abs_error

    return solution_fitness

def callback_generation(ga_instance):
    print("Generation = {generation}".format(generation=ga_instance.generations_completed))
    print("Fitness = {fitness}".format(fitness=ga_instance.best_solution()[1]))

# Create PyTorch model
input_layer = torch.nn.Linear(3, 5)
relu_layer = torch.nn.ReLU()
output_layer = torch.nn.Linear(5, 1)

# Define model
model = torch.nn.Sequential(input_layer,
                            relu_layer,
                            output_layer)

# Instantiate Pygad.torchga.torchga when initializing the population
torch_ga = pygad.torchga.TorchGA(model=model,
                                 num_solutions=10)
# Define loss function
loss_function = torch.nn.L1Loss()

# Data set inputs
data_inputs = torch.tensor([[0.02, 0.1, 0.15],
                            [0.7, 0.6, 0.8],
                            [1.5, 1.2, 1.7],
                            [3.2, 2.9, 3.1]])

# Data set outputs
data_outputs = torch.tensor([[0.1],
                             [0.6],
                             [1.3],
                             [2.5]])

num_generations = 250 # Number of generations (iterations)
num_parents_mating = 5 # Number of parents selected for mating in each generation
initial_population = torch_ga.population_weights # Initial population of network weights created by TorchGA

ga_instance = pygad.GA(num_generations=num_generations,
                       num_parents_mating=num_parents_mating,
                       initial_population=initial_population,
                       fitness_func=fitness_func,
                       on_generation=callback_generation)

ga_instance.run()

ga_instance.plot_fitness(title="PyGAD & PyTorch - Iteration vs. Fitness", linewidth=4)

# Return details of the best solution
solution, solution_fitness, solution_idx = ga_instance.best_solution()
print("Fitness value of the best solution = {solution_fitness}".format(solution_fitness=solution_fitness))
print("Index of the best solution : {solution_idx}".format(solution_idx=solution_idx))

# Make predictions based on the best individuals
predictions = pygad.torchga.predict(model=model,
                                    solution=solution,
                                    data=data_inputs)
print("Predictions : \n", predictions.detach().numpy())

abs_error = loss_function(predictions, data_outputs)
print("Absolute Error : ", abs_error.detach().numpy())

# Here are the results

Fitness value of the best solution = 74.17345574365933
Index of the best solution : 0
Predictions :
 [[0.09415314]
 [0.6399874 ]
 [1.2981992 ]
 [2.5062926 ]]
Absolute Error :  0.013481902