Contents

Save the model

Load the previously trained model and continue training

About the order in which compile and load_model() are used


Save the model

We take MNIST handwritten digit recognition as an example.

```python
import numpy as np
from keras.datasets import mnist
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD

# Load the data
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# print('x_shape:', x_train.shape)
# print('y_shape:', y_train.shape)
# (60000, 28, 28) -> (60000, 784), scaled to [0, 1]
x_train = x_train.reshape(x_train.shape[0], -1) / 255.0
x_test = x_test.reshape(x_test.shape[0], -1) / 255.0
# One-hot encode the labels
y_train = np_utils.to_categorical(y_train, num_classes=10)
y_test = np_utils.to_categorical(y_test, num_classes=10)

# Create the model: 784 input neurons, 10 output neurons
model = Sequential([
    Dense(units=10, input_dim=784, bias_initializer='one', activation='softmax')
])

# Define the optimizer
sgd = SGD(lr=0.2)

# Define the optimizer, loss function and metrics
model.compile(optimizer=sgd, loss='mse', metrics=['accuracy'])

# Train the model
model.fit(x_train, y_train, batch_size=64, epochs=5)

# Evaluate the model
loss, accuracy = model.evaluate(x_test, y_test)
print('\ntest loss', loss)
print('accuracy', accuracy)

# Save the model as an HDF5 file (requires `pip install h5py`)
model.save('model.h5')
```

Load the previously trained model and continue training

```python
import numpy as np
from keras.datasets import mnist
from keras.utils import np_utils
from keras.models import Sequential, load_model, model_from_json
from keras.layers import Dense
from keras.optimizers import SGD

# Load the data
(x_train, y_train), (x_test, y_test) = mnist.load_data()
print('x_shape:', x_train.shape)
print('y_shape:', y_train.shape)
# (60000, 28, 28) -> (60000, 784), scaled to [0, 1]
x_train = x_train.reshape(x_train.shape[0], -1) / 255.0
x_test = x_test.reshape(x_test.shape[0], -1) / 255.0
y_train = np_utils.to_categorical(y_train, num_classes=10)
y_test = np_utils.to_categorical(y_test, num_classes=10)

# Load the model saved in the previous step
model = load_model('model.h5')

# Evaluate the loaded model
loss, accuracy = model.evaluate(x_test, y_test)
print('\ntest loss', loss)
print('accuracy', accuracy)

# Continue training
model.fit(x_train, y_train, batch_size=64, epochs=2)

# Evaluate again
loss, accuracy = model.evaluate(x_test, y_test)
print('\ntest loss', loss)
print('accuracy', accuracy)

# Save and load only the weights
model.save_weights('my_model_weights.h5')
model.load_weights('my_model_weights.h5')

# Save and load only the network structure
json_string = model.to_json()
model = model_from_json(json_string)
print(json_string)
```

About the order in which compile and load_model() are used

This section addresses whether compile() needs to be called before fit(), evaluate(), and predict(). To figure that out, we first have to understand what compile() actually does in our program.

What does compile() do?

compile() defines the loss function, the optimizer, and the metrics. It has nothing to do with the weights: compiling does not affect the weights, and it does not undo any previous training.

compile() is required if we want to train the model (fit()) or evaluate it (evaluate()), because training uses the loss function and the optimizer, and evaluation uses the metrics. If we only want to predict(), there is no need to compile the model.
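A minimal sketch of this (using the TensorFlow-bundled Keras API and toy random data instead of MNIST): predict() works on an uncompiled model, while evaluate() only works after compile().

```python
import numpy as np
from tensorflow import keras

# A small uncompiled model, same shape as the MNIST example: 784 -> 10
model = keras.Sequential([
    keras.Input(shape=(784,)),
    keras.layers.Dense(10, activation='softmax'),
])

x = np.random.rand(5, 784).astype('float32')

# predict() needs no compile(): it only runs the forward pass
preds = model.predict(x, verbose=0)
print(preds.shape)  # (5, 10)

# fit()/evaluate() need the loss/optimizer/metrics, so compile() first
model.compile(optimizer='sgd', loss='mse', metrics=['accuracy'])
y = keras.utils.to_categorical(np.random.randint(0, 10, 5), num_classes=10)
loss, accuracy = model.evaluate(x, y, verbose=0)
```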

Do we need to compile more than once?

Only if we want to change one of these: the loss function, the optimizer / learning rate, or the metrics.

Or if we load a model that hasn't been compiled yet, or whose save/load method did not preserve the compile configuration.
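For example (a sketch with the TensorFlow-bundled Keras; the file name is illustrative): to_json() and save_weights() do not store the compile configuration, so the rebuilt model must be compiled again before fit() or evaluate(), even though its predictions already match the original.

```python
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(784,)),
    keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='sgd', loss='mse', metrics=['accuracy'])

# Architecture and weights are saved separately; the compile config is not
json_string = model.to_json()
model.save_weights('my_model.weights.h5')

rebuilt = keras.models.model_from_json(json_string)
rebuilt.load_weights('my_model.weights.h5')

# rebuilt carries the same weights but is NOT compiled yet,
# so compile it again before fit()/evaluate()
rebuilt.compile(optimizer='sgd', loss='mse', metrics=['accuracy'])

x = np.random.rand(4, 784).astype('float32')
p_original = model.predict(x, verbose=0)
p_rebuilt = rebuilt.predict(x, verbose=0)
```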

What happens if we compile again?

Optimizer state will be lost if the model is compiled again.

This means that training will suffer a little at the beginning, until the optimizer re-adapts (learning-rate schedules, momentum buffers, etc.). The weights themselves are not damaged at all (unless the initial learning rate is so large that the first training step wildly changes the finely tuned weights).
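A quick check of that claim (a sketch with the TensorFlow-bundled Keras and toy data): recompiling leaves the trained weights identical; only the optimizer's internal state (here the SGD momentum buffers) starts fresh.

```python
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(784,)),
    keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer=keras.optimizers.SGD(learning_rate=0.2, momentum=0.9),
              loss='mse')

# One short training run builds up momentum state in the optimizer
x = np.random.rand(8, 784).astype('float32')
y = keras.utils.to_categorical(np.random.randint(0, 10, 8), num_classes=10)
model.fit(x, y, epochs=1, verbose=0)

w_before = [w.copy() for w in model.get_weights()]

# Recompile with a fresh optimizer: the momentum buffers are reset...
model.compile(optimizer=keras.optimizers.SGD(learning_rate=0.2, momentum=0.9),
              loss='mse')

# ...but the weights are untouched
w_after = model.get_weights()
```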

 


Reprinted from www.cnblogs.com/LXP-Never/p…