
Deep Learning with Python

This article is one of a series of notes I wrote while studying Deep Learning with Python (2nd edition, by François Chollet). Starting with this post the notes are written in Markdown; you can check out the original .ipynb notebooks on GitHub or Gitee.

You can read the original text of the book online (in English) at this website. The book's author also provides the accompanying Jupyter notebooks.

This is one of the notes from Chapter 7. Advanced Deep-Learning Best Practices.

7.2 Inspecting and monitoring deep-learning models using Keras callbacks and TensorBoard


After launching a long training run with model.fit(), we can only wait for it to finish, with no way of knowing whether it is working properly and no way to control it. It's like throwing a paper airplane into the wind: it flies off to some uncertain destination. Rather than an uncontrollable paper airplane, what we want is an intelligent drone that can sense its environment, send data back to us, and navigate autonomously based on its current state. Tools such as Keras callbacks and TensorBoard can help turn that paper airplane into a smart drone.

Using callbacks to act on a model during training

When we train a model, we don't know in advance how many epochs it will need. We can only run it for plenty of epochs, manually find the optimal number of epochs, and then retrain the model for exactly that many epochs, which wastes a lot of time. What we would rather do is stop training automatically as soon as the model observes that the validation loss is no longer improving.

This can be done with a Keras callback. Keras provides a number of useful callbacks in the keras.callbacks module, and automatically stopping training is just one of them.

A callback is called by the model at various points during training. It has access to the state of the model and can take actions such as:

  • Model checkpointing: save the current weights of the model at different points during training
  • Early stopping: interrupt training when the validation loss is no longer improving
  • Dynamically adjusting parameter values during training: for example, adjusting the learning rate of the optimizer
  • Logging training and validation metrics during training: these metrics can be used to visualize the representations learned by the model
  • …

Using the built-in callbacks

There are many useful callbacks built into Keras, such as:

  • ModelCheckpoint: saves the model at certain points during training. It can be used to keep saving the model continuously, or to selectively save only the current best model;
  • EarlyStopping: monitors a target metric and interrupts training if the metric has not improved for the configured number of epochs;
  • ReduceLROnPlateau: reduces the learning rate when the validation loss has stopped improving (when a "loss plateau" is encountered).

Using these callbacks is simple:

from tensorflow import keras

callbacks_list = [
    # Save the weights after every epoch
    keras.callbacks.ModelCheckpoint(
        filepath='my_model.h5',   # path of the saved file
        monitor='val_loss',       # metric to monitor
        save_best_only=True,      # only keep the model that is best on monitor (skip saving if monitor did not improve)
    ),
    # Interrupt training when the metric stops improving
    keras.callbacks.EarlyStopping(
        monitor='acc',    # metric to monitor
        patience=10,      # interrupt training if monitor has not improved for more than `patience` epochs
    ),
    # Reduce the learning rate when the validation loss stops improving
    keras.callbacks.ReduceLROnPlateau(
        monitor='val_loss',  # metric to monitor
        factor=0.1,          # when triggered: learning rate *= factor
        patience=5,          # trigger when monitor has not improved for `patience` epochs
    ),
]

model.compile(optimizer='rmsprop', 
              loss='binary_crossentropy', 
              metrics=['acc'])    # EarlyStopping monitors 'acc', so it must be included in metrics

model.fit(x, y, 
          epochs=10, 
          batch_size=32, 
          callbacks=callbacks_list,        # use these callbacks during training
          validation_data=(x_val, y_val))  # the callbacks monitor validation metrics, so validation data must be provided

Write your own callback function

If Keras's built-in callbacks don't do what you need, you can write your own callback.

Write your own callback by subclassing keras.callbacks.Callback. Much like writing a game script, you implement methods in this subclass that are then called at specific points during training:

Method            When it is called
on_epoch_begin    at the start of every epoch
on_epoch_end      at the end of every epoch
on_batch_begin    just before each batch is processed
on_batch_end      just after each batch is processed
on_train_begin    at the start of training
on_train_end      at the end of training

These methods all take a logs argument (a dict) containing information about the previous batch, epoch, or training run: training metrics, validation metrics, and so on (a small sketch that uses logs follows the list below).

Inside these methods, you can also access:

  • self.model: the model instance that is calling the callback;
  • self.validation_data: the validation data passed to fit.
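
As a small sketch (my own example, not from the book), here is a custom callback that simply prints the metrics it finds in logs at the end of every epoch; the metric names (loss, acc, val_loss, ...) are whatever compile() and fit() produce:

from tensorflow import keras

class MetricsPrinter(keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        # logs maps metric names to their values for the epoch that just ended
        formatted = ', '.join(f'{name}: {value:.4f}' for name, value in logs.items())
        print(f'Epoch {epoch + 1} finished - {formatted}')

# Usage: model.fit(x, y, epochs=10, callbacks=[MetricsPrinter()])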

As a fuller example, here is a custom callback that, at the end of every epoch, saves the activations computed by every layer of the model on the first sample of the validation set:

from tensorflow import keras
import numpy as np

class ActivationLogger(keras.callbacks.Callback):
    def set_model(self, model):  # called by the parent model before training, to tell the callback which model is calling it
        self.model = model
        layer_outputs = [layer.output for layer in model.layers]
        self.activations_model = keras.models.Model(model.input, layer_outputs)  # model instance that returns the activations of every layer

    def on_epoch_end(self, epoch, logs=None):
        # Note: in recent versions of tf.keras, validation_data may no longer be set on callbacks
        # automatically; in that case you have to assign it to the callback yourself.
        if self.validation_data is None:
            raise RuntimeError('Requires validation_data.')
        validation_sample = self.validation_data[0][0:1]  # first sample of the validation set
        activations = self.activations_model.predict(validation_sample)
        with open(f'activations_at_epoch_{epoch}.npz', 'wb') as f:  # np.savez needs a binary-mode file
            np.savez(f, activations)
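To use this callback, pass an instance to fit() together with validation data. This is a sketch with placeholder data names (x, y, x_val, y_val); depending on your Keras version you may also need to set the callback's validation_data attribute yourself, as noted in the code above:

activation_logger = ActivationLogger()

model.fit(x, y,
          epochs=10,
          batch_size=32,
          validation_data=(x_val, y_val),
          callbacks=[activation_logger])

# After every epoch a file such as activations_at_epoch_0.npz is written,
# which can be inspected later with np.load.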

Introduction to TensorBoard: A visualization framework for TensorFlow

To build better models, besides thinking about the architecture and writing the code, we also need information about the model: we want to understand what is going on inside it during training and use that information to decide how to improve it.

The thinking happens in your head, and writing the model is easy with the Keras API; for understanding the model we can turn to TensorBoard. TensorBoard is a browser-based visualization tool that ships with TensorFlow and lets you visually monitor what is happening inside the model during training.

TensorBoard has the following features:

  • Visually monitor metrics during training
  • Visualize the model architecture
  • Visualize histograms of activations and gradients
  • Explore embeddings in three dimensions

We demonstrate the use of TensorBoard by training a one-dimensional convolutional neural network on an IMDB sentiment analysis task:

from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing import sequence

max_features = 2000
max_len = 500

(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
x_train = sequence.pad_sequences(x_train, maxlen=max_len)
x_test = sequence.pad_sequences(x_test, maxlen=max_len)

model = keras.models.Sequential()
model.add(layers.Embedding(max_features, 128,
                           input_length=max_len,
                           name='embed'))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.MaxPool1D(5))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.GlobalMaxPooling1D())

model.add(layers.Dense(1))

model.summary()

model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['acc'])

To use the TensorBoard, you need to do some preparation before you start training. First, create a directory for the log files needed by the TensorBoard and start the TensorBoard service. In the shell:

$ mkdir my_log_dir

Or, in the Jupyter Notebook:

%mkdir my_log_dir

Then, instantiate a TensorBoard callback function:

import tensorflow as tf

tensorboard_callback = tf.keras.callbacks.TensorBoard(
    log_dir='my_log_dir',  # directory where the log files are written
    histogram_freq=1,      # record activation histograms every histogram_freq epochs
    embeddings_freq=1,     # record embedding data every embeddings_freq epochs
)
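A practical detail (common practice, not something the book requires): if you run several experiments, give each run its own subdirectory under the log directory, for example by appending a timestamp, so TensorBoard can show the runs side by side:

import datetime
import tensorflow as tf

run_log_dir = 'my_log_dir/' + datetime.datetime.now().strftime('%Y%m%d-%H%M%S')
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=run_log_dir, histogram_freq=1)
# Point TensorBoard at the parent directory: tensorboard --logdir=my_log_dir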

Finally, use this callback for training:

history = model.fit(x_train, y_train, 
                    epochs=20, 
                    batch_size=128, 
                    validation_split=0.2, 
                    callbacks=[tensorboard_callback])
Epoch 1/20
157/157 [==============================] - 25s 156ms/step - loss: 0.6376 - acc: 0.6424 - val_loss: 0.7053 - val_acc: 0.7210
...

Once training has started (you don't have to wait for it to finish), you can start the TensorBoard service:

$ tensorboard --logdir=my_log_dir

Or in Jupyter Notebook:

%load_ext tensorboard
%tensorboard --logdir=my_log_dir

Now you can go to http://localhost:6006 in your browser and watch TensorBoard's visualization of the training process.

  • On the Scalars tab you can see the accuracy and loss curves during training. This is the same information we used to plot with plt after training, except that in TensorBoard you can refresh it at any time instead of waiting for training to finish.
  • The Graphs tab shows a visualization of the low-level TensorFlow operation graph behind the Keras model. This underlying graph is much more complex than our Keras model; it is exactly what Keras simplifies away for us, so that we never have to touch those complicated details and the workflow stays simple. If you want a graph of the Keras model itself instead, you can use keras.utils.plot_model:
import tensorflow as tf

tf.keras.utils.plot_model(model, show_shapes=True, to_file='model.png')
# show_shapes=True displays the shapes of the input and output tensors of each layer

  • On the Histograms tab there are histograms of the activation values of each layer;
  • On the Projector tab you can explore the spatial relationships of the word embeddings for the 2,000 words in our vocabulary. This "projection" is obtained by reducing the 128-dimensional embedding space learned by the Embedding layer to 2 or 3 dimensions with an algorithm such as PCA. If you are curious what a particular dot means, click on it, note its index, and then use the following code to recover the word:
index_word = {v: k for k, v in imdb.get_word_index().items()}

def show_word_of_index(idx):  # idx: the index of the dot you clicked on
    # Note: imdb.load_data() offsets word indices by 3 by default (0, 1 and 2 are reserved
    # for padding/start/unknown), so the mapping here may be off by that offset.
    print(index_word[idx])

show_word_of_index(123)

This prints a word, such as "ever".