When using TensorFlow, sometimes we need to load more than one model. How do we load multiple models?

Original text: bretahajek.com/2017/04/imp…


There is a lot to say about TensorFlow, but this time I will only show how to import a trained model (graph), because I ran into the problem of not being able to import a second model and use it alongside the first. Importing is also quite slow, so I don't want to do it twice; on the other hand, it isn't practical to put everything into a single model.

In this tutorial, I’ll show you how to save and load models and, more importantly, how to load multiple models.

Load the TensorFlow model

Before loading multiple models, let's first review how to load a single model. The official documentation: www.tensorflow.org/programmers…

First, we need to create, train, and save a model. I won't go into too much detail here; just focus on how the model is saved, and don't forget to name each operation.

The code to create, train, and save the model is as follows:

import tensorflow as tf
## Linear Regression ###
# Input placeholders
x = tf.placeholder(tf.float32, name='x')
y = tf.placeholder(tf.float32, name='y')
# Model parameters (the weights of the model)
W1 = tf.Variable([0.1], dtype=tf.float32)
W2 = tf.Variable([0.1], dtype=tf.float32)
W3 = tf.Variable([0.1], dtype=tf.float32)
b = tf.Variable([0.1], dtype=tf.float32)
# Output of the model
linear_model = tf.identity(W1 * x + W2 * x**2 + W3 * x**3 + b,
                           name='activation_opt')

# Loss defines the Loss function
loss = tf.reduce_sum(tf.square(linear_model - y), name='loss')
# Optimizer and training step
optimizer = tf.train.AdamOptimizer(0.001)
train = optimizer.minimize(loss, name='train_step')

# Remember the output operation for later application
# by adding it to a collection for easy access.
# This is not required if you NAME your output operation.
tf.add_to_collection("activation", linear_model)

## Start the session ##
sess = tf.Session()
sess.run(tf.global_variables_initializer())
# CREATE SAVER
saver = tf.train.Saver()

# Sample training data (placeholder values; replace with your own dataset)
data = [0., 1., 2., 3.]
expected = [0., 3., 14., 39.]  # roughly x + x**2 + x**3

# Training loop
for i in range(10000):
    sess.run(train, {x: data, y: expected})
    if i % 1000 == 0:
        # You can also save checkpoints using global_step variable
        saver.save(sess, "models/model_name", global_step=i)

# SAVE TensorFlow graph into path models/model_name
saver.save(sess, "models/model_name")

Notice the first important point: naming variables and operations. This is so that you can retrieve the specific weights and operations after the model is loaded; unnamed variables get automatic names like "Placeholder_1". In more complex models it is a good idea to use scopes, but I won't expand on that here.
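To see what automatic naming looks like in practice, here is a minimal sketch. It uses the tf.compat.v1 API so it also runs under TensorFlow 2 (an assumption; with TensorFlow 1.x you can simply use import tensorflow as tf):

```python
# Minimal sketch: how TensorFlow names placeholders automatically vs. explicitly.
# Uses tf.compat.v1 so it also runs under TensorFlow 2 (assumption).
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

g = tf.Graph()
with g.as_default():
    unnamed = tf.placeholder(tf.float32)          # gets an automatic name
    named = tf.placeholder(tf.float32, name='x')  # explicit name

print(unnamed.op.name)  # 'Placeholder'
print(named.op.name)    # 'x'
```

A second unnamed placeholder in the same graph would be auto-named "Placeholder_1", which is why relying on automatic names quickly becomes fragile.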

In any case, the point is this: **to be able to call weights or operations after the model is loaded, you must name them or put them in a collection.**

When you save the model, the folder you specified should contain files such as model_name.index, model_name.meta, and model_name.data-00000-of-00001, along with a checkpoint file. If you saved checkpoints with global_step, there will also be file names such as model_name-1000, where the number is the value of the global_step variable, i.e. the training iteration at which the checkpoint was taken.

Now we are ready to load the model. Loading is simple; all we need are two functions, tf.train.import_meta_graph and saver.restore(), plus the correct path to the saved model. Also, if we want to use the model on a different machine, we need to set the parameter clear_devices=True.

We can then call the saved operations or weights by the names we gave them, or via the saved collection. If scopes were used, the scope name must be included as well. When feeding data, remember that input placeholders must be fed as {'PlaceholderName:0': data}; otherwise an error occurs.

The code for loading the model is as follows:

sess = tf.Session()

# Import graph from the path and recover session
saver = tf.train.import_meta_graph('models/model_name.meta', clear_devices=True)
saver.restore(sess, 'models/model_name')

# There are TWO options how to access the operation (choose one)
# FROM SAVED COLLECTION:
activation = tf.get_collection('activation')[0]
# BY NAME:
activation = tf.get_default_graph().get_operation_by_name('activation_opt').outputs[0]

# Use imported graph for data
# You have to feed data as {'x:0': data}
# Don't forget on ':0' part!
data = 50
result = sess.run(activation, {'x:0': data})
print(result)

Multiple models

The above describes how to load a single model, but how do we load multiple models?

If you load multiple models the way you load a single model, you get conflicting variables and it doesn't work. The cause is the default graph: the conflict occurs because we load all variables into the default graph used by the current session. When we create a Session, we can tell it to use a different graph via tf.Session(graph=my_graph). Therefore, to load multiple models, all we need to do is load each of them into its own graph and use them in separate sessions.
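The idea can be sketched with two trivial graphs; the constants and names below are just for illustration (again using tf.compat.v1 so the sketch also runs under TensorFlow 2):

```python
# Sketch: identically named tensors do not collide when they live in separate
# graphs, each used by its own session. (tf.compat.v1 for TF 2 compatibility.)
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

g1, g2 = tf.Graph(), tf.Graph()
with g1.as_default():
    tf.constant(1.0, name='v')
with g2.as_default():
    tf.constant(2.0, name='v')  # same name, different graph: no conflict

sess1 = tf.Session(graph=g1)
sess2 = tf.Session(graph=g2)
print(sess1.run(g1.get_tensor_by_name('v:0')))  # 1.0
print(sess2.run(g2.get_tensor_by_name('v:0')))  # 2.0
```

Each session is bound to exactly one graph, so name lookups in one session can never accidentally resolve to tensors from the other model.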

Here, a custom class loads the model from a given path into its own local graph. The class also provides a run function that feeds input data through the loaded model. It is useful to me because I always put the model output into a collection or name it activation_opt, and name the input placeholder x. You can modify and extend the class according to the needs of your own application.

The code is as follows:

import tensorflow as tf

class ImportGraph():
    """ Importing and running isolated TF graph """
    def __init__(self, loc):
        # Create local graph and use it in the session
        self.graph = tf.Graph()
        self.sess = tf.Session(graph=self.graph)
        with self.graph.as_default():
            # Import saved model from location 'loc' into local graph
            saver = tf.train.import_meta_graph(loc + '.meta',
                                               clear_devices=True)
            saver.restore(self.sess, loc)
            # There are TWO options how to get activation operation
            # (choose one):
            # FROM SAVED COLLECTION:
            self.activation = tf.get_collection('activation')[0]
            # BY NAME:
            self.activation = self.graph.get_operation_by_name('activation_opt').outputs[0]

    def run(self, data):
        """ Running the activation operation previously imported """
        # The 'x' corresponds to name of input placeholder
        return self.sess.run(self.activation, feed_dict={"x:0": data})
      
      
### Using the class ###
# Test sample
data = 50         # random data
model = ImportGraph('models/model_name')
result = model.run(data)
print(result)
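To show the class actually isolating two models, here is an end-to-end sketch. It saves two tiny models and then loads both; the save_scaler_model helper and the temporary paths are assumptions for illustration, not part of the original code, and the ImportGraph class from above is repeated so the sketch is self-contained:

```python
import os
import tempfile
import tensorflow.compat.v1 as tf  # tf.compat.v1 so it also runs under TF 2 (assumption)
tf.disable_eager_execution()

class ImportGraph:
    """ Importing and running isolated TF graph (same class as above) """
    def __init__(self, loc):
        self.graph = tf.Graph()
        self.sess = tf.Session(graph=self.graph)
        with self.graph.as_default():
            saver = tf.train.import_meta_graph(loc + '.meta', clear_devices=True)
            saver.restore(self.sess, loc)
            self.activation = self.graph.get_operation_by_name('activation_opt').outputs[0]

    def run(self, data):
        return self.sess.run(self.activation, feed_dict={'x:0': data})

def save_scaler_model(loc, factor):
    """Hypothetical helper: saves a model computing factor * x as 'activation_opt'."""
    g = tf.Graph()
    with g.as_default():
        x = tf.placeholder(tf.float32, name='x')
        w = tf.Variable([factor], dtype=tf.float32, name='w')
        tf.identity(w * x, name='activation_opt')
        init = tf.global_variables_initializer()
        saver = tf.train.Saver()
    with tf.Session(graph=g) as sess:
        sess.run(init)
        saver.save(sess, loc)

tmp = tempfile.mkdtemp()
save_scaler_model(os.path.join(tmp, 'double'), 2.0)
save_scaler_model(os.path.join(tmp, 'triple'), 3.0)

# Both models coexist, each in its own graph and session
model_a = ImportGraph(os.path.join(tmp, 'double'))
model_b = ImportGraph(os.path.join(tmp, 'triple'))
print(model_a.run(10.0))  # [20.]
print(model_b.run(10.0))  # [30.]
```

Because every ImportGraph instance owns a private graph and session, the identically named x and activation_opt nodes in the two saved models never conflict.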

Conclusion

Loading multiple models is not too difficult once you understand the mechanics of TensorFlow. The solution above may not be perfect, but it is simple and fast. Finally, sample code summarizing the whole process is provided as a Jupyter Notebook at the following address:

Gist.github.com/Breta01/f20…


Finally, github addresses for several code examples in this article:

  1. Code for creating, training, and saving a TensorFlow model.
  2. Importing and using a TensorFlow graph (model).
  3. Class for importing multiple TensorFlow graphs.
  4. Example of importing multiple TensorFlow models.

Welcome to follow my WeChat official account, Machine Learning and Computer Vision, or scan the QR code below. Leave a message in the background to share your suggestions and opinions with me, point out any mistakes in the article, and let's communicate, learn, and make progress together!

Recommended reading

1. Introduction to Machine Learning (1)

2. Introduction to Machine Learning (2)

3. Getting to know GAN for the first time

4. GAN Learning Series 2: The Origin of GAN

5. TFGAN, Google’s open source GAN library