background

TensorFlow graphs and model loading/storage have been covered in detail before, but some readers may still find them unclear, so attached is an example of model loading and storage. The code is something I happened to come across and wrote down. The model is simple and much easier to write than the earlier NumPy version, which helps keep the focus on loading and storing the model.

parsing

Create a class instance to save the file: saver = tf.train.Saver()

tf.train.Saver() is a common class for saving models, graphs, and data. Its internal structure is explained in detail in the source code and has been covered in previous articles; this time we only look at how to use its methods in practice.

saver.save()

Source structure

```python
def save(self, sess, save_path, global_step=None, latest_filename=None,
         meta_graph_suffix="meta", write_meta_graph=True, write_state=True):
    ...

# Usage:
# saver = tf.train.Saver()
# saver.save(sess, checkpoint_dir + 'model55.ckpt', global_step=i+1)
# Note that model55.ckpt will be saved as multiple files
```

Common parameters:

1. sess: the session to be saved

2. save_path: the save path. Note that if you want to save under the code directory, do not prefix the path with '/', otherwise it points at the root directory

3. global_step: used when saving across multiple iterations; the checkpoint file names are suffixed with the step count

4. After saving, files such as model55.ckpt-50 and model55.ckpt-100 appear; the -50 and -100 suffixes come from global_step
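The naming rule above can be illustrated with a tiny pure-Python sketch (no TensorFlow required; `checkpoint_stem` is a hypothetical helper written for illustration, not a TensorFlow API):

```python
def checkpoint_stem(prefix, global_step=None):
    # saver.save(sess, prefix, global_step=step) names its files
    # starting with '<prefix>-<step>'; without a step, just '<prefix>'
    if global_step is None:
        return prefix
    return "%s-%d" % (prefix, global_step)

print(checkpoint_stem("save/model55.ckpt", 50))   # save/model55.ckpt-50
print(checkpoint_stem("save/model55.ckpt", 100))  # save/model55.ckpt-100
```

This is why, in the training loop later in this article, a save every `checkpoint_steps` iterations produces the -50 and -100 file groups.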

saver.restore()

Source structure

```python
def restore(self, sess, save_path):
    ...

# save_path example: save_path = checkpoint_dir + 'model55.ckpt'
# Code example: saver.restore(sess, ckpt.model_checkpoint_path)
```
  1. saver.restore() restores the graph, parameters, and so on into the given session. If you pass in a folder containing multiple model.ckpt file groups, the last saved ckpt file group is loaded by default.
  2. The ckpt file groups are ordered as follows: sorted by step, the group with the largest step is the latest; sorted by save time, the same group comes last.
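The "last saved group is the latest" rule can be sketched in plain Python (`latest_by_step` is a hypothetical helper for illustration; in real code, `tf.train.get_checkpoint_state` reads the latest path from the checkpoint file instead of comparing names):

```python
def latest_by_step(stems):
    # 'model55.ckpt-100' beats 'model55.ckpt-50' because its
    # global_step suffix is larger, i.e. it was saved later
    return max(stems, key=lambda s: int(s.rsplit("-", 1)[1]))

print(latest_by_step(["model55.ckpt-50", "model55.ckpt-100"]))  # model55.ckpt-100
```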

CKPT file

I’ve written about this before; the original explanation is worth reposting here.

A TensorFlow model is stored in files with the suffix .ckpt. Three files appear in the save folder, because TensorFlow keeps the structure of the computation graph separate from the parameter values on the graph.

The checkpoint file holds a list of all model files in a directory; it is automatically generated and maintained by the tf.train.Saver class, which records in it the file names of every TensorFlow model file it persists. When a saved TensorFlow model file is deleted, the file name corresponding to that model is also removed from the checkpoint file. The checkpoint content is in the format of the CheckpointState Protocol Buffer.
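Because the checkpoint file is human-readable text, its CheckpointState content can be parsed by hand. Below is a minimal sketch (a toy parser written for illustration, not the real protobuf text-format parser; the sample content mirrors the model55.ckpt example):

```python
def parse_checkpoint_state(text):
    # Returns (latest_path, all_paths) from text-format CheckpointState
    # content such as the 'checkpoint' file written by tf.train.Saver
    latest, paths = None, []
    for line in text.splitlines():
        key, _, value = line.partition(":")
        value = value.strip().strip('"')
        if key.strip() == "model_checkpoint_path":
            latest = value
        elif key.strip() == "all_model_checkpoint_paths":
            paths.append(value)
    return latest, paths

sample = '''model_checkpoint_path: "model55.ckpt-100"
all_model_checkpoint_paths: "model55.ckpt-50"
all_model_checkpoint_paths: "model55.ckpt-100"'''
print(parse_checkpoint_state(sample))
```

In real code you would call tf.train.get_checkpoint_state(checkpoint_dir), which does this parsing for you and returns an object with a model_checkpoint_path attribute.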

The model.ckpt.meta file preserves the structure of the TensorFlow computation graph, which can be understood as the network structure of the neural network. TensorFlow records the nodes of the computation graph, together with the metadata needed to run them, through a MetaGraph, which is defined by the MetaGraphDef Protocol Buffer. The serialized MetaGraphDef forms the first file that TensorFlow persists; its default suffix is .meta, and here the file model.ckpt.meta stores the MetaGraphDef data.

The model.ckpt file stores the values of every variable in TensorFlow. This file is stored in the SSTable format, which can roughly be regarded as a (key, value) list. The first row of the list in the model.ckpt file describes the file’s meta-information, such as the list of variables stored in the file. Each remaining row holds a fragment of a variable, whose information is defined by the SavedSlice Protocol Buffer. The SavedSlice type holds the name of the variable, information about the current fragment, and the value of the variable. TensorFlow provides the tf.train.NewCheckpointReader class to inspect the variables stored in a model.ckpt file; how to use tf.train.NewCheckpointReader is not covered in detail here.

CODE AND RUN

```python
import tensorflow as tf
import numpy as np
import os

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

x = tf.placeholder(tf.float32, shape=[None, 1])
# target function: y = 4 * x + 4
y = 4 * x + 4
w = tf.Variable(tf.random_normal([1], -1, 1))
b = tf.Variable(tf.zeros([1]))
y_predict = w * x + b

loss = tf.reduce_mean(tf.square(y - y_predict))
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)

isTrain = False
train_steps = 100
checkpoint_steps = 50
checkpoint_dir = 'save/'

saver = tf.train.Saver()  # defaults to saving all variables - in this case w and b
x_data = np.reshape(np.random.rand(10).astype(np.float32), (10, 1))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    if isTrain:
        for i in range(train_steps):
            sess.run(train, feed_dict={x: x_data})
            if (i + 1) % checkpoint_steps == 0:
                saver.save(sess, checkpoint_dir + 'model55.ckpt', global_step=i+1)
        print(sess.run(w))
        print(sess.run(b))
    else:
        ckpt = tf.train.get_checkpoint_state(checkpoint_dir)
        if ckpt and ckpt.model_checkpoint_path:
            saver.restore(sess, ckpt.model_checkpoint_path)
        else:
            pass
        print(sess.run(w))
        print(sess.run(b))

# Output:
# [3.994277]
# [4.00329876]
```

Finally

For more details, please click here