Google Development Days China 2018 was held recently in China. Unfortunately, I was stuck in Hefei due to circumstances beyond my control and could not attend, but some friends of mine were lucky enough to be there and brought back first-hand information about TensorFlow. Here is a look at the best practices for using TensorFlow in a production environment.

Google Brain software engineer Yifei Feng gave an excellent presentation entitled "Prototyping, Training, and Production with TensorFlow High-Level APIs".

Yifei Feng walked us through some of the changes in TensorFlow's new APIs and, most importantly, offered some suggestions on how to use TensorFlow.

Her advice can be summarized in six points:
  • Build the prototype with Eager mode

  • Use Datasets to process data

  • Use Feature Columns to extract features

  • Build the model with Keras

  • Use Canned Estimators

  • Package the model with SavedModel

Let’s take a look at each of these six areas in turn.

Build the prototype with Eager mode

As anyone in the field knows, static graphs are naturally efficient, but dynamic graphs bring a lot of convenience. In 2017, dynamic-graph frameworks were all the rage, so Google introduced tf.contrib.eager to meet the challenge.

What are the benefits of using Eager? Without it, debugging TensorFlow requires sess.run(); with Eager, we can print variables directly. And there is more to it than that: when building a model, you have to think very carefully about each Tensor's shape, and locating shape errors is difficult. With Eager, you can print a tensor's shape while building the network to confirm it is correct, which makes building networks much faster. In addition, Eager makes it more convenient to define custom operations and gradients.

Here's a simple little example. First install a build with Eager support using pip install tf-nightly (or pip install tf-nightly-gpu for the GPU version).

import tensorflow as tf
import tensorflow.contrib.eager as tfe

tfe.enable_eager_execution()

a = tf.constant([5], dtype=tf.int32)
for i in range(a):
    print(i)

With Eager enabled, the code above runs smoothly. Without it, you get a "'Tensor' object cannot be interpreted as an integer" error. On the downside, enabling Eager inevitably adds some overhead.

Use Datasets to process data

TensorFlow can read data in three ways: feeding, input pipelines, and preloading data into variables or constants. Datasets implements the second approach; it simplifies data input and improves the efficiency of data reading.

The components of Datasets are shown in the figure above. Among them:

  • Dataset: the base class for creating and transforming datasets;

  • TextLineDataset: reads lines from text files;

  • TFRecordDataset: reads records from TFRecord files;

  • FixedLengthRecordDataset: reads fixed-size records from binary files;

  • Iterator: provides a way to access the elements of a dataset one at a time.

For Datasets, we can use the methods provided by the subclasses above, or create a dataset directly from the base class with tf.data.Dataset.from_tensors() or tf.data.Dataset.from_tensor_slices().
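As a minimal sketch (the arrays, shapes and batch size here are made up for illustration, not taken from the talk), an in-memory Dataset can be built, transformed and iterated like this:

import numpy as np
import tensorflow as tf

# Build a Dataset from in-memory arrays, then shuffle, batch and repeat it.
features = np.random.rand(100, 4).astype(np.float32)
labels = np.random.randint(0, 3, size=100)

dataset = tf.data.Dataset.from_tensor_slices((features, labels))
dataset = dataset.shuffle(buffer_size=100).batch(16).repeat()

# An Iterator yields one batch of (features, labels) at a time.
iterator = dataset.make_one_shot_iterator()
next_batch = iterator.get_next()

with tf.Session() as sess:
    print(sess.run(next_batch))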

Use Feature Columns to extract features

Feature Columns are essentially a data structure for describing features, and they make it convenient to preprocess features before they are fed into the model. Take iris classification as an example: each column of the input data represents a different feature, such as petal length or sepal length, and we may want to process individual columns (or all of them) in different ways. Feature Columns make this easy.

As shown in the figure above, Feature Columns form a structured description of the input dataset. This makes it easier to process each column of data and makes the code more readable.
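A rough sketch of what that description looks like in code (the column names and bucket boundaries are made up for illustration):

import tensorflow as tf

# Describe each input column: raw numeric features, plus a bucketized
# version of petal_length that discretizes the raw value before it
# reaches the model.
sepal_length = tf.feature_column.numeric_column('sepal_length')
petal_length = tf.feature_column.numeric_column('petal_length')
petal_buckets = tf.feature_column.bucketized_column(
    petal_length, boundaries=[1.0, 3.0, 5.0])

feature_columns = [sepal_length, petal_length, petal_buckets]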

Build the model with Keras

You already know about Keras. Building a neural network with Keras is extremely fast, and it is fully compatible with TensorFlow.

from keras.models import Sequential
from keras.layers import Dense

# x is your training data (a NumPy array)
simple_model = Sequential()
simple_model.add(Dense(3, input_shape=(x.shape[1],), activation='relu', name='layer1'))
simple_model.add(Dense(5, activation='relu', name='layer2'))
simple_model.add(Dense(1, activation='sigmoid', name='layer3'))

Building a model is as simple as that, and calling a model already defined in the API takes only a single line, which is extremely convenient.
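As a rough sketch of the next step (x and y here stand in for your own training arrays, which are not part of the original snippet), the model above can be compiled and trained in a couple more lines:

# Compile and train the model; x and y are assumed NumPy arrays.
simple_model.compile(optimizer='adam',
                     loss='binary_crossentropy',
                     metrics=['accuracy'])
simple_model.fit(x, y, epochs=10, batch_size=32)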

Use Canned Estimators

The Estimators API provides model selection, evaluation, training, and more. Since version 1.3, Google has added another layer on top of it, called Canned Estimators: creating a deep model takes only one line of code. Estimators can be used together with the Feature Columns mentioned above.

tf.estimator.Estimator is the base class. Pre-made Estimators are subclasses of the base class; they are already-defined models that we can use directly. Custom Estimators are instances of the base class with no model defined, so we have to define the model ourselves.

The model here consists of three parts (a short sketch follows the list):

  • Input function: an input function, built with Datasets, that supplies the data;

  • Model function: defines the model for training, evaluation and prediction, and monitors the model's parameters;

  • Estimator: controls the data flow and the various operations on the model.
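A minimal sketch of a canned Estimator (the feature names, values and layer sizes below are made up for illustration):

import tensorflow as tf

# Feature Columns describe the inputs to the pre-made model.
feature_columns = [tf.feature_column.numeric_column('petal_length'),
                   tf.feature_column.numeric_column('sepal_length')]

def train_input_fn():
    # Input function: return a Dataset of (features, labels) batches.
    features = {'petal_length': [1.4, 4.7, 5.1],
                'sepal_length': [5.1, 6.4, 5.8]}
    labels = [0, 1, 2]
    dataset = tf.data.Dataset.from_tensor_slices((features, labels))
    return dataset.shuffle(3).batch(3).repeat()

# One line creates a pre-made deep classifier.
classifier = tf.estimator.DNNClassifier(
    feature_columns=feature_columns,
    hidden_units=[10, 10],
    n_classes=3)

classifier.train(input_fn=train_input_fn, steps=100)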

Package the model with SavedModel

Compared with saving models with TensorFlow's original tf.train.Saver, SavedModel provides a better way to deploy models to a production environment and is better suited to commercial use.

As shown in the lower right part of the figure above, packaging a model with SavedModel can generate two types of models:

For the first type of model, TensorFlow Model Analysis makes it easy to analyze the model: whether there are problems with the parameters, whether the model is designed properly, and so on. Once the analysis shows the model is working well, we can deploy it with TensorFlow Serving.

In addition, compared with the Saver approach, we do not need to redefine the graph (the model) at inference time. With Saver, the model has to be redefined before it can be used; that is fine if the person using the model is the one who designed it, but if someone else has to use the model without knowing its tensors, things get awkward. SavedModel therefore makes the model much easier to use.
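As a rough sketch (the export directory and feature spec here are assumptions for illustration), the Estimator from the previous section can be exported as a SavedModel like this:

import tensorflow as tf

# Serving input: parse serialized tf.Example protos into the same
# features the model was trained on (names assumed for illustration).
feature_spec = {
    'petal_length': tf.FixedLenFeature([1], tf.float32),
    'sepal_length': tf.FixedLenFeature([1], tf.float32),
}
serving_input_fn = (
    tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec))

# Export the trained Estimator; the resulting directory contains the
# graph definition and weights, so it can be served (e.g. with
# TensorFlow Serving) without redefining the graph in code.
export_dir = classifier.export_savedmodel('exported_model', serving_input_fn)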

Conclusion

Google Developer Days was a feast for us, and I hope to keep learning along with you. Please give this article a thumbs up if you can; they say a like can get you into Google.
