How to play with TensorFlow Playground

I played with TensorFlow Playground a long time ago, but back then I only had a vague idea of what a neural network was and did not understand what the various parameters meant. I just clicked around at random, and even after many attempts I failed to classify the spiral-shaped data set.

Now that I have worked through half of Andrew Ng's course (readers interested in my earlier notes can see the posts on linear regression, logistic regression and neural networks), I have played with it again with some real understanding, and it is indeed a different experience. And since TensorFlow Playground is open source (the code is neat and clean), you can take the source code home and play with it your own way.

So~ my way of playing is to modify TensorFlow Playground to review the content of Ng's earlier courses. Below are my notes for reference. If you come up with other interesting ways to play, please leave a message and tell me ha (● – ●) ~

Linear regression

  • Connect all the features directly to the OUTPUT
  • When the problem type is Regression, the OUTPUT node's default activation function is Linear (see the activation function part of the code), so the output of the whole network is exactly the prediction function of linear regression
  • The learning algorithm of the neural network is backpropagation, and from the chain rule you can see that when the network is connected this way, the weights are updated in the same way as in linear regression
  • You can add higher-order polynomial features to strengthen what the network can learn (remember to add them to the state-serialization part of the code so that your feature selection is saved and you don't have to re-select the features every time)
  • Standardize the features (be sure to standardize, otherwise the loss will blow up instantly and gradient descent will never reach the bottom); see the sketch after this list
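
As a concrete illustration of the last two points, here is a minimal sketch of z-score standardization for one feature column, written in TypeScript like the Playground itself. The function and variable names are my own, not part of the Playground's code; you would apply something like this wherever the training examples are constructed.

```typescript
// Minimal sketch: z-score standardization of one feature column.
// Names here are illustrative, not part of the Playground API.
function standardize(values: number[]): number[] {
  const n = values.length;
  const mean = values.reduce((sum, v) => sum + v, 0) / n;
  const variance = values.reduce((sum, v) => sum + Math.pow(v - mean, 2), 0) / n;
  const std = Math.sqrt(variance) || 1; // guard against a zero-variance column
  return values.map(v => (v - mean) / std);
}

// A higher-order feature such as x^2 lives on a much larger scale than x,
// which is exactly why the loss "flies away" without standardization.
const x = [-5, -2, 0, 3, 6];
const xSquared = x.map(v => v * v);
console.log(standardize(x), standardize(xSquared));
```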

Logistic regression

  • Connect all the features directly to the OUTPUT
  • When the problem type is Classification, the default activation function of OUTPUT is Tanh, so we change it to Sigmoid; that way the output of the whole network is exactly the prediction function of logistic regression
  • Change the data set labels from 1 and -1 to 1 and 0
  • Add a new loss function (the log loss / cross-entropy), so that after applying the chain rule the weight update is the same as in logistic regression; see the sketch after this list
  • Add higher-order polynomial features
  • Standardize the features
  • Change the decision-boundary threshold from 0 to 0.5 (because the network output now lies between 0 and 1)
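
Here is a minimal sketch of that log loss (cross-entropy), written in an {error, der} shape like the Playground's square loss. The ErrorFunction interface below is my assumption about nn.ts, so match it against the actual definition in the source.

```typescript
// Sketch of the log loss (cross-entropy) in an {error, der} shape.
// The interface is assumed to mirror the Playground's nn.ts; verify against the source.
interface ErrorFunction {
  error: (output: number, target: number) => number;
  der: (output: number, target: number) => number;
}

const EPS = 1e-12; // keep Math.log away from zero

const LOG_LOSS: ErrorFunction = {
  // target is 0 or 1 after relabeling the data set
  error: (output, target) =>
    -(target * Math.log(output + EPS) + (1 - target) * Math.log(1 - output + EPS)),
  // d(loss)/d(output) = (output - target) / (output * (1 - output));
  // multiplied by the sigmoid derivative output * (1 - output) during backprop,
  // this collapses to (output - target), the logistic regression gradient.
  der: (output, target) =>
    (output - target) / Math.max(output * (1 - output), EPS)
};

console.log(LOG_LOSS.error(0.9, 1), LOG_LOSS.der(0.9, 1));
```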

Neural network

  • Try different activation functions, following the Activation function article on Wikipedia; see the sketch after this list
  • Modify the network's limit on the number of hidden layers
  • Modify the maximum number of neurons at each layer
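
For example, a new activation can be added in the same {output, der} shape the Playground uses for Tanh, ReLU and Sigmoid. The sketch below adds a Leaky ReLU; the ActivationFunction interface is my assumption about nn.ts, so check it against the actual source.

```typescript
// Sketch: a Leaky ReLU activation in an {output, der} shape.
// The interface is assumed to mirror the Playground's nn.ts.
interface ActivationFunction {
  output: (input: number) => number;
  der: (input: number) => number;
}

const LEAKY_RELU: ActivationFunction = {
  output: x => (x > 0 ? x : 0.01 * x), // small slope instead of a flat zero
  der: x => (x > 0 ? 1 : 0.01)         // keeps a gradient flowing for negative inputs
};

console.log(LEAKY_RELU.output(-2), LEAKY_RELU.der(-2)); // -0.02 0.01
```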

Tips

  • The weights of connecting neurons can be manually modified
  • The bias can be viewed and adjusted manually at the lower-left corner of each hidden-layer neuron
  • Hover over the neuron to see its output on the right
  • Running a single step at a time helps you debug your modified code (print some intermediate values)
  • The learning curve can help you determine whether your model has high bias or high variance problems
  • If the model is overfitting because the weights are growing too large, L1 or L2 regularization can be used to rein them in; see the sketch after this list
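
As a reference for that last tip, here is a minimal sketch of the L1 and L2 penalty terms in an {output, der} shape similar to the Playground's regularization functions; the interface name is my assumption.

```typescript
// Sketch: L1 and L2 regularization in an {output, der} shape.
// output is the penalty added to the loss; der is what gets added to the gradient.
interface RegularizationFunction {
  output: (weight: number) => number;
  der: (weight: number) => number;
}

const L1: RegularizationFunction = {
  output: w => Math.abs(w),
  der: w => (w < 0 ? -1 : w > 0 ? 1 : 0) // constant push toward exactly zero
};

const L2: RegularizationFunction = {
  output: w => 0.5 * w * w,
  der: w => w                            // shrinks large weights proportionally
};

console.log(L1.der(-3), L2.der(-3)); // -1 -3
```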


hertzcat

2018-04-27