A neural network (NN) is a mathematical model for distributed, parallel information processing that imitates the behavior of biological neural networks. Depending on the complexity of the system, the network processes information by adjusting the connections between a large number of internal nodes. Given enough data, a neural network can approximate almost any functional relationship between input and output.

TensorFlow is an excellent deep learning framework.

This article demonstrates how to predict stock prices with a TensorFlow neural network.

 


1. Data sources

First of all, find a set of stock data. The data can be crawled from sites such as Eastmoney (Oriental Wealth) or Great Wisdom. See previous articles for crawling methods.

```python
date = np.linspace(1, 30, 30)
beginPrice = np.array([2923.19, 2928.06, 2943.92, 2946.26, 2944.40, 2920.85,
                       2861.33, 2854.58, 2776.69, 2789.02, 2784.18, 2805.59,
                       2781.98, 2798.05, 2824.49, 2762.34, 2817.57, 2835.52,
                       2879.08, 2875.47, 2887.66, 2885.15, 2851.02, 2879.52,
                       2901.63, 2896.00, 2907.38, 2886.94, 2925.94, 2927.75])
endPrice = np.array([2937.36, 2944.54, 2941.01, 2952.34, 2932.51, 2908.77,
                     2867.84, 2821.50, 2777.56, 2768.68, 2794.55, 2774.75,
                     2814.99, 2797.26, 2808.91, 2815.80, 2823.82, 2883.10,
                     2880.00, 2880.33, 2883.44, 2897.43, 2863.57, 2902.19,
                     2893.76, 2890.92, 2886.24, 2924.11, 2930.15, 2957.41])
```

2. Data presentation

The stock data are stored as a 30-row, 2-column matrix: the first column holds each day's opening price and the second column the closing price. The data are then drawn with the Matplotlib visualization library: days where the closing price is higher than the opening price are shown in red, and the rest in green.
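The coloring rule can be expressed as a small helper before wiring it into Matplotlib. This is a minimal sketch: the function name `bar_color` is illustrative and not part of the original code, and only the first three days of the sample data are used.

```python
import numpy as np

def bar_color(begin, end):
    # red when the close is above the open, green otherwise
    return 'r' if end > begin else 'g'

# first three days of the sample data
beginPrice = np.array([2923.19, 2928.06, 2943.92])
endPrice = np.array([2937.36, 2944.54, 2941.01])
colors = [bar_color(b, e) for b, e in zip(beginPrice, endPrice)]
print(colors)  # ['r', 'r', 'g'] -- day 3 closed below its open
```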

```python
for i in range(0, 30):
    # dateOne holds [i, i] so each day is drawn as a vertical bar
    dateOne = np.zeros([2])
    dateOne[0] = i
    dateOne[1] = i
    priceOne = np.zeros([2])
    priceOne[0] = beginPrice[i]
    priceOne[1] = endPrice[i]
    if endPrice[i] > beginPrice[i]:
        plt.plot(dateOne, priceOne, 'r', lw=6)
    else:
        plt.plot(dateOne, priceOne, 'g', lw=6)
plt.xlabel("date")
plt.ylabel("price")
plt.show()
```

 


3. Tensorflow prediction

Using the TensorFlow framework, a three-layer neural network is designed: an input layer, a hidden layer with 25 nodes, and an output layer. This network is then trained to predict the stock closing price.
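Before the TensorFlow version, the layer shapes of this design can be sanity-checked with a plain NumPy sketch of the same forward pass. The random weights here are purely illustrative (untrained), not the author's model:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((30, 1))                    # 30 normalized dates, 1 feature each
w1 = rng.random((1, 25))                   # input -> hidden (25 nodes)
b1 = np.zeros((1, 25))
layer1 = np.maximum(0, x @ w1 + b1)        # ReLU hidden layer
w2 = rng.random((25, 1))                   # hidden -> output
b2 = np.zeros((30, 1))
layer2 = np.maximum(0, layer1 @ w2 + b2)   # predicted (normalized) closing prices
print(layer1.shape, layer2.shape)          # (30, 25) (30, 1)
```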

```python
dateNormal = np.zeros([30, 1])
priceNormal = np.zeros([30, 1])
# normalize
for i in range(0, 30):
    dateNormal[i, 0] = i / 29.0
    priceNormal[i, 0] = endPrice[i] / 3000.0

x = tf.placeholder(tf.float32, [None, 1])
y = tf.placeholder(tf.float32, [None, 1])
# x -> hidden_layer
w1 = tf.Variable(tf.random_uniform([1, 25], 0, 1))
b1 = tf.Variable(tf.zeros([1, 25]))
wb1 = tf.matmul(x, w1) + b1
layer1 = tf.nn.relu(wb1)
# hidden_layer -> output
w2 = tf.Variable(tf.random_uniform([25, 1], 0, 1))
b2 = tf.Variable(tf.zeros([30, 1]))  # [30, 1] ties the model to a fixed batch of 30 days
wb2 = tf.matmul(layer1, w2) + b2
layer2 = tf.nn.relu(wb2)
# y is the true data, layer2 is the network's prediction
loss = tf.reduce_mean(tf.square(y - layer2))
# gradient descent
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(0, 20000):
        sess.run(train_step, feed_dict={x: dateNormal, y: priceNormal})
    # predict with the trained w1, w2, b1, b2 --> layer2
    pred = sess.run(layer2, feed_dict={x: dateNormal})
    date1 = np.linspace(0, 29, 30)
    plt.plot(date1, pred * 3000, 'b', lw=3)  # undo the /3000 normalization
plt.show()
```

 

Running the code above, the neural network's prediction is visualized as shown in the figure below.

[Figure: predicted closing price (blue line) drawn over the red/green daily bars]

The complete code is as follows:

```python
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
# For TensorFlow 2.x, use instead:
# import tensorflow.compat.v1 as tf
# tf.disable_v2_behavior()

date = np.linspace(1, 30, 30)
beginPrice = np.array([2923.19, 2928.06, 2943.92, 2946.26, 2944.40, 2920.85,
                       2861.33, 2854.58, 2776.69, 2789.02, 2784.18, 2805.59,
                       2781.98, 2798.05, 2824.49, 2762.34, 2817.57, 2835.52,
                       2879.08, 2875.47, 2887.66, 2885.15, 2851.02, 2879.52,
                       2901.63, 2896.00, 2907.38, 2886.94, 2925.94, 2927.75])
endPrice = np.array([2937.36, 2944.54, 2941.01, 2952.34, 2932.51, 2908.77,
                     2867.84, 2821.50, 2777.56, 2768.68, 2794.55, 2774.75,
                     2814.99, 2797.26, 2808.91, 2815.80, 2823.82, 2883.10,
                     2880.00, 2880.33, 2883.44, 2897.43, 2863.57, 2902.19,
                     2893.76, 2890.92, 2886.24, 2924.11, 2930.15, 2957.41])

for i in range(0, 30):
    # dateOne holds [i, i] so each day is drawn as a vertical bar
    dateOne = np.zeros([2])
    dateOne[0] = i
    dateOne[1] = i
    priceOne = np.zeros([2])
    priceOne[0] = beginPrice[i]
    priceOne[1] = endPrice[i]
    if endPrice[i] > beginPrice[i]:
        plt.plot(dateOne, priceOne, 'r', lw=6)
    else:
        plt.plot(dateOne, priceOne, 'g', lw=6)
plt.xlabel("date")
plt.ylabel("price")
# plt.show()

dateNormal = np.zeros([30, 1])
priceNormal = np.zeros([30, 1])
# normalize
for i in range(0, 30):
    dateNormal[i, 0] = i / 29.0
    priceNormal[i, 0] = endPrice[i] / 3000.0

x = tf.placeholder(tf.float32, [None, 1])
y = tf.placeholder(tf.float32, [None, 1])
# x -> hidden_layer
w1 = tf.Variable(tf.random_uniform([1, 25], 0, 1))
b1 = tf.Variable(tf.zeros([1, 25]))
wb1 = tf.matmul(x, w1) + b1
layer1 = tf.nn.relu(wb1)
# hidden_layer -> output
w2 = tf.Variable(tf.random_uniform([25, 1], 0, 1))
b2 = tf.Variable(tf.zeros([30, 1]))  # [30, 1] ties the model to a fixed batch of 30 days
wb2 = tf.matmul(layer1, w2) + b2
layer2 = tf.nn.relu(wb2)
# y is the true data, layer2 is the network's prediction
loss = tf.reduce_mean(tf.square(y - layer2))
# gradient descent
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(0, 20000):
        sess.run(train_step, feed_dict={x: dateNormal, y: priceNormal})
    # predict with the trained w1, w2, b1, b2 --> layer2
    pred = sess.run(layer2, feed_dict={x: dateNormal})
    date1 = np.linspace(0, 29, 30)
    plt.plot(date1, pred * 3000, 'b', lw=3)  # undo the /3000 normalization
plt.show()
```

The code requires the NumPy, Matplotlib, and TensorFlow libraries. To speed up downloads in China, it is recommended to switch to a domestic pip mirror, such as Douban or Tsinghua:

```
pip install numpy -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install matplotlib -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install tensorflow -i https://pypi.tuna.tsinghua.edu.cn/simple
```