Preface

Recently I have been studying Adaptive Style Transfer and putting it into engineering practice. Along the way, I have summarized the Encoder-Decoder architecture in deep learning.

Main text

Encoder-Decoder is a very common model framework in deep learning. An encoder is a network (FC, CNN, RNN, etc.) that receives an input and outputs a feature vector. These feature vectors are simply another representation of the input's features and information.

Encoding is really just another representation of the content.

The decoder is also a network (usually with the same structure as the encoder, but oriented in the opposite direction) that takes the feature vector from the encoder and outputs a result as close as possible to the actual input or the expected output.
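The encoder/decoder pair above can be sketched in a few lines of numpy. This is a hypothetical toy example: the weight shapes and the 6-to-2-dimensional compression are illustrative choices, not part of any particular model.

```python
import numpy as np

# Toy encoder-decoder: the encoder maps a 6-dim input to a 2-dim
# feature vector; the decoder maps that vector back to 6 dims.
rng = np.random.default_rng(0)
W_enc = rng.normal(size=(6, 2))   # encoder weights (illustrative shape)
W_dec = rng.normal(size=(2, 6))   # decoder weights (illustrative shape)

def encode(x):
    # The feature vector is "another representation" of the input.
    return np.tanh(x @ W_enc)

def decode(z):
    # The decoder tries to produce an output close to the original input.
    return z @ W_dec

x = rng.normal(size=(1, 6))
z = encode(x)          # compressed representation, shape (1, 2)
x_hat = decode(z)      # reconstruction, shape (1, 6)
print(z.shape, x_hat.shape)
```

The key point is the bottleneck: the 2-dimensional code must retain enough information for the decoder to approximate the original 6-dimensional input.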

To be exact, Encoder-Decoder is not a specific model but a class of frameworks. The data handled by the encoder and decoder can be text, speech, images, or video, and the networks can be CNN, RNN, BiRNN, LSTM, GRU, and so on. Based on the Encoder-Decoder framework, we can therefore design a wide variety of application algorithms.

The encoder is trained jointly with the decoder, without labels (unsupervised). The loss function measures the difference (delta) between the actual input and the reconstructed input.
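A minimal sketch of that reconstruction loss, assuming mean squared error as the concrete measure of difference (other distances work too):

```python
import numpy as np

# Unsupervised reconstruction loss: mean squared error between the
# actual input x and the decoder's reconstruction x_hat.
def reconstruction_loss(x, x_hat):
    return np.mean((x - x_hat) ** 2)

x = np.array([1.0, 0.0, 1.0])
perfect = reconstruction_loss(x, x)                        # 0.0: exact reconstruction
imperfect = reconstruction_loss(x, np.array([0.5, 0.0, 1.0]))
print(perfect, imperfect)
```

Because the target is the input itself, no labels are needed, which is what makes the training unsupervised.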

Once trained, the encoder produces a feature vector for any input, from which the decoder can reconstruct that input.

The technology is used in a variety of different applications, from translation to generative models.

Usually, however, applications do not reconstruct the original input but instead map/translate/associate the input to a specific output: for example, translating French into English.

Concrete example

Autoencoder

An autoencoder neural network is an unsupervised machine learning algorithm with three layers: an input layer, a hidden (encoding) layer, and a decoding layer. The goal of the network is to reconstruct its input so that the hidden layer learns a good representation of it. It is trained with backpropagation, with the target values set equal to the input values. The autoencoder is a kind of unsupervised pretraining network. Its structure is shown in the figure below:
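The three-layer autoencoder above can be sketched end to end in numpy. This is a minimal illustration, assuming a tanh hidden layer, a linear output layer, plain gradient descent, and arbitrary toy data; the sizes and learning rate are not from any published model.

```python
import numpy as np

# Three-layer autoencoder: input -> hidden (encoding) -> output (decoding),
# trained by backpropagation with the target set equal to the input.
rng = np.random.default_rng(42)
X = rng.normal(size=(64, 8))                        # toy unlabeled data

n_in, n_hidden = 8, 3
W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))   # encoder weights
W2 = rng.normal(scale=0.1, size=(n_hidden, n_in))   # decoder weights
lr = 0.1

losses = []
for _ in range(200):
    H = np.tanh(X @ W1)                  # hidden representation (the code)
    X_hat = H @ W2                       # linear reconstruction
    err = X_hat - X                      # target == input (unsupervised)
    losses.append(np.mean(err ** 2))
    # Backpropagation through the two layers.
    dW2 = H.T @ err / len(X)
    dH = err @ W2.T * (1 - H ** 2)       # tanh derivative
    dW1 = X.T @ dH / len(X)
    W1 -= lr * dW1
    W2 -= lr * dW2

print(losses[0], losses[-1])             # reconstruction loss decreases
```

Note that the hidden layer (3 units) is narrower than the input (8 units), so the network cannot simply copy its input; it is forced to learn a compressed representation.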

[image: autoencoder structure]

CNN

In a CNN setting, an encoder-decoder network typically pairs a CNN encoder with a CNN decoder: the encoder progressively downsamples the input, and the decoder upsamples it back.

RNN

In an RNN setting, an encoder-decoder network typically pairs an RNN encoder with an RNN decoder: the encoder compresses the input sequence into a fixed-size context vector, and the decoder unrolls from that vector to emit the output sequence.
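A minimal sketch of that seq2seq pattern, assuming a plain tanh RNN cell; all weight shapes here are illustrative choices, not a specific published architecture.

```python
import numpy as np

# RNN encoder-decoder: the encoder reads the input sequence step by step
# and compresses it into its final hidden state; the decoder unrolls from
# that state to emit an output sequence.
rng = np.random.default_rng(1)
d_in, d_hid, d_out = 4, 5, 4
W_xh = rng.normal(scale=0.1, size=(d_in, d_hid))   # input-to-hidden
W_hh = rng.normal(scale=0.1, size=(d_hid, d_hid))  # hidden-to-hidden
W_hy = rng.normal(scale=0.1, size=(d_hid, d_out))  # hidden-to-output

def encode(seq):
    h = np.zeros(d_hid)
    for x in seq:                         # consume the whole input sequence
        h = np.tanh(x @ W_xh + h @ W_hh)
    return h                              # fixed-size context vector

def decode(h, steps):
    outputs = []
    for _ in range(steps):                # unroll from the context vector
        h = np.tanh(h @ W_hh)
        outputs.append(h @ W_hy)
    return np.stack(outputs)

seq = rng.normal(size=(3, d_in))          # input sequence of length 3
context = encode(seq)
out = decode(context, steps=2)            # output sequence of length 2
print(context.shape, out.shape)
```

Because the context vector has a fixed size regardless of sequence length, the input and output sequences may have different lengths, which is exactly what translation-style applications need.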

Adaptive Style Transfer

Afterword

Resources

CSDN blog post (Chinese): blog.csdn.net/xbinworld/a…

Jiqizhixin (Machine Heart): www.jiqizhixin.com/graph/techn…

What is an Encoder/Decoder in Deep Learning? (Quora): www.quora.com/What-is-an-…

Is there a difference between autoencoders and encoder-decoder in deep learning? (Quora): www.quora.com/Is-there-a-…