File:EncoderDecoder.pdf

Summary

Description
English: An example of an encoder-decoder architecture. An encoder RNN is fed an input sequence of length $T_1$; the encoder's final hidden state is then used as the input to the decoder RNN at every timestep in order to generate an output sequence of length $T_2$. Both the encoder and the decoder RNN could, for example, be LSTM units (a minimal code sketch follows the summary fields below).
Date
Source Own work
Author Babayaga94
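To make the diagram's description concrete, here is a minimal PyTorch sketch of the architecture it depicts: the encoder's final hidden state is repeated and fed to the decoder as its input at every timestep. All names, sizes, and the choice of LSTM cells are illustrative assumptions; the figure itself does not prescribe an implementation.

```python
import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    """Hypothetical sketch: encoder RNN summarizes a length-T1 sequence;
    its final hidden state is fed to the decoder at every timestep."""

    def __init__(self, input_size, hidden_size, output_size, T2):
        super().__init__()
        self.T2 = T2  # length of the generated output sequence
        self.encoder = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.decoder = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        self.project = nn.Linear(hidden_size, output_size)

    def forward(self, x):  # x: (batch, T1, input_size)
        # Encode; keep only the final hidden state as the context vector.
        _, (h_n, _) = self.encoder(x)            # h_n: (1, batch, hidden_size)
        context = h_n.transpose(0, 1)            # (batch, 1, hidden_size)
        # Feed the same context to the decoder at every one of the T2 steps.
        dec_in = context.repeat(1, self.T2, 1)   # (batch, T2, hidden_size)
        dec_out, _ = self.decoder(dec_in)        # (batch, T2, hidden_size)
        return self.project(dec_out)             # (batch, T2, output_size)

# Usage: map a length-7 input sequence to a length-5 output sequence.
model = EncoderDecoder(input_size=10, hidden_size=32, output_size=4, T2=5)
y = model(torch.randn(2, 7, 10))  # y.shape == (2, 5, 4)
```

Note that many encoder-decoder variants instead use the context only to initialize the decoder state, or feed back the decoder's previous output; the version above matches the figure, where the context is an input at every decoder timestep.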

Licensing

I, the copyright holder of this work, hereby publish it under the following license:
This file is licensed under the Creative Commons Attribution-Share Alike 4.0 International license.
You are free:
  • to share – to copy, distribute and transmit the work
  • to remix – to adapt the work
Under the following conditions:
  • attribution – You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
  • share alike – If you remix, transform, or build upon the material, you must distribute your contributions under the same or compatible license as the original.
Category:CC-BY-SA-4.0 Category:Deep learning Category:Machine learning Category:Machine translation Category:Recurrent networks Category:Self-published work