Sequence-to-sequence
Sequence-to-sequence, commonly abbreviated seq2seq, denotes a family of models and learning frameworks designed to map an input sequence to an output sequence. The input and output sequences can differ in length, and the approach is used across natural language processing, speech, and other sequential data tasks.
A typical sequence-to-sequence architecture consists of an encoder and a decoder. The encoder reads the input sequence and compresses it into an intermediate representation, such as a fixed-length context vector or a sequence of hidden states; the decoder then generates the output sequence one token at a time, conditioned on that representation and on the tokens it has already produced.
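To make the encode-then-decode pattern concrete, here is a minimal non-neural sketch. The "model" here is a hypothetical toy that simply reverses its input and appends an end-of-sequence marker; the point is only to show the two-stage structure and that input and output lengths need not match.

```python
# Toy illustration of the encoder-decoder pattern (not a trained model):
# the "encoder" folds the input sequence into a single context state, and
# the "decoder" unrolls that state into an output sequence step by step.

def encode(tokens):
    """Fold the input sequence into one context state (here, just a tuple)."""
    return tuple(tokens)

def decode(state, stop_token="<eos>"):
    """Emit output tokens one at a time until a stop condition is reached.
    This toy 'decoder' reverses the input and then emits <eos>."""
    out = []
    for tok in reversed(state):
        out.append(tok)
    out.append(stop_token)  # output length differs from input length
    return out

state = encode(["a", "b", "c"])
output = decode(state)
print(output)
```

A real seq2seq model replaces both functions with learned networks (originally recurrent networks, later attention-based architectures), but the interface is the same: the decoder sees the input only through the encoder's representation.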
Training is performed end-to-end by maximizing the probability of the target sequence given the input, usually by minimizing the token-level cross-entropy loss, often with teacher forcing, in which the decoder is fed the ground-truth previous token at each step rather than its own prediction.
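The objective can be illustrated with a small worked example. The per-step probabilities below are hypothetical values a decoder might assign to the reference tokens of one target sequence; the sequence probability factorizes by the chain rule, and the training loss is the negative log-likelihood summed over steps.

```python
import math

# Hypothetical probabilities p(y_t | y_<t, x) assigned by a decoder
# to each reference token of one target sequence (illustrative values).
step_probs = [0.9, 0.7, 0.8]

# Chain rule: p(y | x) = product over t of p(y_t | y_<t, x)
seq_prob = math.prod(step_probs)

# Training maximizes log p(y | x), i.e. minimizes the negative
# log-likelihood (summed token-level cross-entropy).
nll = -sum(math.log(p) for p in step_probs)

print(round(seq_prob, 4))  # 0.504
print(round(nll, 4))
```

Note that minimizing the summed negative log-probabilities is equivalent to maximizing the product of probabilities, which is why cross-entropy is the standard surrogate for the sequence-likelihood objective stated above.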
Sequence-to-sequence models underpin many applications, including machine translation, text summarization, speech recognition, question answering, and image captioning.