Seq2seq

From Wikipedia, the free encyclopedia

Seq2seq is a family of machine learning approaches used for language processing.[1] Applications include language translation, image captioning, conversational models and text summarization.[2] Its basic approach is to map an input sequence to an output sequence with a pair of networks, an encoder and a decoder.

History

The algorithm was developed by Google for use in machine translation.[2]

In 2019, Facebook announced its use in solving differential equations. The company claimed that it could solve complex equations more rapidly and with greater accuracy than commercial solutions such as Mathematica, MATLAB and Maple. First, the equation is parsed into a tree structure to avoid notational idiosyncrasies. An LSTM neural network then applies its standard pattern recognition facilities to process the tree.[1]
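One way to feed such a tree to a sequence model is to serialize it into a flat token sequence. The sketch below is purely illustrative: the tuple-based tree encoding, the prefix-order traversal and the example expression are assumptions made for the example, not details of Facebook's system.

    def to_tokens(node):
        if isinstance(node, tuple):              # internal node: (operator, child, child, ...)
            op, *children = node
            return [op] + [tok for child in children for tok in to_tokens(child)]
        return [str(node)]                       # leaf: a variable or constant

    # The expression x**2 + sin(x), written as a tree
    expr = ("add", ("pow", "x", 2), ("sin", "x"))
    print(to_tokens(expr))                       # ['add', 'pow', 'x', '2', 'sin', 'x']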

Technique

Seq2seq turns one sequence into another sequence. It does so with a recurrent neural network (RNN), more often an LSTM or GRU, to avoid the problem of vanishing gradients. The context for each item is the output from the previous step. The primary components are one encoder and one decoder network. The encoder turns each input item into a corresponding hidden vector containing the item and its context. The decoder reverses the process, turning the vector into an output item, using the previous output as the output context.[2]
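A minimal sketch of such an encoder-decoder pair, written in PyTorch for illustration; the class structure, layer sizes and the greedy translate loop are assumptions made for the example, not a reference implementation.

    import torch
    import torch.nn as nn

    class Encoder(nn.Module):
        def __init__(self, vocab_size, hidden_size):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, hidden_size)
            self.rnn = nn.GRU(hidden_size, hidden_size, batch_first=True)

        def forward(self, src):                  # src: (batch, src_len) token ids
            hidden_states, last_hidden = self.rnn(self.embed(src))
            return hidden_states, last_hidden    # last_hidden summarises the whole input

    class Decoder(nn.Module):
        def __init__(self, vocab_size, hidden_size):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, hidden_size)
            self.rnn = nn.GRU(hidden_size, hidden_size, batch_first=True)
            self.out = nn.Linear(hidden_size, vocab_size)

        def forward(self, prev_token, hidden):   # prev_token: (batch, 1) previous output
            output, hidden = self.rnn(self.embed(prev_token), hidden)
            return self.out(output), hidden      # logits over the next output token

    # Greedy decoding: feed each predicted token back in as the next "previous output".
    def translate(encoder, decoder, src, bos_id, eos_id, max_len=50):
        _, hidden = encoder(src)
        token = torch.full((src.size(0), 1), bos_id, dtype=torch.long)
        result = []
        for _ in range(max_len):
            logits, hidden = decoder(token, hidden)
            token = logits.argmax(dim=-1)        # pick the most probable token
            result.append(token)
            if (token == eos_id).all():
                break
        return torch.cat(result, dim=1)

The greedy argmax in translate commits to a single token at each step; the beam search described below keeps several candidate sequences instead.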

Optimizations include:[2]

  • Attention: The input to the decoder is a single vector which must store the entire context. Attention allows the decoder to look at the input sequence selectively: a softmax over the attention scores yields a distribution over the encoder states, and the encoder states are averaged, weighted by that attention distribution, to form the context at each decoding step[3] (see the attention sketch after this list).
  • Beam search: Instead of greedily picking the single most probable output (word) at each step, multiple highly probable choices are retained and expanded, structured as a tree, and only the highest-scoring candidate sequences are kept[3] (see the beam-search sketch after this list).
  • Bucketing: Variable-length sequences are possible because of padding with 0s, which may be done to both input and output. However, if the maximum sequence length is 100 and the input is just 3 items long, expensive space is wasted. Buckets can be of varying sizes and specify both input and output lengths.
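A minimal sketch of the attention step described above, assuming a simple dot-product score between the current decoder state and each encoder state; the score function varies between models, and the shapes follow PyTorch conventions.

    import torch
    import torch.nn.functional as F

    def attention_context(decoder_state, encoder_states):
        # decoder_state: (batch, hidden); encoder_states: (batch, src_len, hidden)
        # Dot-product scores between the decoder state and every encoder state.
        scores = torch.bmm(encoder_states, decoder_state.unsqueeze(2)).squeeze(2)  # (batch, src_len)
        weights = F.softmax(scores, dim=1)                                         # attention distribution
        # Average the encoder states, weighted by the attention distribution.
        context = torch.bmm(weights.unsqueeze(1), encoder_states).squeeze(1)       # (batch, hidden)
        return context, weights

And a schematic beam search, assuming a step(prefix) function that returns candidate next tokens with their log-probabilities; this interface and the default beam width are assumptions made for the example.

    def beam_search(step, bos_id, eos_id, beam_width=3, max_len=20):
        # step(prefix) -> list of (token, log_prob) pairs for the next token
        beams = [([bos_id], 0.0)]                    # each beam: (token sequence, cumulative log-prob)
        for _ in range(max_len):
            candidates = []
            for seq, score in beams:
                if seq[-1] == eos_id:                # finished hypotheses are carried over unchanged
                    candidates.append((seq, score))
                    continue
                for token, log_prob in step(seq):
                    candidates.append((seq + [token], score + log_prob))
            # Keep only the highest-scoring partial sequences (the beam).
            beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
            if all(seq[-1] == eos_id for seq, _ in beams):
                break
        return beams[0][0]                           # best-scoring sequence found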

Training typically uses a cross-entropy loss function, whereby an output is penalized to the extent that the probability it assigns to the correct succeeding token is less than 1.[3]
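Concretely, if the model assigns probability p to the correct next token, the per-step loss is -log p, which is 0 when p = 1 and grows as p shrinks. A toy illustration with made-up numbers:

    import math

    # Toy example: the model assigns these probabilities to the next token.
    predicted = {"chat": 0.6, "chien": 0.3, "maison": 0.1}
    correct = "chat"

    loss = -math.log(predicted[correct])                # 0 only if the correct token gets probability 1
    print(f"cross-entropy for this step: {loss:.3f}")   # approx. 0.511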

Related software

Software adopting similar approaches includes OpenNMT (Torch), Neural Monkey (TensorFlow) and Nematus (Theano).[4]

References

  1. ^ a b "Facebook has a neural network that can do advanced math". MIT Technology Review. December 17, 2019. Retrieved 2019-12-17.
  2. ^ a b c d Wadhwa, Mani (2018-12-05). "seq2seq model in Machine Learning". GeeksforGeeks. Retrieved 2019-12-17.
  3. ^ a b c Hewitt, John; Kriz, Reno (2018). "Sequence 2 sequence Models" (PDF). Stanford University.
  4. ^ "Overview - seq2seq". google.github.io. Retrieved 2019-12-17.
