I’ve seen in the TensorFlow seq2seq tutorial that the input sequence fed to the encoder RNN is reversed (e.g., “Hi there END” becomes “END there Hi”). I’ve also read that, in seq2seq models, tokens near the end of the input sequence are naturally given more weight, since they are closest to the final encoder state.
Does OpenNMT have the same bias / already reverse the input? Or should I reverse the input sequence manually (e.g., with a small preprocessing script like the one below) to make sure that the end of the sequence is given more weight?
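For reference, this is the kind of manual reversal I mean: a minimal sketch, assuming whitespace-tokenized source files with one sentence per line (the file names are just placeholders, not anything OpenNMT expects):

```python
# Reverse each whitespace-tokenized source line before training.
# Assumes plain text, one sentence per line; file names are placeholders.
with open("src-train.txt") as fin, open("src-train.rev.txt", "w") as fout:
    for line in fin:
        tokens = line.split()
        fout.write(" ".join(reversed(tokens)) + "\n")
```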