Research
About the Research category (1)
Low resource languages, infinite training, and back-translation (4)
What is the relationship between ppl, accuracy, and BLEU scores in machine translation? In my experiment, although ppl and accuracy on the validation set are greatly improved, BLEU scores are reduced (1)
Questions on large batch size training with Transformer in low-resource scenario (3)
Is there a paper to cite if OpenNMT is used in sequence mapping tasks other than neural machine translation? (2)
Modifying the gradients during training (3)
Looking for a master's thesis topic (6)
Improvement of performance by data normalization (23)
Using numerical data as input for Encoder (4)
High Rank RNN model as Encoder or Decoder (2)
Opts.py files for the published papers (3)
Beam Search Global Scorer (1)
Combining BPE and sequence tagging (1)
About "A Deep Reinforced Model for Abstractive Summarization" (3)
Output is single word, repeated n times (4)
Why is it hard for a bidirectional model to learn to output its input (4)
How to estimate number of iterations? (2)
Understanding the number of parameters (1)
Pre-Trained English-Korean NMT? (1)
Does CNNEncoder use kernel of (width, 1)? (1)
How to set the gradient clipping value (1)
Influence of sentence length on translation results (1)
Automatic post-edit training of a training (6)
Will pretrained embeddings also be updated during training? (3)
Input vectors can only be used for the source (2)
Use of OpenNMT for recognizing textual entailment (1)
Using OpenNMT for Information Retrieval (4)
ONNX - a standard for neural network representation (1)
Fast CPU decoding (1)
Multilingual source to English (1)