About the Research category
Quality estimation / pred_score
Paragraph vs sentence segmentation
What is the best way to generate a parallel corpus, in your experience?
In a parallel corpus for MT, should each line contain a single sentence or a paragraph?
Metrics (BLEU, ppl, gold ppl, pred …)
About mask in attention layer and decoder
How to create models that translate very quickly
Exposure bias during training
Improve BLEU by Coverage and Context Gate
Use of monolingual corpora in OpenNMT
Using features for domain/client/subject adaptation
Automatic post-edit training of a training
Has anyone experimented with the dense bridge?
Has anyone tested RNNs with larger recurrence?
Training chatbot with multiple inputs (to add context)
Automatic training corpus filtering
Implementing hashing to massively improve performance (by 95%)
Speech-to-text using Convolutional LSTM layers
Sentence length affects perplexity decrease
Simple combination between SMT and NMT
Alternative methods for <UNK> substitution
Importance Sampling - training speed
Weird Output after many attempts to train
Noise Contrastive Estimation for Machine Translation
Early stopping: a fake solution?
Uses for beam search in NMT