Research


About the Research category (1)
How are the word embeddings learned during training? (4)
Performance improvement through data normalization (15)
Basic example: OpenNMT and Moses PT<>ES / CA<>ES BLEU score results (2)
Sentence Embeddings for English (3)
Reproducing "Neural Machine Translation from Simplified Translations" (3)
Replacing word embeddings with an LSTM over character embeddings for rare words in Embeddings.py (1)
About "A Deep Reinforced Model for Abstractive Summarization" (1)
Pruning useless weights (4)
Context-sensitive spell checking (2)
NMT's vocabulary (3)
Is there an efficient reference set (English) for BLEU scoring? (3)
Word-level Distillation (2)
What is the use of monolingual corpora in SMT? (3)
Using OpenNMT for Information Retrieval (3)
Training with out-of-domain data (1)
How does Google's Transformer translate between Chinese and English? (1)
Multiple translations with OpenNMT (2)
Quality estimation / pred_score (4)
Paragraph vs sentence segmentation (3)
What is the best way to generate a parallel corpus, in your experience? (1)
In a parallel corpus for MT, should each line contain a single sentence or a paragraph? (3)
Metrics (BLEU, ppl, gold ppl, pred ...) (8)
About mask in attention layer and decoder (5)
How to create models that translate very quickly (8)
Exposure bias during training (4)
Improving BLEU with coverage and context gates (7)
Use of monolingual corpora in OpenNMT (8)
Using features for domain/client/subject adaptation (7)
Automatic post-edit training of a trained model (4)