OpenNMT Forum


Using OpenNMT for Information Retrieval (4)
ONNX - a standard for neural network representation (1)
Fast CPU decoding (1)
Multilingual source to English (1)
How to use BPE along with copying mechanism? (1)
How are the word embeddings learned during training? (4)
Basic example OpenNMT and Moses PT<>ES CA<>ES BLEU score results (2)
Sentence Embeddings for English (3)
Reproducing "Neural Machine Translation from Simplified Translations" (3)
Replacing word embeddings with an LSTM over character embeddings for rare words in (1)
Pruning useless weights (4)
Context-sensitive spell checking (2)
NMT's vocabulary (3)
Is there any efficient reference set (English) for BLEU scoring? (3)
Word-level Distillation (2)
What is the use of Monolingual Corpora in SMT (3)
Training with out-of-domain data (1)
How does Google's Transformer translate between Chinese and English? (1)
Multiple translations in OpenNMT (2)
Quality estimation / pred_score (4)
Paragraph vs sentence segmentation (3)
What is the best way to generate parallel corpus as per your past experience? (1)
In a parallel corpus, should I keep a single sentence on each line, or a paragraph, for MT? (3)
Metrics (BLEU, ppl, gold ppl, pred, …) (8)
About mask in attention layer and decoder (5)
How to create models that translate very quickly (8)
Exposure bias during training (4)
Improve BLEU by Coverage and Context Gate (7)
Use of monolingual corpora in OpenNMT (8)
Using features for domain/client/subject adaptation (7)