OpenNMT Forum

Research


| Topic | Replies | Activity |
| --- | --- | --- |
| About the Research category | 1 | December 23, 2016 |
| Model "jamming" on words | 3 | May 18, 2020 |
| Sentence meaning score | 2 | May 18, 2020 |
| Curriculum Learning in the Age of Transformers - Parts I-II | 4 | May 14, 2020 |
| Effectiveness of Transformer model on small dataset | 8 | May 12, 2020 |
| Question about the number of epochs | 2 | May 11, 2020 |
| Transformers on low resource corpora | 6 | May 4, 2020 |
| Bad data in WMT14 en-de | 2 | April 23, 2020 |
| Are composite tokens possible? | 2 | April 17, 2020 |
| Which BLEU script to use? | 2 | April 17, 2020 |
| What's the best en-de WMT14 BLEU in onmt? | 2 | April 9, 2020 |
| Multifeature translation question | 19 | March 30, 2020 |
| How can I get some hidden states of the model (such as the self-attention matrix) during translation? | 3 | March 24, 2020 |
| Are you interested in training Russian-Abkhazian parallel corpus? | 27 | March 23, 2020 |
| Quality/Confidence score | 1 | March 11, 2020 |
| Questions about NMT learning | 8 | March 6, 2020 |
| Looking for a master's thesis topic | 7 | March 4, 2020 |
| Multilingual training experiments | 7 | March 3, 2020 |
| In a Transformer model, why does one sum positional encoding to the embedding rather than concatenate it? | 3 | February 17, 2020 |
| CCMatrix: A billion-scale bitext data set for training translation models | 3 | February 12, 2020 |
| Training steps question | 2 | February 10, 2020 |
| Types of generalizations learned by NMT? | 1 | February 10, 2020 |
| Questions about translation length and generation diversity | 3 | January 3, 2020 |
| What is the difference between multi-feature and multi-source? | 3 | December 24, 2019 |
| BLEU score falls if I continue training after a certain epoch | 3 | December 18, 2019 |
| Difference between BPE and character-level tokenization for BLEU score | 3 | December 15, 2019 |
| Should I use all available words for vectorization (word2vec/GloVe)? | 7 | December 13, 2019 |
| Warmup configuration for fine-tuning | 4 | December 11, 2019 |
| BLEU score falls when detokenizing, detruecasing, and reversing subword BPE | 16 | December 11, 2019 |
| Compared patterns | 8 | December 8, 2019 |