OpenNMT Forum

Research


Topic | Replies | Activity
About the Research category | 1 | December 23, 2016
Transformers on low-resource corpora | 2 | April 3, 2020
Multi-feature translation question | 19 | March 30, 2020
How can I get hidden states of the model (such as the self-attention matrix) during translation? | 3 | March 24, 2020
Are you interested in training on a Russian-Abkhazian parallel corpus? | 27 | March 23, 2020
Are composite tokens possible? | 1 | March 21, 2020
Quality/confidence score | 1 | March 11, 2020
Questions about NMT learning | 8 | March 6, 2020
Looking for a master's thesis topic | 7 | March 4, 2020
Multilingual training experiments | 7 | March 3, 2020
In a Transformer model, why is the positional encoding summed with the embedding rather than concatenated? | 3 | February 17, 2020
CCMatrix: A billion-scale bitext data set for training translation models | 3 | February 12, 2020
Training steps question | 2 | February 10, 2020
Types of generalizations learned by NMT? | 1 | February 10, 2020
Questions about translation length and generation diversity | 3 | January 3, 2020
What is the difference between multi-feature and multi-source? | 3 | December 24, 2019
BLEU score falls if I continue training past a certain epoch | 3 | December 18, 2019
Difference between BPE and character-level tokenization for BLEU score | 3 | December 15, 2019
Should I use all available words for vectorization (word2vec/GloVe)? | 7 | December 13, 2019
Warmup configuration for fine-tuning | 4 | December 11, 2019
BLEU score falling after detokenizing, detruecasing, and removing BPE subword segmentation | 16 | December 11, 2019
Compared patterns | 8 | December 8, 2019
Building a French-English dictionary | 3 | December 7, 2019
Explaining the concept | 14 | December 7, 2019
Translate file is generating a pred file containing both languages | 3 | December 6, 2019
NMT limited/fixed vocabulary problem | 19 | December 5, 2019
Words are transformed into numbers. What happens next? | 3 | December 4, 2019
Words become numbers | 3 | December 4, 2019
How do you correct? | 2 | December 2, 2019
Generative Adversarial Networks for NMT | 4 | December 1, 2019