About the Development category (1)
Could I submit a pull request for a tokenization hook for Korean and Japanese? (4)
Multiple tokens in Source to single token in Target (7)
Language Model scorer and sampler (17)
Why exclude last target from inputs? (2)
OpenNMT-py Using Multiple Encoders (2)
Approach to translating one sentence at a time by writing one sentence into a temp file (3)
HDFS support in OpenNMT-tf (2)
How to exclude numbers and URLs from the vocabulary in translation? (4)
Windows + CUDA working with PyTorch! (2)
Simple OpenNMT-py REST server (1)
Choosing number of epochs for a stacked encoder decoder model (5)
How does computing loss in shards help reduce memory cost? (1)
SentencePiece vs. BPE (1)
Custom Loss Function Criterion (5)
New `hook` mechanism (14)
How to ensemble models with OpenNMT (PyTorch)? (2)
How should I choose parameters? (2)
Corpus level TER averaging (3)
How does ensemble decoding work? (4)
OpenNMT: Lua code debugging (2)
Need help understanding copy_attn_force (3)
BPE options handling in learn_bpe.lua and tokenizer.lua (4)
OpenNMT tagger (CUDA -> CPU) release model (3)
Changing the behaviour of `end_epoch` options when used in combination with `train_from` and `continue` options (5)
Word features with idx_files (2)
Attention only models (2)
[Code Understanding] Where are different models 'used' in the source? (6)
How to improve the accuracy of a model? (1)
Can dispatching batches with different src_len degrade performance in synchronous training? (5)