About the Support category
Access the "keys" of data that are in a batch
Vocabs in preprocess.py
Using OpenNMT for Information Retrieval
Perplexity nan at epoch 5
Paraphrase generation with OpenNMT
XML tags handling
Finding perplexity score for existing model
Training stops outputting logs when encoder_type is set to brnn
Error with replace_unk in train.lua
Shared vocabulary for summarization problem
Tok_(src|tgt)_case_feature + DynData training, translate <unk> rather than L or N
Out Of Memory with Dynamic Dataset, LuaJit, preprocess_nthreads > 1
When does the model transform string to int during training?
Why is the value of list(train_iter).src in train.py always changing?
Tok_src_case_feature: how to use it with translation_server?
Can you suggest a paper or video explaining how the OpenNMT approach works and how it calculates accuracy and other metrics?
Problem in translate.py
Epoch cannot finish when resuming training from a checkpoint model twice
Input vector as input and branch encoder
Training uses 100% of RAM and swap memory
Chinese-to-English translation produces many unknown words
Cannot change dynamically option -rnn_size. Ignoring
How to extend an already-trained engine?
RNN size with BLSTM/PDBRNN
How do I start neural machine translation with OpenNMT as a beginner? Are there videos?
How are perplexity and accuracy calculated? Where can I get an explanation?
Word features support for python version
Training a language model
ZeroMQ JSON Unicode Error