Support


About the Support category (1)
DataLossError: Checksum does not match (9)
Early stopping parameters for default transformer config? (3)
Large perplexity (7)
Translating huge number of sentences (3)
Restoring source formatting (14)
OpenNMT-tf procedure for fine-tuning/domain adaptation? (2)
Learning rate not decaying when perplexity stops decreasing on validation set (4)
Question about using pretrained embeddings (4)
"Import pyonmttok" failed & "pip install pyonmttok" faileld - Windows 10 (2)
Receiving a "ValueError: best_eval_result cannot be empty or no loss is found in it." while training the transformer model (3)
OpenNMT-tf fine-tuning base model gives worse and decreasing BLEU scores (9)
Handle numbers, URLs, dates (3)
TransformerBig model - GTX 1080 Ti (11G) - ResourceExhaustedError (9)
Use GloVe with concatenated word features (OpenNMT-py) (1)
Tokenization OpenNMT-tf (4)
Using multiple GPUs in training.py (5)
Translation server in OpenNMT-tf (3)
Perplexity in OpenNMT-tf (2)
Alignment not generated properly (2)
OpenNMT-tf Distributed Training - Processes don't end after training completes (5)
OpenNMT-tf serving with latest prediction_service_pb2_grpc for translation (3)
Hi, I have 130k train src/tgt pairs, but the training log shows only 22k train pairs after preprocessing (5)
OpenNMT-tf toy-ende model scoring and inference clarification (3)
Issues when running the English-German WMT15 training (23)
Issue in using distributed training in OpenNMT-tf (32)
Beam Search with target words features: how does it work? (4)
Generating learning curves using the OpenNMT-py version (3)
OpenNMT-th takes a long time to load training data before training actually starts (3)
Learning rate reduced to 0 after start_decay_steps (4)