OpenNMT Forum

No words predicted at inference

I don’t seem to be the only one having this problem, but I have not been able to find an adequate solution yet.

I use a normal configuration file to train a model for 100 steps and save it successfully; I also get the validation perplexity and accuracy results as expected. However, when I run the translation pipeline, I get no words predicted. Any idea what the issue could be? I tried using -fast and testing different beam sizes, but I still haven’t found a solution.

An example output at translation:

```
SENT 1511: ['When', 'recording', 'captures,', 'the', 'captured', 'piece', 'is', 'named', 'rather', 'than', 'the', 'square', 'on', 'which', 'it', 'is', 'captured', '(except', 'to', 'resolve', 'ambiguities).']

PRED 1511:

PRED SCORE: -3.5077
```

Output from training:

```

[2021-03-10 10:40:02,196 INFO] Validation perplexity: 2112.7

[2021-03-10 10:40:02,196 INFO] Validation accuracy: 2.84034

[2021-03-10 10:40:02,217 INFO] Saving checkpoint test_model.en-ne_step_100.pt
```

These are the commands I use for training and for translating:

```
onmt_train -config cofig-transformer.yml

onmt_translate -model test_model.en-ne_step_100.pt -src nepalidata/test.ne-en.en -output pred.txt -verbose -beam_size 4
```
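For reference, the file passed with `-config` would look roughly like the sketch below. This is a minimal illustration in the OpenNMT-py 2.x YAML format; the data paths and vocabulary files here are assumptions modeled on the paths in the translate command, not taken from the original post, and the step counts are just typical values:

```yaml
# Hypothetical minimal OpenNMT-py config sketch; paths and values are illustrative.
save_model: test_model.en-ne

data:
    corpus_1:
        path_src: nepalidata/train.ne-en.en
        path_tgt: nepalidata/train.ne-en.ne
    valid:
        path_src: nepalidata/valid.ne-en.en
        path_tgt: nepalidata/valid.ne-en.ne

src_vocab: nepalidata/vocab.src
tgt_vocab: nepalidata/vocab.tgt

encoder_type: transformer
decoder_type: transformer

# 100 steps is far too few for a Transformer; tens of thousands is more typical.
train_steps: 100000
valid_steps: 5000
save_checkpoint_steps: 10000
```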

One more thing: I am currently training on CPU, so I am not sure if that could be a problem for translation?

```
[2021-03-10 10:40:02,196 INFO] Validation perplexity: 2112.7
[2021-03-10 10:40:02,196 INFO] Validation accuracy: 2.84034
[2021-03-10 10:40:02,217 INFO] Saving checkpoint test_model.en-ne_step_100.pt
```

100 steps is very (very) little training: the perplexity is very high and the accuracy is very low → your model hasn’t learned the task properly.

You may try setting a min_length to “force” the model to decode some words, but the results won’t be good.
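Concretely, that would mean adding the `-min_length` flag to the translate command from the post (the value 1 here is just an example forcing at least one token per prediction):

```
onmt_translate -model test_model.en-ne_step_100.pt \
    -src nepalidata/test.ne-en.en -output pred.txt \
    -verbose -beam_size 4 -min_length 1
```

Even with this, the predictions will mostly be noise until the model has trained for far more steps.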

You need to train your model more.