Strange results from the quickstart model

Hello,
I just trained my model using the Quickstart guide and https://s3.amazonaws.com/opennmt-trainingdata/toy-ende.tar.gz sentences.

When I translated test.txt with this command:
onmt_translate -model toy-ende/run/model_step_1000.pt -src toy-ende/test.txt -output toy-ende/test_output_1000.txt -verbose

test.txt:
"Parliament Does Not Support Amendment Freeing Tymoshenko"

I got:
[2021-03-28 12:53:11,882 INFO] Translating shard 0.
[2021-03-28 12:53:12,212 INFO]
SENT 1: ['Parliament', 'Does', 'Not', 'Support', 'Amendment', 'Freeing', 'Tymoshenko']
PRED 1: Das ist es nicht auf .
PRED SCORE: -14.4420

Is this normal?

Thanks

As stated in the note at the end of the quickstart:

The predictions are going to be quite terrible, as the demo dataset is small. Try running on some larger datasets! For example you can download millions of parallel sentences for translation or summarization.
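To give an idea of what that looks like in practice: you would reuse the same quickstart YAML config, just pointed at your larger corpus and with the step counts raised. The paths, corpus name, and numbers below are only placeholders for illustration, not values from the quickstart:

# bigger_en_de.yaml (placeholder name)
save_data: bigger-ende/run/example
src_vocab: bigger-ende/run/example.vocab.src
tgt_vocab: bigger-ende/run/example.vocab.tgt
data:
    corpus_1:
        path_src: bigger-ende/src-train.txt   # your larger parallel training data
        path_tgt: bigger-ende/tgt-train.txt
    valid:
        path_src: bigger-ende/src-val.txt
        path_tgt: bigger-ende/tgt-val.txt
save_model: bigger-ende/run/model
train_steps: 100000   # the toy run stops at 1000 steps, which is far too few
valid_steps: 10000

Then build the vocab and train as in the quickstart:

onmt_build_vocab -config bigger_en_de.yaml -n_sample 10000
onmt_train -config bigger_en_de.yaml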

Thanks, I had to make sure I didn’t do something wrong :slight_smile: