Using pretrained En-De model?

I am translating with the pretrained En-De model using the following command:

python translate.py -model …/opennmt-pretrained/ -src data/train-context.txt -output data-translated/train-context-de.txt -replace_unk -verbose

However, the generated translations look garbled. Am I using the wrong command, or missing a parameter needed to generate the translations? Any advice?

Did you preprocess the source data with the SentencePiece model included in the model archive? The pretrained models expect SentencePiece-tokenized input, and the output must be detokenized afterwards. This has been discussed many times on the forum.
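As a minimal sketch of that preprocessing step: the function below tokenizes a source file with a trained SentencePiece model and writes space-joined pieces, which is the input format OpenNMT-py's translate script expects. The model filename (`sentencepiece.model`), the file paths, and the helper name are assumptions, not part of the archive's documented layout; you also need `pip install sentencepiece`.

```python
def sp_encode_file(model_path, src_path, out_path):
    """Tokenize src_path with a trained SentencePiece model and write
    one line of space-joined subword pieces per input sentence."""
    import sentencepiece as spm  # pip install sentencepiece

    sp = spm.SentencePieceProcessor(model_file=model_path)
    with open(src_path, encoding="utf-8") as fin, \
         open(out_path, "w", encoding="utf-8") as fout:
        for line in fin:
            # out_type=str returns subword pieces instead of ids
            pieces = sp.encode(line.rstrip("\n"), out_type=str)
            fout.write(" ".join(pieces) + "\n")


# Hypothetical usage, assuming the .model file shipped in the archive:
# sp_encode_file("sentencepiece.model",
#                "data/train-context.txt",
#                "data/train-context.sp.txt")
```

After translating the tokenized file, apply the reverse step (`sp.decode` on the output pieces) to get readable German text back.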