Preprocessing for the German-English BiLSTM pretrained model

I am trying to use the 2-layer BiLSTM pretrained model provided here: http://opennmt.net/Models-py/
As I understand it, the Transformer model uses a SentencePiece encoder, so we need to preprocess our dataset before using the English-German model. However, no such preprocessing is mentioned for the BiLSTM.
My results are really bad when I use it directly. Can someone please tell me where I am going wrong?

Input sentence: eine entzückende romantische Komödie mit viel Biss .
Output sentence: entzückende romantic Komödie with a lot of Biss .

Command: onmt_translate -model iwslt-brnn2.s131_acc_62.71_ppl_7.74_e20.pt -src data/src-test.txt -output pred.txt -replace_unk

You should check the preprocessing applied in the IWSLT'14 data preparation script: the training data is tokenized and lowercased before training, so your test sentences need the same treatment. Your input still has capitalized words (Komödie, Biss) and the period glued to the last word the way raw text usually does, which is why so many tokens come out as unknowns.
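As a rough illustration of what that preprocessing does, here is a minimal Python sketch that lowercases and splits punctuation off tokens. This is only a stand-in for the real Moses tokenizer used in the IWSLT'14 pipeline (the `preprocess` function and the regex are my own simplification, not part of OpenNMT); for actual use you should run the same tokenizer script the model was trained with.

```python
import re

def preprocess(line):
    # Rough approximation of the IWSLT'14 preprocessing:
    # lowercase everything and separate punctuation into its own token.
    # The real pipeline uses the Moses tokenizer; this is only a sketch.
    line = line.strip().lower()
    line = re.sub(r'([.,!?;:])', r' \1 ', line)
    return ' '.join(line.split())

print(preprocess("Eine entzückende romantische Komödie mit viel Biss."))
# -> eine entzückende romantische komödie mit viel biss .
```

Applying this (or, better, the original tokenizer script) to `src-test.txt` before calling `onmt_translate` should bring the input distribution much closer to what the model saw during training.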