Translate script is generating a pred file containing both languages

Greetings,
I am training a seq2seq model to translate English words to German. The problem I am facing at present is that the prediction file it generates contains both English and German text.
The source (English) and target (German) files are well preprocessed.
Can anyone help me with this?

Is it normal to get mixed-language predictions?

The REST server used in the Lua Torch version of OpenNMT (if that is what you are using) returns a JSON response containing both the source and the target. You will need a client that parses the JSON and extracts the target.
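
For reference, a minimal client sketch along those lines is below. The endpoint URL, port, and the `src`/`tgt` field names are assumptions based on a default server setup and may differ for your configuration, so check the response format your server actually returns.

```python
import json
import urllib.request

# Assumed default endpoint of the REST translation server; adjust to your setup.
SERVER_URL = "http://localhost:7784/translator/translate"

def translate(sentence):
    # The server is assumed to expect a JSON array of {"src": ...} objects.
    payload = json.dumps([{"src": sentence}]).encode("utf-8")
    request = urllib.request.Request(
        SERVER_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        results = json.loads(response.read().decode("utf-8"))
    # Each hypothesis carries both source and target; keep only the target text.
    return [hyp["tgt"] for hyp in results[0]]

if __name__ == "__main__":
    print(translate("Hello world ."))
```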

Thank you, sir, for the help. I found that the translate script was generating a mix of the languages because the default vocab size is very small; once I increased my vocab size it performed well. Thanks for your help anyway. :slight_smile:
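
For anyone hitting the same issue, a quick sanity check is to measure how many training tokens fall outside the top-N most frequent words before settling on a vocab size. This is a minimal sketch, assuming whitespace-tokenized training files; the file names and the 50000 cutoff are placeholders, not values from the thread.

```python
from collections import Counter

def oov_rate(path, vocab_size):
    # Count token frequencies in the training file (whitespace tokenization assumed).
    counts = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            counts.update(line.split())
    # Keep only the vocab_size most frequent words, as preprocessing would.
    vocab = {word for word, _ in counts.most_common(vocab_size)}
    total = sum(counts.values())
    oov = sum(freq for word, freq in counts.items() if word not in vocab)
    return oov / total if total else 0.0

if __name__ == "__main__":
    # A high OOV rate means many words map to <unk>, which often shows up as
    # copied or mixed-language tokens in the predictions.
    print("source OOV rate:", oov_rate("train.en", 50000))
    print("target OOV rate:", oov_rate("train.de", 50000))
```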