Output is a single word, repeated n times

I am trying to understand why the result of training is a model that just repeats the same word once it shows up in the output, e.g. w1 w2 w3 w3 w3 w3 w3 w3 w3.

From what I can see, there is nothing special about the frequency of the repeated word, but maybe I am missing something.
Do you have any experience with this type of problem? Any advice on how to catch it early, to avoid wasting time training through all the iterations?
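For reference, this is the kind of early check I was imagining: a rough sketch (the file name and the 0.5 threshold are just placeholders I picked) that scans a periodic dev-set decode and counts hypotheses where one token dominates the sentence.

```python
from collections import Counter

# Placeholder path: a dev-set decode produced at a given checkpoint.
HYP_FILE = "dev.hyp.txt"
# If a single token accounts for more than this fraction of a hypothesis,
# flag it as degenerate repetition. The 0.5 threshold is a guess.
REPEAT_THRESHOLD = 0.5

def is_degenerate(tokens, threshold=REPEAT_THRESHOLD):
    """Return True if one token dominates the hypothesis."""
    if len(tokens) < 4:  # very short outputs are uninformative
        return False
    top_count = Counter(tokens).most_common(1)[0][1]
    return top_count / len(tokens) > threshold

with open(HYP_FILE, encoding="utf-8") as f:
    hyps = [line.split() for line in f]

flagged = sum(is_degenerate(h) for h in hyps)
print(f"{flagged}/{len(hyps)} hypotheses look degenerate")
```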

Thanks.

In my experience, you should first look at the inputs that produce such output. They are usually a type of sentence not found in the training data (out of domain, preprocessing mismatch, uncommon length, etc.).
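For example, a quick sketch of such a check (the paths are placeholders, and it assumes the training source side and the input are both pretokenized): compare the suspicious input against the training data on length and vocabulary coverage.

```python
from collections import Counter

TRAIN_SRC = "train.src.txt"  # placeholder: tokenized training source side
SUSPECT = "replace with the failing source sentence, pretokenized"

# Collect training-side statistics: token vocabulary and sentence lengths.
vocab = Counter()
lengths = []
with open(TRAIN_SRC, encoding="utf-8") as f:
    for line in f:
        tokens = line.split()
        vocab.update(tokens)
        lengths.append(len(tokens))

tokens = SUSPECT.split()
oov = [t for t in tokens if t not in vocab]
print(f"input length: {len(tokens)} "
      f"(train mean: {sum(lengths) / len(lengths):.1f}, max: {max(lengths)})")
print(f"tokens unseen in training data: {oov}")
```

If the input sits far outside the training length distribution, or contains many unseen tokens, that points at a domain or preprocessing mismatch rather than a training bug.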


I forgot to say: interestingly, every input produces such an output.

An issue in the training corpus preparation is the most likely reason. It is generally hard to understand why a particular output is generated, but as mentioned by @guillaumekln, 99% of the time the issue can be traced back to some anomaly in the training corpus.
Just to make sure there is nothing wrong with your setup, you can also train on a small toy pretokenized corpus (10k sentences) with another language pair; if the same issue appears, then something is wrong with your code.
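As a first pass over the corpus, something like this rough sketch (the paths are placeholders) can surface the usual anomalies: mismatched line counts, empty sides, and extreme length ratios between source and target.

```python
SRC_FILE = "train.src.txt"  # placeholder paths for the parallel corpus
TGT_FILE = "train.tgt.txt"

with open(SRC_FILE, encoding="utf-8") as f:
    src = f.read().splitlines()
with open(TGT_FILE, encoding="utf-8") as f:
    tgt = f.read().splitlines()

# A mismatched line count means source and target are no longer aligned.
if len(src) != len(tgt):
    print(f"line count mismatch: {len(src)} vs {len(tgt)}")

for i, (s, t) in enumerate(zip(src, tgt), start=1):
    ns, nt = len(s.split()), len(t.split())
    if ns == 0 or nt == 0:
        print(f"line {i}: empty side")
    elif max(ns, nt) / min(ns, nt) > 3:  # arbitrary ratio cutoff
        print(f"line {i}: suspicious length ratio {ns}:{nt}")
```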
