Using trained OpenNMT models for scoring translations

Hi, may I ask if it is possible to use a trained OpenNMT model just for scoring translation output (i.e., no beam search, since the decoded output is already provided)? I’d like to use the model for re-ranking other translation output.



I think you are referring to this:

Thanks! I understand that we can pass the -tgt argument. In this case, is it possible to supply multiple target sentences for each source sentence? Is there a specific format for this? For each source input, I have more than one hypothesis that I want to score.

You could just duplicate the source sentences in the test file.
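To illustrate the suggestion above, here is a small sketch that repeats each source sentence once per hypothesis and flattens the hypotheses into a parallel list, so the two files stay aligned line by line. The function name and the input structure are my own for illustration; they are not part of OpenNMT.

```python
def expand_sources(src_lines, hyps_per_source):
    """Repeat each source line once per hypothesis so that the expanded
    source file and the flattened hypothesis file align line by line."""
    expanded_src, flat_tgt = [], []
    for src, hyps in zip(src_lines, hyps_per_source):
        expanded_src.extend([src] * len(hyps))  # duplicate the source
        flat_tgt.extend(hyps)                   # flatten the hypotheses
    return expanded_src, flat_tgt

# Example: one source sentence with two candidate translations.
src = ["das ist ein Test"]
hyps = [["this is a test", "that is a test"]]
exp_src, flat_tgt = expand_sources(src, hyps)
print(exp_src)   # → ['das ist ein Test', 'das ist ein Test']
print(flat_tgt)  # → ['this is a test', 'that is a test']
```

You would then write `exp_src` and `flat_tgt` to files and pass them as the source and -tgt inputs for scoring.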

Ok, got it. Thanks for the quick reply!

Hey @raymondhs, were you able to assign a quality score to each translation?

I’m using the log-probability of the translation, which is indicated by GOLD SCORE in the translation log.
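For completeness, here is a minimal sketch of pulling those scores back out of the log for re-ranking. It assumes the log contains one line per sentence of the form `GOLD SCORE: <float>`; the exact wording can differ between OpenNMT versions, so adjust the pattern to match your output.

```python
import re

def extract_gold_scores(log_text):
    """Collect the per-sentence GOLD SCORE values (log-probabilities)
    from a translation log, in the order they appear."""
    # Assumed log line format: "GOLD SCORE: -3.2517" (verify against
    # your OpenNMT version's verbose output).
    return [float(s) for s in
            re.findall(r"GOLD SCORE:\s*(-?\d+(?:\.\d+)?)", log_text)]

log = "SENT 1: ...\nGOLD SCORE: -3.2517\nSENT 2: ...\nGOLD SCORE: -10.04\n"
print(extract_gold_scores(log))  # → [-3.2517, -10.04]
```

With the duplicated-source setup, the scores come back in the same order as the hypotheses, so you can group them per source sentence and pick the highest-scoring candidate.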