Larger-scale monolingual resource for target language

Hello again :slight_smile:

I am not sure if this is feasible or not so feel free to close this thread if not relevant.

I am wondering if, instead of doing n-best list rescoring, it would be possible to interpolate the output scores/weights with another RNNLM trained on a much larger monolingual dataset, thereby improving the output likelihoods.

Similar approaches have been used in SMT systems, of course (e.g. log-linear interpolation of language model scores).
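For illustration, the interpolation described above can be sketched as a simple log-linear combination at each decoding step (often called "shallow fusion"). This is a minimal, hypothetical sketch, not the project's actual API: the function name, the `lm_weight` parameter, and the toy score vectors are all made up to show the idea of adding a weighted external-LM log-probability to the translation model's scores before picking the next token.

```python
import math

def interpolated_log_probs(tm_log_probs, lm_log_probs, lm_weight=0.3):
    """Log-linear interpolation: add a weighted external-LM score to the
    translation model's per-token log-probabilities at one decoding step.
    `lm_weight` is a hypothetical tuning parameter (would be set on dev data).
    """
    return [tm + lm_weight * lm for tm, lm in zip(tm_log_probs, lm_log_probs)]

# Toy example: per-token log-probs over a 3-word vocabulary at one step.
tm = [math.log(0.7), math.log(0.2), math.log(0.1)]  # translation model
lm = [math.log(0.1), math.log(0.6), math.log(0.3)]  # external RNNLM
combined = interpolated_log_probs(tm, lm, lm_weight=0.5)
best = max(range(len(combined)), key=combined.__getitem__)
```

In a beam search decoder the combined scores would then be used to rank hypothesis extensions, so the external LM influences the search itself rather than only rescoring a fixed n-best list afterwards.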


Yes. This can be done and will be easier to do with the new beam search code. Let’s revisit once that is landed.
