Is it possible with OpenNMT to build a model that ingests the same text translated into 3-4 languages and then translates it into a specific one?
The way I see it, the other languages would be optional, but they would bring additional information about how to translate the text. I believe this would help a lot with context and feminine/masculine/plural agreement.
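For what it's worth, one way I could imagine approximating this without a true multi-source model is to concatenate whatever language versions are available into a single tagged source line, so a standard single-encoder model still works and missing languages are simply absent. A minimal sketch of that preprocessing idea (the `<sep>` token and language tags are my own invention, not an OpenNMT convention):

```python
# Sketch: emulate multi-source input with a standard single-encoder
# seq2seq model by joining the available translations of a segment
# into one tagged source line. Tag tokens here are illustrative and
# would need to be added to the source vocabulary.

SEP = " <sep> "  # hypothetical separator token

def build_source_line(versions: dict) -> str:
    """Join the available language versions, each prefixed with a
    language tag; languages with no text are simply skipped."""
    parts = [f"<{lang}> {text}" for lang, text in versions.items() if text]
    return SEP.join(parts)

# Example segment available in three languages:
segment = {
    "en": "The nurse finished her shift.",
    "fr": "L'infirmière a terminé son service.",
    "es": "La enfermera terminó su turno.",
}
print(build_source_line(segment))
# <en> The nurse ... <sep> <fr> L'infirmière ... <sep> <es> La enfermera ...
```

Here the French and Spanish versions disambiguate the gender of "nurse", which is exactly the kind of extra signal I'm hoping for.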
I saw a presentation from Omniscien Technologies where they say low-resource languages benefit from training a model that translates into many languages at the same time. That is really interesting, but once again I'm not sure if we can do that with OpenNMT?
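If I understand the multilingual idea correctly, it's usually done by prepending a target-language token to each source sentence and mixing the corpora, in the style of Google's multilingual NMT paper (Johnson et al., 2017), so one model shares parameters across directions. A rough sketch of that data preparation, assuming plain parallel text files (the file names and `<2xx>` tokens are placeholders I made up):

```python
# Sketch: multilingual training data prep in the Johnson et al. (2017)
# style. A target-language token is prepended to each source sentence
# so a single model learns several directions at once.

def tag_corpus(src_path, tgt_path, tgt_lang):
    """Yield (tagged_source, target) pairs for one language direction."""
    with open(src_path, encoding="utf-8") as src, \
         open(tgt_path, encoding="utf-8") as tgt:
        for s, t in zip(src, tgt):
            yield f"<2{tgt_lang}> {s.strip()}", t.strip()

# Mix a low-resource direction (English->Tagalog) with richer ones
# (English->Indonesian/Spanish) so the directions share parameters.
directions = [
    ("train.en-tl.en", "train.en-tl.tl", "tl"),
    ("train.en-id.en", "train.en-id.id", "id"),
    ("train.en-es.en", "train.en-es.es", "es"),
]

with open("train.multi.src", "w", encoding="utf-8") as out_src, \
     open("train.multi.tgt", "w", encoding="utf-8") as out_tgt:
    for src_path, tgt_path, lang in directions:
        for s, t in tag_corpus(src_path, tgt_path, lang):
            out_src.write(s + "\n")
            out_tgt.write(t + "\n")
```

The combined files could then be fed to an ordinary OpenNMT training run; the toolkit doesn't need to know the data is multilingual.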
If you're interested in the video, have a look at this link and start watching at 41 min 30 sec: omniscien technology
One of the presenters at the Omniscien webinar told me afterwards in a private communication that they had trained an “into-Tagalog” model using English source data and Indonesian/Spanish/Tagalog target data. He provided no further evidence and I have not followed this up.