I would like to try modifying the decoder to accept partially translated inputs and let it ‘complete’ the translation using what’s provided. This should also be helpful for human-in-the-loop machine translation.
For example, for FR-EN,
If I have “Bonjour, je suis James.”, a FR-EN model might translate it as “Hello, I’m James.”
However, I might want a more casual translation and have it be “Hi, I’m James.” (Ignore the translation accuracy; it’s just an example for illustration.)
For Transformers, phrase tables don’t seem to work well: even when the model is trained with alignment, attention tends to span multiple subwords or even whole words, so selecting the token with the maximum attention often does a poor job. You can’t simply look up the source terms from a phrase table and replace the aligned target terms with the defined values.
I’m wondering if I could include the phrase table (Bonjour|||Hi) into the decoding process.
So the model would see as src:

Bonjour. Bonjour, je suis James

and in the decoding process, we insert “Hi.” right after the <bos> token and let the decoder carry on decoding. Assuming it sees “Hi”, the attention on “Hi” might be ‘enough’ to nudge the translation of “Bonjour” to “Hi” instead of “Hello”. Of course, this is just a simple example.
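To make the idea concrete, here is a minimal sketch of prefix-forced decoding. The “model” is a toy stand-in function (not a real Transformer; the canned continuations are hypothetical behaviour), so only the control flow around the decoder loop matters: the target-side phrase from the table is forced as the first tokens after <bos>, fed back into the decoder history, and free decoding continues from there.

```python
def forced_decode(step_fn, src_tokens, forced_prefix, max_len=20):
    """Greedy decoding with a forced target prefix.

    step_fn(src_tokens, tgt_so_far) -> next token; stands in for one
    decoder step of a real seq2seq model.
    """
    tgt = ["<bos>"]
    out = []
    # Phase 1: force the phrase-table tokens, still appending them to the
    # decoder history so later free steps condition on them via attention.
    for tok in forced_prefix:
        tgt.append(tok)
        out.append(tok)
    # Phase 2: free greedy decoding until <eos> or max_len.
    while len(out) < max_len:
        tok = step_fn(src_tokens, tgt)
        if tok == "<eos>":
            break
        tgt.append(tok)
        out.append(tok)
    return out

def toy_step(src_tokens, tgt_so_far):
    """Toy decoder step; ignores src_tokens, unlike a real model.

    Hypothetical behaviour: having "Hi" in its own history nudges the
    model toward the casual continuation; otherwise it produces "Hello".
    """
    if "Hi" in tgt_so_far:
        full = ["Hi", ",", "I'm", "James", ".", "<eos>"]
    else:
        full = ["Hello", ",", "I'm", "James", ".", "<eos>"]
    # Next token is the position just past what's generated (minus <bos>).
    return full[len(tgt_so_far) - 1]

src = ["Bonjour", ",", "je", "suis", "James", "."]
print(forced_decode(toy_step, src, ["Hi", ","]))  # ['Hi', ',', "I'm", 'James', '.']
print(forced_decode(toy_step, src, []))           # ['Hello', ',', "I'm", 'James', '.']
```

In a real encoder–decoder setup, the same effect is what passing a non-empty decoder prefix to the generation loop would give you; the key design point is that the forced tokens enter the decoder’s self-attention history rather than being pasted into the output afterwards.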
It’s an attempt to make translation more consistent in document-level translation. Instead of a particular term being translated into various synonyms, or a name into variant spellings (Madalene, Madeline, etc.) because of a lack of fine-tuning data (a phrase table might only contain a few lines), this should help boost consistency.
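As a crude baseline for the name-consistency part specifically, a post-editing pass over the decoded output can also enforce one canonical spelling. This is a different technique from biasing the decoder (it edits text after the fact, so it can’t fix grammar around the substitution), but it’s a useful point of comparison; the glossary contents here are hypothetical.

```python
import re

def normalize_names(text, glossary):
    """Map known variant spellings to a single canonical form.

    glossary: dict of variant -> canonical name. Word boundaries keep the
    substitution from firing inside longer words.
    """
    for variant, canonical in glossary.items():
        text = re.sub(r"\b" + re.escape(variant) + r"\b", canonical, text)
    return text

glossary = {"Madalene": "Madeline"}
print(normalize_names("Madalene arrived. Later, Madalene left.", glossary))
# Madeline arrived. Later, Madeline left.
```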
Any ideas on how I could go about doing this?