Guided translation adjustment based on a custom model

Hello,

I’ve been facing the challenge of combining existing “external” models such as DeepL with my custom model trained on a low-resource dataset. The goal is mainly to fix in-domain words and punctuation.

So I gave it a go this weekend and built a POC (proof of concept), and so far the results are far beyond what I originally thought.

The technique I’m about to describe is based on this idea:
When scoring a translation with CTranslate2, an extremely low prediction score for a token can mean either that the model was not exposed to enough data to learn how to translate that specific token, or that the model knows it really well and there is a mistake in the translation.

So I decided to identify those very low prediction scores and then check, with a full translation from the model, whether the model was confident at that same spot. This lets me categorize a very low prediction score as either a mistake (or an out-of-domain word) or just a lack of exposure in my custom-trained model.

Here is what I did:

  1. Translate the source with DeepL.
  2. Score the result with my custom model (CTranslate2).
  3. Combine the tokens so that you have a list of full words, keeping the worst token prediction score for each one of them. (ex: the tokens [“Phi”, “lo”, “so”, “phy”] with the prediction scores [-0.01, -0.02, -5.2, -0.02] would become [“Philosophy”] with [-5.2])
  4. Find the position of the first word with a bad prediction score in your string.
  5. Use CTranslate2 to translate with your custom model while providing the already-translated beginning of the sentence as a target prefix (up to the position of the bad word), so CTranslate2 is asked to complete the sentence. (Add an extra step here to call the scorer again to get the prediction scores of the new tokens.)
  6. Apply some logic to compare the first badly scored word from the original translation against the word the custom model put at that spot.
  7. Fix the original translation tokens by swapping in the custom model’s word when the original score is below the “bad” threshold and the custom model’s prediction score is excellent.
  8. From here, iterate steps 4 to 7, going through every word with a prediction score lower than the threshold you identified as “bad”. (A rough code sketch of this loop follows the list.)
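To make the loop concrete, here is a minimal sketch of steps 2 to 8 using the CTranslate2 Python API. The model paths, the SentencePiece tokenizer (with its “▁” word markers), the helper names, and both threshold values are placeholders of mine, not the exact code I run:

```python
import ctranslate2
import sentencepiece as spm

# Placeholder paths: point these at your own converted model and tokenizer.
translator = ctranslate2.Translator("custom_model_ct2", device="cpu")
sp = spm.SentencePieceProcessor(model_file="spm.model")

BAD = -2.0    # below this, a word is a "potential issue" (tune on your data)
GOOD = -0.5   # above this, the custom model is considered "certain"

def merge_subwords(tokens, log_probs):
    """Step 3: merge SentencePiece subwords into words, keeping the worst
    token score per word plus the (start, end) token span of each word.
    zip() also safely ignores a possible trailing end-of-sentence score."""
    words, scores, spans = [], [], []
    for i, (tok, lp) in enumerate(zip(tokens, log_probs)):
        if tok.startswith("▁") or not words:
            words.append(tok.lstrip("▁"))
            scores.append(lp)
            spans.append((i, i + 1))
        else:
            words[-1] += tok
            scores[-1] = min(scores[-1], lp)
            spans[-1] = (spans[-1][0], i + 1)
    return words, scores, spans

def refine(source, external_translation, max_rounds=10):
    src = sp.encode(source, out_type=str)
    tgt = sp.encode(external_translation, out_type=str)
    seen = set()  # word positions already inspected

    for _ in range(max_rounds):
        # Step 2: score the current translation with the custom model.
        log_probs = translator.score_batch([src], [tgt])[0].log_probs
        words, scores, spans = merge_subwords(tgt, log_probs)

        # Steps 4 and 8: next uninspected word below the "bad" threshold.
        bad = next((i for i, s in enumerate(scores)
                    if s < BAD and i not in seen), None)
        if bad is None:
            break
        seen.add(bad)
        start, end = spans[bad]

        # Step 5: ask the custom model to complete the sentence from the
        # trusted prefix, then re-score to get token-level confidences.
        completion = translator.translate_batch(
            [src], target_prefix=[tgt[:start]]
        )[0].hypotheses[0]
        comp_lp = translator.score_batch([src], [completion])[0].log_probs
        c_words, c_scores, c_spans = merge_subwords(completion, comp_lp)

        # Steps 6-7: single-word swap, only when the custom model is
        # confident ("excellent") about the word it puts at that spot.
        for cw, (cs, ce) in enumerate(c_spans):
            if cs == start:
                if c_scores[cw] > GOOD and c_words[cw] != words[bad]:
                    tgt = tgt[:start] + completion[cs:ce] + tgt[end:]
                break

    return sp.decode(tgt)
```

Since one word is always swapped for one word, the word indices stay stable between rounds, which is what makes the simple `seen` set enough to avoid looping on the same spot.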

So you have two parameters you can adjust (see the snippet after this list).

  • threshold for tokens that you consider potential issues (applied when scoring the original translation)
  • threshold for tokens you consider excellent (applied to your custom model’s completion)
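In code, those two thresholds boil down to a single decision per word; the values below are just illustrative and should be tuned on your own data:

```python
BAD = -2.0    # applied when scoring the original (DeepL) translation
GOOD = -0.5   # applied to the custom model's completion

def should_swap(original_word_score, custom_word_score):
    # Swap only when the external translation looks suspicious AND
    # the custom model is confident about its own word at that spot.
    return original_word_score < BAD and custom_word_score > GOOD
```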

At this time I have only implemented the logic for a single-word swap, but I’m going to experiment with supporting multi-word swaps.

Here is a quick screenshot of a few sentences that went through that process:

In this screenshot, all the differences were perfect fixes.

CON of this technique:
The compute time. Calling DeepL takes 3 seconds; all the extra work takes an additional 15 seconds. This can be trimmed down with some parallelization, and right now I have to translate and then call the scorer to get the token prediction scores, which I could potentially get from the translate call itself…
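For what it’s worth, one easy win, assuming the CTranslate2 Python API, is to batch the scoring calls instead of going sentence by sentence (`all_source_tokens` and `all_target_tokens` below are placeholder lists of token lists):

```python
# Score many sentences per call instead of one at a time;
# inter_threads lets CTranslate2 process batch items in parallel.
translator = ctranslate2.Translator(
    "custom_model_ct2", device="cpu", inter_threads=4
)
results = translator.score_batch(all_source_tokens, all_target_tokens)
```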

PRO of this technique:

  • This means you can “in-domainize” text from any provider without having to do additional training or complicated data augmentation with external data for your model.
  • You can potentially identify the spots where the model is not so sure and provide suggestions just for those spots.

If anyone has additional questions/suggestions for me, let me know. I will post the results later based on BLEU, WER, hLEPOR and METEOR scores (probably in two weeks).

Best regards,
Samuel

Dear Samuel,

That is a very interesting approach! Thanks for sharing!

You might be interested in this paper. The authors propose a twofold approach: identifying difficult words (those with a high prediction loss) and sampling to increase the occurrences of these words; and identifying the contexts where these words are difficult to predict and sampling sentences similar to those difficult contexts.

As you said, the process is not very efficient. However, it might improve the quality of back-translation, so you could improve new systems by adding high-quality synthetic data.

Thank you, Yasmin,

Personally, I will be using this to refine all the external translation tools I use. I will also leverage it to identify and prompt suggestions to translators for potential typos/mistakes, or for words that are not consistent with our previous translations.

I believe that with just a few more lines of code I could also provide suggestions at critical points (like DeepL provides in its UI).

Best regards,
Samuel
