How to extend an already-trained engine?

Hello ONMT team,

Say, I have trained an engine. How can I extend this existing engine with more parallel text that I got or collected through subsequent translation cycles or even post-editing? Is there a way for a real-time extension of an existing engine? Or do we need to train from scratch? Would the ‘-continue’ training feature be of good use in this case?


Hello @mzeid,

you would use -train_from to start from an existing model. Some advice:

  • keep a model that has not completely finished its training cycle (around epoch 10) and iterate for 2-3 more epochs
  • do not feed purely new data: mix the new data with some of the corpus you used to train the initial engine, or you risk having your model forget its initial training
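To illustrate the second point, here is a minimal sketch of mixing new in-domain lines with a sample of the original corpus before retraining. The file names and the `old_ratio` parameter are hypothetical, just for illustration; note that for parallel data you would apply the same permutation to the source and target files so line alignment is preserved.

```python
import random

def mix_corpora(new_lines, old_lines, old_ratio=1.0, seed=42):
    """Mix new in-domain lines with a sample of the original corpus.

    old_ratio: how many old lines to keep per new line (1.0 = equal parts).
    """
    rng = random.Random(seed)
    # Sample from the original corpus so the model does not forget it.
    n_old = min(len(old_lines), int(len(new_lines) * old_ratio))
    sample = rng.sample(old_lines, n_old)
    mixed = list(new_lines) + sample
    rng.shuffle(mixed)
    return mixed
```

You would write the mixed lines back to a training file, re-run preprocessing, and then continue training with -train_from.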

and there is nothing better than experimenting with multiple configurations to find your own recipe :slight_smile:
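For reference, a continued-training run along the lines above might look like this. The checkpoint name is hypothetical, and flag names other than -train_from may differ across OpenNMT versions, so check `th train.lua -h` for yours:

```
# Fine-tune an existing checkpoint (saved around epoch 10) on the mixed data
th train.lua -data mixed-train.t7 \
             -train_from model_epoch10.t7 \
             -save_model model_tuned
```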

Keep us updated,


Thank you so much for your valuable advice, Jean! It makes total sense. I appreciate it.


You can also have a look at this paper


In the above paper, you mention that you simply continued training with more in-domain data, something that I also plan to do. Did you also follow the approach described above in the paper (that is, step back a few epochs and mix new data with existing), or did you just continue training from the last epoch with only new in-domain data? I have only skimmed the paper --I will read it more carefully later-- so apologies if the answer is already there :slight_smile:




In the paper we explore multiple approaches, but we mostly focused on continuing with only in-domain data; since then, we have quite systematically observed better results by progressively mixing in-domain and generic data.