I’ve searched this forum for incremental training with OpenNMT, but I can’t find a clear solution. The scenario is the following:
I trained a model for about 30 epochs on the available training data, which gave me quite good results in the translation process.
After a while, I got a new batch of training examples.
Now I want to add this knowledge to the model without retraining the whole model from scratch. According to the documentation, to continue the training process I need the following command:
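Something along these lines, if I’m reading the retraining docs correctly (the model and data file names below are just placeholders from my setup):

```
# Continue training, initializing from an existing checkpoint
# (placeholder file names)
th train.lua -data data/demo-train.t7 \
             -save_model demo-incremental \
             -train_from demo-model_epoch30.t7
```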
Should I just point the -data option at my new training data? Or do I need to merge all the available data (old and new) and pass that to the model? With the first option, wouldn’t the model risk forgetting its previous knowledge? Should I use the -continue option?
By the way, I applied BPE preprocessing, so I’m not expecting any problems with OOV words.
It is up to you to decide whether to use only your new training data or to merge it with the old data.
If you just use your new data, you will adapt/specialize your translation model to the new data semantics.
If you use the merged data, you will adapt/specialize your model towards all your available data. In this case, be careful with the data proportions: if your new data is much smaller than the old data, it may have little impact on the final model.
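If you do merge and the new corpus is much smaller, one crude but simple way to rebalance is to oversample the new data when you build the merged training files, before running the usual preprocessing. The file names here are just placeholders:

```
# Repeat the new data a few times so it carries more weight in the merge
# (placeholder file names; adjust the repetition factor to taste).
cat old-train.src new-train.src new-train.src new-train.src > merged-train.src
cat old-train.tgt new-train.tgt new-train.tgt new-train.tgt > merged-train.tgt
```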
In any case, the model won’t forget its previous knowledge; it will adapt that knowledge to the new data you provide.
The -continue option resumes the training where it left off; that is, if it stopped at epoch 20, it will restart the training from that point. I don’t think you should use this option for your incremental training.
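For contrast, resuming an interrupted run with -continue would look roughly like this (placeholder file names; note that it points at the original data and restores the saved training state), which is not what you want when adding new data:

```
# Resume the original run where it stopped: same data file, and -continue
# restores the saved epoch/optimizer state (placeholder file names).
th train.lua -data data/old-train.t7 -save_model demo \
             -train_from demo_epoch20.t7 -continue
```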
You can find more information in these other forum posts: