OpenNMT Forum

Domain Adaptation

How can I use OpenNMT v2 for domain adaptation? I could not find any documentation or tutorial about it.

Dear Ayush,

For fine-tuning, you do not run the first step of building the vocabulary; instead, you run the training directly from the last saved model, i.e. you use the same vocabulary files. For this, it is better to copy the training *.yml configuration file and change two things:

  1. Training and validation datasets:

         data:
             corpus_1:
                 path_src: continue.subword.en
                 path_tgt: continue.subword.hi
             valid:
                 path_src: continue-dev.subword.en
                 path_tgt: continue-dev.subword.hi

  2. Where you save the new model; otherwise, it will overwrite your original model:

         save_model: model/model-continue.en-hi

If your fine-tuning data is very small, consider also changing the following arguments. This will give you quicker feedback on the improvement from fine-tuning, and save each validated step as a checkpoint:

valid_steps: 1000  # original 10000
save_checkpoint_steps: 1000  # original 10000

You only need to raise the number of training steps for fine-tuning (e.g. to 120000) if the original training already consumed all of its steps (e.g. 100000). If your original training stopped earlier with Early Stopping, there is no need to change this:

train_steps: 120000  # for example

If you want to use Early Stopping, define it as follows:

early_stopping: 4 # this stops the training if the validation score does not improve after 4 validations (e.g. after 4 * 10000 = 40000 steps)

That is it for your configuration file.
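Putting the changes together, a minimal fine-tuning configuration might look like the following sketch. The file names, vocabulary paths, and corpus name (`corpus_1`) are placeholders for illustration; keep your own values from the original configuration, especially the vocabulary files:

```yaml
# config-finetune.en-hi.yml -- sketch; reuse all other options from your original config.
# Keep the SAME vocabulary files as the original training (paths here are assumed):
src_vocab: vocab/source.vocab
tgt_vocab: vocab/target.vocab

# New in-domain training and validation data
data:
    corpus_1:
        path_src: continue.subword.en
        path_tgt: continue.subword.hi
    valid:
        path_src: continue-dev.subword.en
        path_tgt: continue-dev.subword.hi

# Save under a new name so the original model is not overwritten
save_model: model/model-continue.en-hi

# More frequent validation and checkpoints for small fine-tuning data
valid_steps: 1000
save_checkpoint_steps: 1000

# Raise only if the original training used up all its steps
train_steps: 120000

# Optional: stop after 4 validations without improvement
early_stopping: 4
```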

The command (which you run in the terminal) for fine-tuning adds only one parameter, train_from:

onmt_train -config config-transformer-base-2GPU.yml -train_from model/

After train_from, you add the path to your best model checkpoint; this is either determined by Early Stopping (it tells you which checkpoint when it stops) or by computing your own BLEU score on each saved model.
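If you go the BLEU route, one way to compare checkpoints is a small shell loop over the saved models. This is only a sketch under assumptions: the checkpoint naming pattern, dev file names, and the use of sacrebleu for scoring are illustrative, and in practice you would undo the subword segmentation (e.g. with your SentencePiece model) before scoring:

```shell
# Translate the dev set with every fine-tuned checkpoint and score it (sketch).
for ckpt in model/model-continue.en-hi_step_*.pt; do
    onmt_translate -model "$ckpt" -src continue-dev.subword.en -output pred.subword.hi -gpu 0
    # Desubword pred.subword.hi into pred.hi here before scoring.
    echo "$ckpt:"
    sacrebleu continue-dev.hi -i pred.hi -m bleu -b
done
```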

For more information on Domain Adaptation techniques, check my article (link) and my AMTA 2020 presentation (link).

I hope this helps.

Kind regards,