Domain Adaptation

How can I use OpenNMT v2 for domain adaptation? I could not find any documentation or tutorial about this.

Dear Ayush,

For fine-tuning, you do not run the first step of building the vocabulary; instead, you run the training directly from the last saved model, i.e. you keep using the same vocabulary files. For this, it is better to copy the training *.yml configuration file and change two things (a consolidated example follows the list):

  1. Training datasets:
data:
    corpus_1:
        path_src: continue.subword.en
        path_tgt: continue.subword.hi
    valid:
        path_src: continue-dev.subword.en
        path_tgt: continue-dev.subword.hi
  2. Where you save the new model; otherwise, the training will overwrite your original model:
save_model: model/model-continue.en-hi
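
Putting both changes together, here is a minimal sketch of the copied configuration file; the vocabulary paths here are illustrative and must stay exactly as in your original training config:

# Same vocabulary files as the original training (paths are illustrative)
src_vocab: data/vocab.src
tgt_vocab: data/vocab.tgt

# 1. New training and validation datasets
data:
    corpus_1:
        path_src: continue.subword.en
        path_tgt: continue.subword.hi
    valid:
        path_src: continue-dev.subword.en
        path_tgt: continue-dev.subword.hi

# 2. New save path so the original model is not overwritten
save_model: model/model-continue.en-hi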

If your fine-tuning data is very small, consider also changing the following arguments. This gives you quicker feedback on the improvement from fine-tuning, and saves each validated step as a checkpoint:

valid_steps: 1000  # original 10000
save_checkpoint_steps: 1000  # original 10000

Note that train_steps counts the total number of steps, including the original training, because train_from continues the step counter from the checkpoint. So you only need to raise it for fine-tuning (e.g. to 120000) if the original training already ran for all of its steps (e.g. 100000); continuing from step 100000 with train_steps of 120000 trains for 20000 more steps. If your original training stopped earlier with Early Stopping, there is no need to change this:

train_steps: 120000  # for example

If you want to use Early Stopping, define it as follows:

early_stopping: 4 # this stops the training if the validation score does not improve after 4 validations (e.g. after 4 * 10000 = 40000 steps)
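
Optionally, you can also pick which validation metric early stopping tracks via early_stopping_criteria. This is only a sketch; check onmt_train --help for the exact option names and values in your version:

early_stopping: 4
early_stopping_criteria: accuracy  # or ppl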

That is it for your configuration file.

The command line (which you run in the Terminal) for fine-tuning adds only one parameter, train_from:

onmt_train -config config-transformer-base-2GPU.yml -train_from model/model_step_n.pt

After train_from, you pass your best model; this is determined either by Early Stopping (it tells you after it stops) or by your own BLEU score on each saved model, as in the sketch below.
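
For instance, here is a minimal sketch of scoring each checkpoint, assuming a held-out test set, a SentencePiece model for de-subwording, and sacreBLEU installed; all file names are illustrative:

for ckpt in model/model-continue.en-hi_step_*.pt; do
    # Translate the subworded test source with this checkpoint
    onmt_translate -model "$ckpt" -src test.subword.en -output pred.subword.hi -gpu 0
    # Remove subwording before scoring (the SentencePiece model name is hypothetical)
    spm_decode --model=subword.en-hi.model < pred.subword.hi > pred.detok.hi
    # Compare against the detokenized reference
    echo "$ckpt"; sacrebleu test.detok.hi < pred.detok.hi
done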

For more information on Domain Adaptation techniques, check my article (link) and my AMTA 2020 presentation (link).

I hope this helps.

Kind regards,
Yasmin


Hi Yasmin,

Can early stopping still be used if model training is stopped by manually terminating the training? By this I mean: I am currently training a model configured to run for 200k steps. I am at 60k steps, and the accuracy and perplexity scores on the validation set appear to have peaked and are now in a downward trend (while accuracy and perplexity on the training set continue to rise). What I want to do is terminate the training and then use one of the prior model checkpoint files to begin domain tuning on a new dataset.

Any advice would be most helpful.

Kind regards,
Ken

Hi Ken!

Early Stopping has to be defined in the config file from the start. If you did not define it, simply press Ctrl+C, and the training will stop.

As you said, you can then try multiple checkpoints and pick the one with the best BLEU on the test set. Consider also trying checkpoint averaging, explained in other forum threads (a sketch follows).
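
For reference, a minimal averaging sketch, assuming the average_models.py script that ships with OpenNMT-py (its location and flags may differ between versions, and the checkpoint names here are illustrative):

python OpenNMT-py/onmt/bin/average_models.py \
    -models model/model_step_50000.pt model/model_step_60000.pt \
    -output model/model_avg.pt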

Kind regards,
Yasmin