Does OpenNMT-py support domain adaptation?

How do I train a DA model with OpenNMT-py?

Hi @PengboLiu !

As far as I know, OpenNMT-py supports DA in the same way as OpenNMT-lua does.

You should first train a translation model on a generic training data set and, afterwards, specialize this model on the in-domain data using the -train_from option.
I think it should be as straightforward as launching the following command:

python train.py -data data/indomain -save_model model-specialized -train_from generic_model

Remember that you must first preprocess the data for both trainings. Also, take into account that the DA model will have the same architecture as the model you use as a starting point: encoders/decoders with the same number of layers and hidden units, the same embeddings and, therefore, the same vocabularies.
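The shared-vocabulary requirement is worth checking explicitly before continuing training: if the in-domain preprocessing rebuilt the vocabulary instead of reusing the generic one, the token-to-index mapping shifts and the checkpoint's embeddings no longer line up. A minimal sketch of that check, with vocabularies modeled as plain token-to-index dicts (the helper name is hypothetical, not part of OpenNMT-py):

```python
# Sketch: check that in-domain preprocessing reuses the generic model's
# vocabulary, so -train_from can resume from the checkpoint cleanly.
# Vocabularies are plain token -> index dicts here for illustration.

def vocabs_compatible(generic_vocab, indomain_vocab):
    """True if both vocabularies map the same tokens to the same indices,
    which is required for the embedding matrices to stay aligned."""
    return generic_vocab == indomain_vocab

generic = {"<unk>": 0, "<s>": 1, "</s>": 2, "the": 3, "cat": 4}
reused = dict(generic)  # preprocessed reusing the generic vocab files
rebuilt = {"<unk>": 0, "<s>": 1, "</s>": 2, "dog": 3, "the": 4}  # fresh vocab

print(vocabs_compatible(generic, reused))   # safe to -train_from
print(vocabs_compatible(generic, rebuilt))  # embeddings would be misaligned
```

If the check fails, re-run preprocessing so that it reuses the generic vocabulary rather than building a new one.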

You can find more information about how to use OpenNMT-py in the OpenNMT-py documentation.

Good luck!

Sorry, still some questions.
In the preprocess step we get new vocabulary files. Should I delete them, or still use them?

In the preprocess step you should run something like:

python preprocess.py -train_src data/train_domain.src -train_tgt data/train_domain.tgt -valid_src data/valid_domain.src -valid_tgt data/valid_domain.tgt -save_data data/domain_using_generic_vocab -src_vocab data/generic_vocab.src.t7 -tgt_vocab data/generic_vocab.tgt.t7

This will generate a new file, data/domain_using_generic_vocab-train.t7 (or something similar), containing your in-domain data preprocessed with the vocabulary of your generic translation model, ready for training your adapted model.


Sorry, I only have a vocab file:
I don’t have generic_vocab.src.t7 and generic_vocab.tgt.t7.
What should I do?

Hi Patrick,

After a quick look at the code, it seems this functionality is not implemented in OpenNMT-py the way it was in OpenNMT-lua.

There is a pull request that gives a solution for your issue: