How to run dual source Transformer?

Hi @guillaumekln ,
I have been using the custom DualSourceTransformer from the examples in the repository, but after updating the config.yaml with the necessary additions and vocal files for multiple sources, I’m having issues running this, mainly because I’m getting an error when I run:

`CUDA_VISIBLE_DEVICES=0 onmt-main --model_type DualSourceTransformer --config config.yaml --auto_config train`

It fails because DualSourceTransformer is not a registered model. If I use `--model_type Transformer` instead, I get:
`ValueError: Missing field 'source_vocabulary' in the data configuration`

I wanted to understand how to invoke the onmt-main command in this situation: what should the `--model_type` be?
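For context, the error above suggests that a multi-source setup expects per-source vocabulary fields in the data configuration rather than a single `source_vocabulary`. A sketch of what that section might look like, assuming two sources (the exact field names, such as `source_1_vocabulary`, are an assumption here and should be checked against the documentation):

```yaml
data:
  # Assumed per-source vocabulary fields for a two-source model.
  source_1_vocabulary: data/src1-vocab.txt
  source_2_vocabulary: data/src2-vocab.txt
  target_vocabulary: data/tgt-vocab.txt
```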


See the documentation about “Custom models”, which describes how to run external model files:
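In short, a custom model that is not in the catalog is passed as a Python file with `--model` instead of `--model_type`. A minimal sketch, assuming the model definition lives at `config/models/dual_source_transformer.py` (the path is hypothetical, adjust it to your checkout):

```shell
# --model points at the Python file defining the custom model;
# --model_type only accepts models registered in the catalog.
CUDA_VISIBLE_DEVICES=0 onmt-main \
  --model config/models/dual_source_transformer.py \
  --config config.yaml --auto_config train
```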
