Hello,
If I train a model on multiple GPUs, can I use it for translation on only 1 GPU?
Any issues with that?
Thanks!
Hi @szhitansky!
As far as I know, at least for OpenNMT-lua, the training process is set up to run on several GPUs in parallel: if I am not mistaken, it roughly consists of creating a clone of the model on each GPU and averaging them after each batch iteration. But at the end of the training you obtain a single instance of the translation model (in fact, you obtain a single model after each epoch).
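For reference, a minimal sketch of what the training command might look like, assuming the standard `th train.lua` entry point; the data and model names here are hypothetical placeholders:

```bash
# Data-parallel training on GPUs 1 and 2; passing several ids to
# -gpuid is what enables the parallel replicas described above.
# "demo-train.t7" and "demo-model" are placeholder names.
th train.lua -data demo-train.t7 -save_model demo-model -gpuid 1 2
```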
So, you should be able to use the model for translation with 1 GPU only.
In fact, if I am not mistaken, the translation process only uses 1 GPU; it is not prepared to handle translation on several GPUs in parallel, at least in the Lua implementation.
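So loading one of the saved checkpoints on a single GPU should just work; a sketch, again with hypothetical file names (the actual checkpoint name depends on the epoch and perplexity suffix OpenNMT-lua appends when saving):

```bash
# Translate on a single GPU, using one epoch checkpoint produced
# by the multi-GPU training run; all file names are placeholders.
th translate.lua -model demo-model_epoch13_PPL.t7 \
    -src src-test.txt -output pred.txt -gpuid 1
```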