Can I use multiple GPUs for a translation server? If yes, how do I do that?
If you use OpenNMT-py, please add the options below (see the example command after this list):

- `-gpuid`: deprecated, see `world_size` and `gpu_ranks`.
- `-gpu_ranks`: list of ranks of each process.
- `-world_size`: total number of distributed processes.
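A minimal sketch of a multi-GPU training command, assuming the standard OpenNMT-py `train.py` entry point; the data and model paths are placeholders:

```bash
# Hypothetical example: single-machine training on GPUs 0 and 1.
# -world_size is the total number of processes,
# -gpu_ranks lists the rank of each one.
python train.py -data data/demo -save_model demo-model \
    -world_size 2 -gpu_ranks 0 1
```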
If you use OpenNMT-tf, please add the `--num_gpus` option (the number of GPUs to use).
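A minimal sketch, assuming an OpenNMT-tf 1.x style `onmt-main` invocation; the model type and configuration file are placeholders:

```bash
# Hypothetical example: replicate the training across 2 GPUs.
onmt-main train_and_eval --model_type Transformer \
    --config config.yml --num_gpus 2
```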
Is it applicable for translation as well? I know that I can employ multiple GPUs while training, but what I am asking about here is translation only.
Multi-GPU can be applied to all tasks.
I tried adding it to conf.json for my translation server like this, but it accepted only one GPU argument, not an array.
Can you tell me what I am missing?
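The original snippet is not shown, but as a sketch, the OpenNMT-py server's conf.json typically sets the GPU per model inside `opt`, and the `gpu` field expects a single device id rather than a list (the model path and id here are placeholders):

```json
{
  "models_root": "./available_models",
  "models": [
    {
      "id": 100,
      "model": "model.pt",
      "load": true,
      "opt": {
        "gpu": 0,
        "beam_size": 5
      }
    }
  ]
}
```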
That’s not correct. Only training can make use of multiple GPUs.
For a translation server, the recommended approach is to start a server on each GPU and have an external system that does load balancing. This external system is not part of OpenNMT.
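As an illustration, a minimal sketch of that setup, assuming the OpenNMT-py REST server (`server.py`) and hypothetical per-GPU config files `conf_gpu0.json` / `conf_gpu1.json`; the load balancer in front (e.g. nginx or HAProxy) is a separate component:

```bash
# Hypothetical example: one server process per GPU, each on its own port.
# conf_gpu0.json sets "gpu": 0 and conf_gpu1.json sets "gpu": 1.
python server.py --config conf_gpu0.json --ip 0.0.0.0 --port 5000 &
python server.py --config conf_gpu1.json --ip 0.0.0.0 --port 5001 &
# An external load balancer then spreads translation requests
# across 127.0.0.1:5000 and 127.0.0.1:5001.
```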
Okay, thanks for your reply.