Multi-GPU PyTorch version?

pytorch

(Wabbit) #1

@srush do you have plans to contribute a multi-gpu version of OpenNMT on PyTorch? I believe it’s currently single GPU.

Also is this the right place to ask for this feature request or is it the PyTorch forum?


Does the PyTorch version support residual connections?
(srush) #2

Apparently PyTorch is developing framework-level support for multi-GPU training in the near term, so I believe we are waiting for that.


#3

Any updates on this?


(Guillaume Klein) #4

It is actually implemented. You can pass multiple GPU identifiers to the -gpus option. However, it has not been heavily tested for either correctness or efficiency.
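For reference, a multi-GPU launch might look like the following. Only the `-gpus` option is confirmed above; the script name and the other flags are assumptions about a typical OpenNMT-py training command:

```shell
# Hypothetical invocation: train on GPUs 0 and 1 (only -gpus is confirmed above)
python train.py -data demo -save_model demo-model -gpus 0 1
```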


#5

Thanks very much Guillaume. Actually the GitHub page says these are not implemented yet:
- word features
- multi-GPU
- residual connections

Is there any update on word features and residual connections? For my application, residual connections are more important, as I am experimenting with more than 4 encoder/decoder layers.


(Guillaume Klein) #6

@srush is going to add word features soon.

Residual connections may require more work, as we use the RNN units from PyTorch directly. What prevents you from using the Lua version?
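The difficulty hinted at here is that a fused multi-layer RNN module applies all its layers internally, so there is no place to insert the skip addition between layers; a residual stack instead applies one layer at a time and adds each layer's input back to its output. A minimal sketch of that idea, with toy stand-in layers rather than real RNN units:

```python
# Sketch of a residual stack: h_{i+1} = layer_i(h_i) + h_i.
# The toy "layers" below are plain functions on lists; in practice each
# would be a single-layer RNN applied separately so the addition can
# happen between layers (assumption, not the project's actual code).

def residual_stack(x, layers):
    """Apply each layer in turn, adding the layer's input to its output."""
    h = x
    for layer in layers:
        h = [a + b for a, b in zip(layer(h), h)]  # elementwise residual add
    return h

# Toy layers mapping a vector to a vector of the same size.
double = lambda v: [2 * a for a in v]
negate = lambda v: [-a for a in v]

print(residual_stack([1.0, 2.0], [double, negate]))  # → [0.0, 0.0]
```

With 4 or more encoder/decoder layers, as described above, these skip connections are what keeps gradients flowing through the depth of the stack.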