Using multi-cores on a GPU

pytorch

(Tony O'Dowd) #1

Guys,

I’m running OpenNMT on the AWS P2 machines. These have a single GPU (with four cores).

When training a model, only one of the cores is being used. Has anyone figured out how to use all four cores to speed up the training?

Tony


(Guillaume Klein) #2

Hello,

Do you mean 4 CPU cores? If the GPU is fully used (i.e. close to 100% utilization in nvidia-smi), there will be very little gains in using additional CPU cores.


(Tony O'Dowd) #3

That explains it then. Thanks for the update.


(Martin Wunderlich) #4

I am also trying to train on an EC2 instance (g2.2xlarge).
@guillaumekln: Could you clarify what you mean by nvidia-smi? Is there a special configuration required when using an EC2 instance with GPU support?
@tonnyod: How long did the standard training take for you on the P2 instance?

Thanks!

Cheers,

Martin


(Guillaume Klein) #5

nvidia-smi is a command that shows you the GPU utilization. Regarding OpenNMT, running on EC2 is no different than running on a local server.
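If you want to poll utilization programmatically rather than watching the nvidia-smi table, here is a minimal sketch that shells out to nvidia-smi with its CSV query options. The function name and the fallback behavior when nvidia-smi is not installed are my own choices, not anything OpenNMT provides:

```python
import shutil
import subprocess

def gpu_utilization():
    """Return a list of per-GPU utilization percentages,
    or None if nvidia-smi is not on the PATH.
    (Helper name and None-fallback are illustrative choices.)"""
    if shutil.which("nvidia-smi") is None:
        return None
    # --query-gpu/--format are standard nvidia-smi query options;
    # one line of output per GPU, e.g. "97"
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return [int(line) for line in out.splitlines() if line.strip()]

print(gpu_utilization())
```

If this reports a value close to 100 during training, the GPU is the bottleneck and extra CPU cores will not help, as noted above.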


(Martin Wunderlich) #6

Thank you for the clarification, Guillaume.