Guys,
I’m running OpenNMT on the AWS P2 machines. These have a single GPU (with four cores).
When training a model, only one of the cores is being used. Has anyone figured out how to use all four cores to speed up the training?
Tony
Hello,
Do you mean 4 CPU cores? If the GPU is fully used (i.e. close to 100% utilization in nvidia-smi), there will be very little gain in using additional CPU cores.
That explains it then. Thanks for the update.
I am also trying to train on an EC2 instance (g2.2xlarge).
@guillaumekln: Could you clarify what you mean by nvidia-smi? Is there a special configuration required when using an EC2 instance with GPU support?
@tonnyod: How long did the standard training take for you on the P2 instance?
Thanks!
Cheers,
Martin
nvidia-smi is a command that shows you the GPU utilization. Regarding OpenNMT, running on EC2 is no different than running on a local server.
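For reference, a minimal sketch of how you might check utilization while a training run is in progress (this assumes the NVIDIA driver is already installed, as it is on the AWS GPU instances; the exact columns shown can vary by driver version):

```shell
# One-off snapshot: look at the "GPU-Util" column in the output.
nvidia-smi

# Refresh every second while training runs in another terminal.
watch -n 1 nvidia-smi

# Machine-readable utilization and memory use, e.g. for logging.
nvidia-smi --query-gpu=utilization.gpu,memory.used --format=csv
```

If GPU-Util stays near 100% during training, the GPU is the bottleneck and extra CPU cores will not help much.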
Thank you for the clarification, Guillaume.