I’d like to know how to tell OpenNMT-py to use a specific GPU. I’ve done this:
set CUDA_VISIBLE_DEVICES=1, because I want to use the GPU with id 1.
Then I ran this command: python train.py -data sonar_data/sonar-prepro --train_from sonar-model_step_30000.pt -save_model sonar-model1 -save_checkpoint_steps 3000 -gpu_ranks 1
However, it always goes to GPU 0, which is already busy with another task, and leaves GPU 1 free.
The CUDA_VISIBLE_DEVICES variable controls which GPUs are visible to your process.
If you set CUDA_VISIBLE_DEVICES=1, your process will see only GPU 1, and inside the process that GPU is renumbered as device 0. If you want the process to be able to use other GPUs, you have to list them in that variable as well.
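The renumbering can be sketched in plain Python (a toy model of how the CUDA runtime interprets the variable, not OpenNMT-py code; the function name is made up):

```python
def visible_devices(cuda_visible_devices):
    """Map process-local device index -> physical GPU id, mimicking how
    the CUDA runtime interprets CUDA_VISIBLE_DEVICES (a sketch, not the driver)."""
    return {local: int(phys)
            for local, phys in enumerate(cuda_visible_devices.split(","))}

# With CUDA_VISIBLE_DEVICES=1 the process sees a single device, index 0,
# which is physical GPU 1 -- so "-gpu_ranks 1" points at a device that
# does not exist inside the process.
print(visible_devices("1"))        # {0: 1}
print(visible_devices("0,1,2,3"))  # {0: 0, 1: 1, 2: 2, 3: 3}
```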
For instance, if you have 4 GPUs, you can make all of them available with:
CUDA_VISIBLE_DEVICES=0,1,2,3
and then -gpu_ranks 2 will let the process use the GPU with id 2, while -gpu_ranks 3 will use the GPU with id 3 (the ranks are 0-indexed over the visible devices).
If you have 2 GPUs, you can set CUDA_VISIBLE_DEVICES=0,1 and then use -gpu_ranks 0 for the GPU with id 0, or -gpu_ranks 1 for the GPU with id 1.
If you only want to expose your free GPU (id 1 in your case), you can set CUDA_VISIBLE_DEVICES=1 and then use -gpu_ranks 0: the process sees a single device, and rank 0 selects the first (and only) visible GPU, which is the physical GPU 1.
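Putting the two knobs together (again a sketch; physical_gpu is a hypothetical helper, not part of OpenNMT-py):

```python
def physical_gpu(cuda_visible_devices, gpu_rank):
    """Return the physical GPU id that a 0-indexed -gpu_ranks value selects
    under a given CUDA_VISIBLE_DEVICES mask."""
    ids = [int(x) for x in cuda_visible_devices.split(",")]
    if not 0 <= gpu_rank < len(ids):
        raise ValueError(f"rank {gpu_rank} out of range for mask {cuda_visible_devices!r}")
    return ids[gpu_rank]

assert physical_gpu("0,1,2,3", 2) == 2  # four GPUs exposed, rank 2 -> id 2
assert physical_gpu("0,1", 1) == 1      # two GPUs exposed, rank 1 -> id 1
assert physical_gpu("1", 0) == 1        # only GPU 1 exposed, rank 0 -> id 1
```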
A piece of advice about GPU ids: the id used by CUDA is not always the same as the id reported by nvidia-smi (by default CUDA orders devices by compute capability; setting CUDA_DEVICE_ORDER=PCI_BUS_ID makes the ordering match nvidia-smi), but by playing with the values of CUDA_VISIBLE_DEVICES and -gpu_ranks you will certainly be able to select the GPU you want.
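For your original command, the fix would then look like this (assuming a Linux shell; on Windows cmd use set instead of export; -world_size 1 is the usual companion flag for single-GPU training in OpenNMT-py):

```shell
# Expose only physical GPU 1; inside the process it becomes device 0.
export CUDA_VISIBLE_DEVICES=1
# Address it as rank 0 of a world of size 1:
python train.py -data sonar_data/sonar-prepro --train_from sonar-model_step_30000.pt \
    -save_model sonar-model1 -save_checkpoint_steps 3000 -world_size 1 -gpu_ranks 0
```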