Translate with CTranslate2 using CUDA

Hi, I converted my model using python -m ctranslate2.bin.opennmt_tf_converter and now I am trying to translate my files with the CUDA option. Using the command:
echo "▁H ello ▁world !" | ./translate --model …/…/python/enes_ctranslate2/ --device cuda

I get the following error:
what(): unsupported device cuda
Aborted (core dumped)

CUDA is properly installed. Any ideas?

Hi,

Did you follow the instructions in the “Building” section? If yes, it only enables CPU execution as mentioned in the README:

Note: This minimal installation only enables CPU execution. For GPU support, see how the GPU Dockerfile is defined.

I recommend using the Docker images for GPU support: https://github.com/OpenNMT/CTranslate2#translating
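
For example, the GPU image can be pulled ahead of time (the tag below is the one used later in this thread; check Docker Hub for the current one):

docker pull opennmt/ctranslate2:latest-ubuntu18-gpu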


It seems that nvidia-docker is deprecated for Ubuntu 18.04. If I am not wrong, Docker includes CUDA support, so the syntax should be:

echo "▁H ello ▁world !" | sudo docker run -i --rm -v $PWD:/data opennmt/ctranslate2:latest-ubuntu18-gpu --model /data/enes_ctranslate2 --device cuda

But with this command I get:
CUDA initialization failure with error 35. Please check your CUDA installation: http://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html

My nvidia-smi is the following:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 430.50       Driver Version: 430.50       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 106...  Off  | 00000000:65:00.0  On |                  N/A |
|  0%   49C    P5    13W / 120W |    488MiB /  6075MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      1182      G   /usr/lib/xorg/Xorg                           218MiB |
+-----------------------------------------------------------------------------+

I think you still need to pass the --gpus option?

https://github.com/NVIDIA/nvidia-docker#usage
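
As a quick sanity check that Docker can reach the GPU at all, something along these lines (the CUDA base image tag is just an example) should print the nvidia-smi table from inside a container:

docker run --rm --gpus all nvidia/cuda:10.1-base nvidia-smi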

Ok, I just solved it! Thx

To sum up:

Follow this page http://collabnix.com/introducing-new-docker-cli-api-support-for-nvidia-gpus-under-docker-engine-19-03-0-beta-release/

If you did everything right, when running:
sudo docker run -it --rm --gpus all ubuntu nvidia-smi -L

a list of all available GPUs should appear.

Then run:
time echo "▁H ello ▁world !" | sudo docker run -i --gpus all --rm -v $PWD:/data opennmt/ctranslate2:latest-ubuntu18-gpu --model /data/enes_ctranslate2 --device cuda

regards!

@guillaumekln Does the Docker image for OpenNMT-tf V2 also include CUDA 10.0 (or 10.1)?

Yes, it includes CUDA 10.0.

That’s a blessing :slight_smile:

Inspecting the nmtwizard/opennmt-tf Docker image, I see the engine is "18.06.1-ce", so there is no need to install Docker 19.x?

There should be no need to reinstall anything.

On the reference you gave above it states: "Make sure you have installed the NVIDIA driver and Docker 19.03 for your Linux distribution. Note that you do not need to install the CUDA toolkit on the host, but the driver needs to be installed." Are you saying that is not accurate and we can stick with Docker 18.06.1-ce?

This is accurate if you are a new user following the latest instructions. Otherwise just use Docker as you did before.

This section mentions Docker versions older than 19.03 by the way: https://github.com/NVIDIA/nvidia-docker#upgrading-with-nvidia-docker2-deprecated

In fact the script:
CUDA_VISIBLE_DEVICES=0 nvidia-docker run -p 9099:9099 -v $PWD:/home/miguel nmtwizard/opennmt-tf:latest --model ned2eng_0104 --model_storage /home/miguel/tf_experiments/ned2eng_tf/serving serve --host 0.0.0.0 --port 9099
(now deprecated) actually still works with TensorFlow 2 (assuming that's in the above Docker image pulled a few days ago).
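
Once it is serving, I query it roughly like this (the endpoint and JSON fields are from memory, so treat them as an assumption rather than the exact nmt-wizard API):

curl -X POST http://localhost:9099/translate -d '{"src": [{"text": "Hello world"}]}'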

I’m not sure anymore what we are talking about. :slight_smile: The image nmtwizard/opennmt-tf has not been updated to use OpenNMT-tf 2.0 and TensorFlow 2.0 yet.

Sorry for that - I thought it had been :slight_smile:

How do you use the CUDA device without a Docker image? My machine is already an Ubuntu machine with CUDA installed and everything; I just feel it's completely unnecessary to use a Docker image.

Answered in another topic:
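
For reference, if CTranslate2 itself is installed with GPU support (i.e. built with CUDA enabled, as in the GPU Dockerfile), the Python API can run on the GPU directly. A minimal sketch, assuming the converted model directory from earlier in this thread:

import ctranslate2

# Load the converted model on the GPU and translate one tokenized sentence.
translator = ctranslate2.Translator("enes_ctranslate2/", device="cuda")
print(translator.translate_batch([["▁H", "ello", "▁world", "!"]]))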