Hi, I converted my model using python -m ctranslate2.bin.opennmt_tf_converter and now I am trying to translate my files with the CUDA option. Using the command:
echo "▁H ello ▁world !" | ./translate --model …/…/python/enes_ctranslate2/ --device cuda
I get the following error:
what(): unsupported device cuda
Aborted (core dumped)
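That "unsupported device cuda" abort usually means the translate binary was compiled without GPU support. A minimal rebuild sketch, assuming you built CTranslate2 from source and that CTranslate2/build is your build directory (the path is an assumption; adjust to your setup):

```shell
# Reconfigure the build with CUDA enabled; WITH_CUDA is the CTranslate2
# CMake option that compiles the GPU backend into the binaries.
cd CTranslate2/build    # assumed location of your build directory
cmake -DWITH_CUDA=ON ..
make -j"$(nproc)"
```

After rebuilding, ./translate --device cuda should no longer abort, provided the NVIDIA driver is installed on the host.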
If everything is set up correctly, running:
sudo docker run -it --rm --gpus all ubuntu nvidia-smi -L
should print a list of all available GPUs.
Then run:
time echo "▁H ello ▁world !" | sudo docker run -i --gpus all --rm -v $PWD:/data opennmt/ctranslate2:latest-ubuntu18-gpu --model /data/enes_ctranslate2 --device cuda
In the reference you gave above it states: "Make sure you have installed the NVIDIA driver and Docker 19.03 for your Linux distribution. Note that you do not need to install the CUDA toolkit on the host, but the driver needs to be installed." Are you saying that is not accurate and that we can stick with Docker 18.06.1-ce?
In fact the script:
CUDA_VISIBLE_DEVICES=0 nvidia-docker run -p 9099:9099 -v $PWD:/home/miguel nmtwizard/opennmt-tf:latest \
  --model ned2eng_0104 --model_storage /home/miguel/tf_experiments/ned2eng_tf/serving serve \
  --host 0.0.0.0 --port 9099
(now deprecated) actually still works with TensorFlow 2 (assuming that is what is in the Docker image above, which I pulled a few days ago).
How do you use the cuda device without the Docker image? My machine already runs Ubuntu with CUDA installed and everything, so it feels completely unnecessary to use a Docker image.