CTranslate2 issues in onmt_server

I’m using Docker 19.03.12 and ran the container with the --gpus all option.
I also built a translation model successfully on the GPU (my torch version is 1.6.0+cu101).

When I start onmt_server with a CTranslate2 model, I run into the following problem:

(traceback omitted …)
ValueError: unsupported device cuda

Then I tested PyTorch’s CUDA support in the python3 interpreter:

>>> import torch
>>> torch.cuda.current_device()
0
>>> torch.cuda.device(0)
<torch.cuda.device object at 0x7fc81c7caeb8>
>>> torch.cuda.device_count()
8
>>> torch.cuda.is_available()
True

What part should I check…?

You should use a CTranslate2 Docker image as your base image, for example opennmt/ctranslate2:1.13.2-ubuntu18-cuda10.1. See the other available versions on Docker Hub.

The ctranslate2 package installed in this image supports GPU. You can then install the other packages that you need.
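
For instance, a minimal Dockerfile sketch (the extra packages, paths, and the onmt_server invocation are placeholders for your own setup, and it assumes the base image ships pip):

FROM opennmt/ctranslate2:1.13.2-ubuntu18-cuda10.1

# The base image already contains a GPU-enabled ctranslate2 package;
# install the rest of your stack on top of it.
RUN pip install --no-cache-dir OpenNMT-py

# Copy your models and server configuration (paths are placeholders).
COPY available_models /opt/available_models

CMD ["onmt_server", "--config", "/opt/available_models/conf.json"]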

The thing is, how do you enable CUDA support without using the Docker image?

Recent versions of the package published to PyPI support GPU:

pip install --upgrade ctranslate2
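
To sanity-check the installed wheel, you can try loading a converted model directly on the GPU (a minimal sketch; the model directory is a placeholder for your own converted model):

import ctranslate2

# Loading on "cuda" raises "ValueError: unsupported device cuda"
# only when the installed ctranslate2 build lacks GPU support.
translator = ctranslate2.Translator("ende_ctranslate2/", device="cuda")

# translate_batch takes pre-tokenized source tokens.
print(translator.translate_batch([["▁Hello", "▁world", "!"]]))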

Then you should be able to set a GPU device in your server configuration just like an OpenNMT-py model.
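
For example, a hypothetical conf.json entry (IDs, paths, and options are placeholders; this assumes your OpenNMT-py version supports the ct2_model field):

{
    "models_root": "./available_models",
    "models": [
        {
            "id": 100,
            "model": "model.pt",
            "ct2_model": "ct2_model",
            "load": true,
            "opt": {
                "gpu": 0,
                "beam_size": 5
            }
        }
    ]
}

Here "gpu": 0 selects the first CUDA device, just as it would for a plain OpenNMT-py model.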

Thanks a lot!! I just reinstalled the package and now it can use CUDA!! Thank you!