What CUDA version should I use when launching on the host using the Python API?

Hi! I’m running CTranslate2 on GPU using the Python API on the host (not in Docker). Right now I always install the same CUDA version that the latest CTranslate2 Docker image uses (11.2 at the moment). Is that the best practice, or is it better to install the latest CUDA 11.5? Thanks.

Hi,

The Python package should work with any CUDA 11.x version since the CUDA libraries are loaded dynamically.

However, the package is currently built with CUDA 11.2, so it’s good practice to install that version on the host.

Thank you!