Question about the environment for ct2 conversion

Can I ask whether the environment matters for ctranslate2 model conversion?

In the benchmark, we use the Intel MKL-compiled CTranslate2 to convert an OpenNMT-py model. However, the README shows a conversion example that installs CTranslate2 via pip. If the conversion environment doesn't have Intel MKL installed, will it affect the performance of the converted model?

This question came up from the following passage in the README:

> The core CTranslate2 implementation is framework agnostic. The framework specific logic is moved to a conversion step that serializes trained models into a simple binary format.

No, it does not affect model performance, because Intel MKL is not used during model conversion. Even if it were, note that MKL is bundled in the Python package installed via pip.
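For reference, a minimal conversion sketch using the pip-installed package (the checkpoint and output paths below are placeholders, not files from this thread):

```shell
# Install CTranslate2 from pip; the wheel already bundles its compute backend
pip install ctranslate2

# Convert an OpenNMT-py checkpoint to the CTranslate2 binary format.
# "model.pt" and "ende_ctranslate2/" are placeholder paths.
ct2-opennmt-py-converter --model_path model.pt --output_dir ende_ctranslate2
```

The converted model directory is self-contained, so it can then be loaded by any CTranslate2 build (MKL or not) regardless of which environment produced it.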
