Does the environment matter for CTranslate2 model conversion?
In the benchmark, an Intel MKL-compiled build of CTranslate2 is used to convert the OpenNMT-py model. The README, however, shows a conversion example that installs CTranslate2 via pip. If the conversion environment doesn't have Intel MKL installed, will that affect the performance of the converted model?
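For reference, the pip-based conversion step in question looks roughly like this (a sketch: `model.pt` and `ct2_model/` are placeholder paths, not from the benchmark):

```shell
# Install CTranslate2 from pip (a generic wheel, not a custom MKL build)
pip install ctranslate2

# Convert an OpenNMT-py checkpoint to the CTranslate2 binary format
ct2-opennmt-py-converter --model_path model.pt --output_dir ct2_model
```

The question is whether running this conversion in an environment without Intel MKL changes how the resulting model performs when it is later loaded by an MKL-compiled runtime.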
This question came from this comment in the README:
> The core CTranslate2 implementation is framework agnostic. The framework specific logic is moved to a conversion step that serializes trained models into a simple binary format.