Can I use PyOpenNMT to load/translate a model built with Torch OpenNMT?

I have a model that I’ve built using the Torch version of OpenNMT, but we’re trying to integrate it into a process written in Python. Is it possible to use the model that was built in Torch OpenNMT directly from PyOpenNMT? Or will I need to re-train the model using PyOpenNMT before I can translate from Python?

(I’m really just trying to use the OpenNMT model from Python. If I can’t do it directly, then perhaps it would be best to just call into Torch from Python? Would I need to use lutorpy for that?)

Hi Devin, no, you cannot load a model built with OpenNMT Lua in OpenNMT-py. We are thinking about writing some converters, but today the Lua version has more features and is a bit faster, so conversion would more likely go the other way around. Why do you need to stay in a Python process? If it is for production, maybe a wrapper around CTranslate would work for you. I would be surprised if you could use lutorpy directly as a bridge between the two, but we have never tried it, and we would be interested to hear more if you experiment.
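For reference, a wrapper like that can be a thin subprocess call from Python. This is only a sketch under assumptions: the binary path (`cli/translate`) and the flag names (`--model`, `--src`, `--output`) are guesses at the CTranslate command-line client, so check `translate --help` on your own build before using them.

```python
import subprocess

# Hypothetical path to the CTranslate client binary -- adjust for your build tree.
TRANSLATE_BIN = "cli/translate"

def build_cmd(model_path, src_path, out_path, extra_args=()):
    """Assemble the command line for one translation run.

    The flag names used here are assumptions about the CTranslate CLI;
    verify them against the client's --help output.
    """
    return [TRANSLATE_BIN,
            "--model", model_path,
            "--src", src_path,
            "--output", out_path,
            *extra_args]

def translate(model_path, src_path, out_path):
    """Run the external translator and raise if it exits with an error."""
    subprocess.run(build_cmd(model_path, src_path, out_path), check=True)
```

The Python side then just reads `out_path`, so the whole Torch/C++ runtime stays outside the Python interpreter.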


Hi Jean, thanks for the guidance. It looks like CTranslate might work. However, it says, "It only supports OpenNMT models released with the [release_model.lua](https://github.com/OpenNMT/OpenNMT/blob/master/tools/release_model.lua) script."
What does this script do?
Also, if I build and run it on a Windows machine, does it support GPU acceleration via cuBLAS?

What is the recommended practice for production use?
Is it to use CTranslate? Or something else?

See http://opennmt.net/OpenNMT/translation/inference/.

The code may require a few adjustments to build on Windows, but overall it should be pretty portable. And yes, it can use cuBLAS for matrix multiplications.
