Simple Web Interface

@ymoslem I was wondering if CTranslate2 currently supports LSTM models, and if not, how do you think I should go about integrating the LSTM weights into this Streamlit tool?

I was able to integrate the Transformer model into this Streamlit server by converting my Transformer “model.pt” file into “model.bin” with the CTranslate2 converter. Could you suggest how I could use the LSTM model to translate a single input line? I don’t want to write to a file every time and translate with onmt_translate; is there an alternative to this? @guillaumekln?
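For context, this is roughly how I translate a single line in memory with the converted Transformer model (a sketch; the model directory and tokenizer paths are placeholders, and I’m assuming CTranslate2 >= 2.0 with a SentencePiece tokenizer):

```python
def translate_line(line, translator, sp):
    """Translate one line fully in memory -- no temp files, no onmt_translate."""
    tokens = sp.encode(line, out_type=str)          # subword-tokenize the source
    results = translator.translate_batch([tokens])  # a batch of one sentence
    return sp.decode(results[0].hypotheses[0])      # detokenize the best hypothesis

# Usage (placeholder paths; requires ctranslate2 and sentencepiece):
#   import ctranslate2
#   import sentencepiece as spm
#   translator = ctranslate2.Translator("model_bin")
#   sp = spm.SentencePieceProcessor(model_file="source.model")
#   print(translate_line("Hello world!", translator, sp))
```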

Dear Ahan,

I do not think so. If you try to run the CTranslate2 converter on an LSTM model, you will get this error:

- Options --encoder_type and --decoder_type must be 'transformer'

I am not sure, but I think the OpenNMT-py REST server supports all model types. I used to use it with this web interface, or you can go even simpler with something like this example.
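For example, a single line can be sent to the REST server over HTTP, so there is no need to write files or call onmt_translate. A sketch, assuming the server’s default port and payload format (the exact response shape may vary by version):

```python
import json
from urllib import request as urlrequest

SERVER_URL = "http://localhost:5000/translator/translate"  # assumed default port

def build_payload(text, model_id=100):
    """The server expects a JSON list of {"src": ..., "id": ...} objects."""
    return [{"src": text, "id": model_id}]

def translate(text, model_id=100):
    """POST one line to the REST server and return the translation."""
    data = json.dumps(build_payload(text, model_id)).encode("utf-8")
    req = urlrequest.Request(SERVER_URL, data=data,
                             headers={"Content-Type": "application/json"})
    with urlrequest.urlopen(req) as resp:
        result = json.load(resp)
    return result[0][0]["tgt"]  # nested list in the response; may differ by version
```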

Kind regards,
Yasmin


Thank you this worked! Hosted a server for getting results for LSTM models and using CTranslate2 for Transformer model. This was very helpful!


Hi Yasmin, great work!


Is there a way to make the server hot reload when the config file is changed (a new model is added)?

Hi James!

What do you mean by “config” file? If you mean the Python file, Streamlit supports hot reloading when you change and save the Python file. More advanced questions about Streamlit are probably better directed to their forum.

Again, as I mentioned before, this tutorial is meant for building quick demos for research purposes. For production, a REST API (with Flask or FastAPI) is usually created, and model loading is handled (fully or partially) there.

Kind regards,
Yasmin

Hello James,

Which config file are you referring to?

Personally, I use Streamlit as the front end, and I have a Flask app in a Docker container with my models in a Docker volume.

My Python code relies on the folder and file naming conventions to determine which models are available.

example:

folder structure:

models/languageName/model.bin

Code:

  • loop over every folder in /models
  • if there is a model.bin file inside the folder, consider that language available

When I add a new model, I don’t need to rebuild anything. I just upload the model to its folder, and Streamlit has access to it right away.
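A minimal sketch of that scan (assuming the models/languageName/model.bin layout above):

```python
from pathlib import Path

def available_languages(models_root="models"):
    """A language is available iff models/<languageName>/model.bin exists."""
    root = Path(models_root)
    return sorted(d.name for d in root.iterdir()
                  if d.is_dir() and (d / "model.bin").is_file())
```

Since Streamlit re-runs the script on every interaction, the language list stays current without any rebuild.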

I’m referring to the JSON file that specifies the models and their settings, usually found in the available_models folder.
Currently, adding a new model requires me to edit the config file, kill the server, and restart it. I’m wondering if there’s a way to hot-load models when a new model is called, by automatically re-reading the newly edited conf file.
(I might be in the wrong thread since I’m looking at this from an API perspective.)

I’ll look into Streamlit (GitHub - ymoslem/CTranslate-NMT-Web-Interface: Machine Translation (MT) Web Interface for OpenNMT and FairSeq models using CTranslate and Streamlit)

Dear James,

I assume you are talking about the Simple OpenNMT-py REST server. This REST API uses Flask. In my experience, auto-reloading in Flask is not as straightforward as it is in FastAPI. Still, you can have a look at the answers in this discussion.
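One lightweight workaround, just a sketch and not something the REST server provides out of the box, is to check the config file’s modification time on each request and reload it when it changes:

```python
import json
import os

class ConfigWatcher:
    """Reload a JSON config (e.g. available_models/conf.json) when its mtime changes."""

    def __init__(self, path):
        self.path = path
        self._mtime = None
        self._config = None

    def get(self):
        mtime = os.path.getmtime(self.path)
        if mtime != self._mtime:  # first call, or the file changed on disk
            with open(self.path) as f:
                self._config = json.load(f)
            self._mtime = mtime
        return self._config
```

Loading the new models themselves would still have to be triggered from wherever the config is consumed.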

All the best,
Yasmin

Hello,

James seems to be doing almost exactly what I have already done.

I don’t need to reload my API when I upload new models.

Here is some information that could be helpful:

Best regards,
Samuel

Hi Samuel!

I assume you are using FastAPI, right? In FastAPI, one can just use the --reload flag.

Kind regards,
Yasmin

Hello Yasmin,

No, I made a pure Flask API in the end. I have a Flask API that serves my models, and another app with Streamlit that serves as the UI (user interface). The UI calls the translation API to get the translation, providing the source and target languages and the text to be translated. The translation API can also be called to list the supported language pairs.
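A minimal sketch of the translation API side (the endpoint names and the identity “model” are just for illustration; in the real setup, each language pair maps to a loaded model):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Placeholder registry: each language pair maps to a callable model.
MODELS = {("en", "fr"): lambda text: text}  # identity stub for illustration

@app.route("/languages")
def languages():
    """List the supported language pairs."""
    return jsonify(sorted(f"{src}-{tgt}" for src, tgt in MODELS))

@app.route("/translate", methods=["POST"])
def translate():
    """Translate `text` from `source` to `target`."""
    data = request.get_json()
    model = MODELS[(data["source"], data["target"])]
    return jsonify({"translation": model(data["text"])})
```

The Streamlit UI then just POSTs the source language, target language, and text to /translate.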
Best regards,
Samuel


Hi @ymoslem, thanks for this tutorial… it’s excellent. I do have one question… I was able to get the app working with my own trained model. Following the tutorial, I took the model .pt file and converted it to a CTranslate2 model using ct2-opennmt-py-converter, and it works fine.

My question… should one first run onmt_release_model on the .pt file before running ct2-opennmt-py-converter, to remove the training-only parameters, or does the CT2 converter do that already?

Even better, you can convert directly to the CT2 format with onmt_release_model (check the -format and -quantization args).

Thanks @francoishernandez, for the reply. So when I use the following command:

onmt_release_model --model ms_35.pt -o test.pt --quantization int8 --format pytorch

It works with no errors, but when I change the output format to ctranslate2, it generates an error. I am wondering if I need to compile OpenNMT-py with an option flag?

Traceback (most recent call last):
  File "/Users/cryptik/.virtualenvs/opennmt-pv1/bin/onmt_release_model", line 8, in <module>
    sys.exit(main())
  File "/Users/cryptik/.virtualenvs/opennmt-pv1/lib/python3.8/site-packages/onmt/bin/release_model.py", line 59, in main
    converter.convert(opt.output, model_spec, force=True,
  File "/Users/cryptik/.virtualenvs/opennmt-pv1/lib/python3.8/site-packages/ctranslate2/converters/converter.py", line 53, in convert
    model_spec.validate()
  File "/Users/cryptik/.virtualenvs/opennmt-pv1/lib/python3.8/site-packages/ctranslate2/specs/model_spec.py", line 265, in validate
    if self._vmap is not None and not os.path.exists(self._vmap):
  File "/usr/local/opt/python@3.8/bin/…/Frameworks/Python.framework/Versions/3.8/lib/python3.8/genericpath.py", line 19, in exists
    os.stat(path)
TypeError: stat: path should be string, bytes, os.PathLike or integer, not TransformerSpec

There may be a mismatch in your OpenNMT-py // CTranslate2 versions. Are you up to date on both?

Some significant changes were introduced here to allow CT2>=2.0.0 support.

I use this command to release the model in the CTranslate2 format. Sometimes I average the last, best models beforehand.

onmt_release_model --model model.pt --output un_fren --format ctranslate2 --quantization int8

I would just like to clarify that if you use this tutorial, please use the code here, as it integrates changes suggested by Guillaume, Samuel, and other colleagues:

I will edit the original tutorial as soon as possible.

All the best,
Yasmin


Hi @ymoslem, @francoishernandez… again, thanks for the help. Per your note @francoishernandez, I checked versions… I was running CTranslate2 v2.10.0 and OpenNMT-py 2.2.0. I upgraded CT2 to 2.10.1, and the above error went away.

@ymoslem, I was actually using a different implementation for my test translation web app. I needed it to run as part of a larger server-side application (rather than Streamlit), but I am using parts of your excellent Python code. I refactored what I had based on the link you provided. Thanks again!

One question relative to quantization in the onmt_release_model converter… I understand what the end result of quantization is, but does the accuracy of the model decrease when using, say, ‘int8’ vs ‘float16’?

@ymoslem you mentioned in your post that sometimes you average the last best models. What is the process to average a set of model.pt files?