Inferencing with translate.py

Hi all, I am currently trying to run inference on a model that I trained with OpenNMT-py, using the big translate.py script. Does anyone know whether I can replace the command-line arguments with arguments passed manually from code? Any advice would be appreciated.
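
For context, right now I run something like this from the shell (the paths are placeholders):

```
python translate.py -model model_step_10000.pt -src src-test.txt -output pred.txt
```

and I would like to set `-model`, `-src`, and so on from inside my own Python code instead of the shell.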

Hello!

Have you tried CTranslate2?

All the best,
Yasmin

Hey

I have tried it, but my model was not trained as a Transformer, so I don't think I am able to use it? I may be wrong; can you advise?

CTranslate2 works for Transformer models only. Could you please elaborate on what you are trying to achieve?

Understood, since I did not use a Transformer model I am not able to use CTranslate2. What I am currently trying to achieve is to serve the model I trained behind FastAPI, so that I can request inferences from it.

What you’re looking for is probably the REST server, server.py, not translate.py.
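
The server is usually launched along these lines (port and paths are placeholders):

```
python server.py --ip "0.0.0.0" --port 5000 --url_root "/translator" --config "./available_models/conf.json"
```

If you would rather stay with FastAPI, another option is to reuse translate.py's own argument parser and build the translator once at startup. A rough sketch, assuming OpenNMT-py 2.x (the checkpoint path and request field are placeholders, and the exact translate() call may differ between versions):

```python
from fastapi import FastAPI
from pydantic import BaseModel
from onmt.bin.translate import _get_parser
from onmt.translate.translator import build_translator

# Build the translator once at startup, reusing translate.py's parser
# so every translate.py option can be set programmatically.
parser = _get_parser()
opt = parser.parse_args([
    "-model", "model_step_10000.pt",  # placeholder checkpoint
    "-src", "dummy.txt",              # required by the parser, unused here
])
translator = build_translator(opt, report_score=False)

app = FastAPI()

class TranslationRequest(BaseModel):
    text: str  # expected to be tokenized the same way as the training data

@app.post("/translate")
def serve(req: TranslationRequest):
    # translate() takes a list of source sentences and returns
    # (scores, predictions), one inner list of n-best hypotheses
    # per input sentence.
    _, predictions = translator.translate([req.text], batch_size=1)
    return {"translation": predictions[0][0]}
```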


Hey, I have tried this, but I ran into an error with the conf.json file. Are all of the keys in conf.json required, such as the tokenizer?
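
For reference, my conf.json currently looks roughly like this, modelled on the example shipped in available_models/ (the model path is a placeholder):

```json
{
    "models_root": "./available_models",
    "models": [
        {
            "id": 100,
            "model": "model_step_10000.pt",
            "timeout": 600,
            "on_timeout": "to_cpu",
            "load": true,
            "opt": {
                "gpu": -1,
                "beam_size": 5
            }
        }
    ]
}
```

I left the "tokenizer" block out because I tokenize the input myself, and I am not sure whether the server requires it.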