Issues running the OpenNMT-py REST server


I downloaded the server code and ran the following command:
python --ip $IP --port $PORT --url_root $URL_ROOT --config $CONFIG

I get the following error message:
administrator@FC-PUN01-AICOE:~/OpenNMT/Server$ python3 --ip "" --port "7785" --url_root "/translator" --config "./available_models/conf.json"
usage: [-h] [--nodebug NODEBUG] [--port PORT] [--nocache NOCACHE]
       [-dir DIR]
error: unrecognized arguments: --ip --url_root /translator --config ./available_models/conf.json

Could you please let me know the URL for downloading the server?

Please treat this as a priority, as I am trying to build a German-to-English translation server.

Thank You,
Kishor.


That is an old way of serving OpenNMT-lua models. It has nothing to do with OpenNMT-py and this tutorial.


Please send me the URL of the OpenNMT-py server that can be used to serve OpenNMT-py models.
This would help me build a simple OpenNMT-py REST server locally.

Please send me the URL.

Thank You,

I’m using this configuration:

    {
        "models_root": "./available_models",
        "models": [
            {
                "id": 101,
                "name": "PT-EN (bidirectional encoder of Long Short-Term Memory)",
                "model": "",
                "dynamic_dict": true,
                "timeout": 1000,
                "on_timeout": "to_cpu",
                "model_root": "/opt/models",
                "load": true,
                "opt": {
                    "gpu": 0,
                    "beam_size": 5,
                    "max_length": 650,
                    "batch_size": 32,
                    "share_vocab": true,
                    "replace_unk": true,
                    "verbose": true
                },
                "tokenizer": {
                    "type": "pyonmttok",
                    "mode": "aggressive",
                    "params": {
                        "no_substitution": false,
                        "joiner_annotate": true,
                        "joiner_new": false,
                        "case_markup": true,
                        "preserve_placeholders": true,
                        "preserve_segmented_tokens": true,
                        "segment_case": true,
                        "segment_numbers": true,
                        "segment_alphabet_change": false
                    }
                }
            }
        ]
    }

The model name (name_model) was not being returned in the GET response to the web page.
When sending more than one line, the result was losing the line feeds. Tokenization and translation were then taking more time than expected, because they were working on one big sentence.
I changed it and it's working fine.

If you use the default values, you don't need to set those parameters.
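For reference, a minimal conf.json that relies on the server defaults for everything else might look like the sketch below. This is an assumption-based example, not a file from this thread: "model.pt" is a placeholder filename, and the id is arbitrary.

```python
import json

# Minimal config sketch: one model, CPU decoding, defaults elsewhere.
# "model.pt" is a placeholder; replace it with your own checkpoint name.
minimal_conf = """
{
    "models_root": "./available_models",
    "models": [
        {
            "id": 100,
            "model": "model.pt",
            "load": true,
            "opt": {"gpu": -1, "beam_size": 5}
        }
    ]
}
"""

conf = json.loads(minimal_conf)  # check that the snippet parses as valid JSON
print(conf["models"][0]["model"])
```

Running the string through `json.loads` is a quick way to catch missing braces or commas before pointing the server at the file.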

KishorKP, your command can be just: python3 --port 7785

Okay…let me try that.

The tokenizer parameters must be the same ones you used to preprocess the data before the training process that yielded the model.
Since I was previously using the Lua tokenizer, I created a configuration that uses the pyonmttok library with the same parameters.



I did not create the models. These are pre-trained models downloaded from

So I am not sure what should be the tokenizer parameters.

Thank you


I am able to send requests to the server, but I am receiving 404 errors, as shown below:
administrator@FC:~/OpenNMT/Server$ python3 --nodebug NODEBUG --port "7785"
 * Serving Flask app "server" (lazy loading)
 * Environment: production
   WARNING: Do not use the development server in a production environment.
   Use a production WSGI server instead.
 * Debug mode: off
 * Running on (Press CTRL+C to quit)
- - [13/Jun/2019 12:47:31] "POST /translator/translate HTTP/1.1" 404 -
- - [13/Jun/2019 12:51:44] "POST /translator/translate HTTP/1.1" 404 -
- - [13/Jun/2019 12:52:53] "POST /translator/translate HTTP/1.1" 404 -
- - [13/Jun/2019 13:00:02] "POST /translator/translate HTTP/1.1" 404 -
- - [13/Jun/2019 13:09:59] "GET /translator/models HTTP/1.1" 404 -
- - [13/Jun/2019 13:13:26] "GET /translator/models HTTP/1.1" 404 -
- - [13/Jun/2019 13:14:19] "GET /translator/models HTTP/1.1" 404 -
- - [13/Jun/2019 13:20:49] "GET /translator/translate HTTP/1.1" 404 -
- - [13/Jun/2019 13:23:13] "POST /translator/translate HTTP/1.1" 404 -
- - [13/Jun/2019 13:24:13] "POST /translator/translate HTTP/1.1" 404 -

What may be the issue?
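One thing worth checking: the command in the log above starts the server without --url_root, and in that case the routes are registered at the root (/translate, /models), not under /translator, which would explain 404s on /translator/translate. A small sketch of how the effective path depends on url_root (this is an illustration, not the server's actual code):

```python
# Illustrative only: shows how an app's registered path changes when a
# url_root prefix is (or is not) supplied at startup.
def endpoint(url_root: str, route: str) -> str:
    """Compose the path a route would be registered under."""
    return url_root.rstrip("/") + route if url_root else route

print(endpoint("/translator", "/translate"))  # → /translator/translate
print(endpoint("", "/translate"))             # → /translate
```

So a client posting to /translator/translate only matches if the server was started with --url_root "/translator".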

One more doubt:

  1. Where should conf.json and the available_models directory (with the models listed in the JSON) be placed, relative to the server script?

Please do respond.

Thank you,

In my case, conf.json is placed in:


available_models is placed in:


Try a command like the one below:

python3 --ip --port 5000 --url_root "/translator" --config "./available_models/conf.json"


I have cloned the wrong repository for OpenNMT, i.e. ( )

Please let me know the correct clone URL for OpenNMT-py.

Thank You,


Thank you very much for the server link.
In the link: https://

I do not find any option to download or clone at the above URL.
How do I clone the onmt code and build a module from it to use in my code?

Please assist me in resolving this issue.

Thank You,

Please git clone the OpenNMT-py (PyTorch) repository.
If you did already, please git pull.


Thanks a ton for this info.


Run the curl tests recommended by pltrdy on the same computer where the service is running, to check whether it is responding (with the list of models, for instance). Maybe the port is blocked.
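The same check can be sketched with the Python standard library instead of curl. The host, port, url_root, and model id below are the ones used earlier in this thread, and the [{"src": ..., "id": ...}] body shape follows the REST server tutorial's /translate payload; adjust them to your setup.

```python
import json

def build_translate_request(host, port, url_root, text, model_id):
    """Return the URL and JSON body for a POST to <url_root>/translate."""
    url = "http://{}:{}{}/translate".format(host, port, url_root)
    body = json.dumps([{"src": text, "id": model_id}]).encode("utf-8")
    return url, body

url, body = build_translate_request("127.0.0.1", 7785, "/translator",
                                    "Hallo Welt", 101)
print(url)  # → http://127.0.0.1:7785/translator/translate

# To actually send it (requires the server to be up on that port):
# import urllib.request
# req = urllib.request.Request(url, data=body,
#                              headers={"Content-Type": "application/json"})
# print(urllib.request.urlopen(req, timeout=5).read())
```

If this request also 404s from the server's own machine, the problem is the route (e.g. a missing --url_root), not a blocked port.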


I will check these things and proceed with tests using curl as recommended by pltrdy.

Thank You,


While running the server, I am getting the following error message:

It says there is no GPU on the system. How do I go ahead with the GPU option disabled?
Please assist me here, as I need to bring this server up as soon as possible.

Thank You,

Just change the gpu option in "OpenNMT-py folder"/available_models/conf.json:

    "models_root": "./available_models",
    "models": [
        {
            "opt": {
                "gpu": -1,

Hi,

Thanks a lot for all the support. The GPU error and other errors are resolved now. Presently I am facing the following error message:

administrator@:~/OpenNMT/OpenNMT-py$ python3 --ip "" --port "7785" --url_root "/translator" --config "./available_models/conf.json"
Pre-loading model 1
[2019-06-18 11:38:25,863 INFO] Loading model 1
[2019-06-18 11:38:32,824 INFO] Loading tokenizer
Traceback (most recent call last):
  File "", line 123, in <module>
    debug=args.debug)
  File "", line 24, in start
    translation_server.start(config_file)
  File "/home/administrator/OpenNMT/OpenNMT-py/onmt/translate/", line 102, in start
    self.preload_model(opt, model_id=model_id, **kwargs)
  File "/home/administrator/OpenNMT/OpenNMT-py/onmt/translate/", line 140, in preload_model
    model = ServerModel(opt, model_id, **model_kwargs)
  File "/home/administrator/OpenNMT/OpenNMT-py/onmt/translate/", line 227, in __init__
    self.load()
  File "/home/administrator/OpenNMT/OpenNMT-py/onmt/translate/", line 320, in load
    **tokenizer_params)
TypeError: __init__(): incompatible constructor arguments. The following argument types are supported:
    1. pyonmttok.Tokenizer(mode: str, bpe_model_path: str='', bpe_vocab_path: str='', bpe_vocab_threshold: int=50, vocabulary_path: str='', vocabulary_threshold: int=0, sp_model_path: str='', sp_nbest_size: int=0, sp_alpha: float=0.1, joiner: str='■', joiner_annotate: bool=False, joiner_new: bool=False, spacer_annotate: bool=False, spacer_new: bool=False, case_feature: bool=False, case_markup: bool=False, no_substitution: bool=False, preserve_placeholders: bool=False, preserve_segmented_tokens: bool=False, segment_case: bool=False, segment_numbers: bool=False, segment_alphabet_change: bool=False, segment_alphabet: list=[])

Invoked with: 'conservative'; kwargs: no_substitution=False, joiner_annotate=True, joiner_new=False, case_markup=True, preserver_placeholders=True, preserver_segmented_tokens=True, segment_case=True, segment_numbers=True, segment_alphabet_change=False

Could you please assist me in resolving this issue?

Thank you,

Try a git pull and restart the server.
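For context: in the traceback above, the kwargs passed to the tokenizer include preserver_placeholders and preserver_segmented_tokens (note the stray "r"), while the supported signature lists preserve_placeholders and preserve_segmented_tokens. A misspelled keyword like that is exactly the kind of mismatch that raises this TypeError, so it is worth checking the params spelling in conf.json as well; a git pull picks up any upstream fix. A toy illustration of the failure mode (a stand-in function, not the pyonmttok API itself):

```python
def make_tokenizer(mode, preserve_placeholders=False):
    """Toy stand-in for a constructor with strictly named parameters."""
    return {"mode": mode, "preserve_placeholders": preserve_placeholders}

try:
    # Misspelled keyword, as in the traceback: "preserver_" vs "preserve_".
    make_tokenizer("conservative", preserver_placeholders=True)
except TypeError as err:
    print("rejected:", err)
```

Python rejects the call before the function body runs, which is why the server dies while pre-loading the model rather than at translation time.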