OpenNMT Forum

Issues running the OpenNMT-py REST server

Hi,

I downloaded server.py from the URL https://github.com/OpenNMT/Server/
I ran the following command:
python server.py --ip $IP --port $PORT --url_root $URL_ROOT --config $CONFIG

I get the following error message :
administrator@FC-PUN01-AICOE:~/OpenNMT/Server$ python3 server.py --ip "0.0.0.0" --port "7785" --url_root "/translator" --config "./available_models/conf.json"
usage: server.py [-h] [--nodebug NODEBUG] [--port PORT] [--nocache NOCACHE]
                 [--dir DIR]
server.py: error: unrecognized arguments: --ip 0.0.0.0 --url_root /translator --config ./available_models/conf.json

Could you please let me know the correct URL for downloading the server?

Please treat this as a priority, as I am trying to build a German-to-English translation server.

Thank You,
Kishor.

https://github.com/OpenNMT/Server/ is an old way of serving OpenNMT-lua models. It has nothing to do with OpenNMT-py and this tutorial.

Hi,

Please send me the URL of the OpenNMT-py server that can be used to serve OpenNMT-py models.
This would help me build a simple OpenNMT-py REST server locally.

Thank You,
Kishor.

I’m using this configuration:

{
    "models_root": "./available_models",
    "models": [
        {
            "id": 101,
            "name": "PT-EN (bidirectional encoder of Long Short-Term Memory)",
            "model": "BiLSTM2-650-600-Brasil_Celex.pt",
            "dynamic_dict": true,
            "timeout": 1000,
            "on_timeout": "to_cpu",
            "model_root": "/opt/models",
            "load": true,
            "opt": {
                "gpu": 0,
                "beam_size": 5,
                "max_length": 650,
                "batch_size": 32,
                "share_vocab": true,
                "replace_unk": true,
                "verbose": true
            },  
            "tokenizer": {
                "type": "pyonmttok",
                "mode": "aggressive",
                "params": {
                    "no_substitution": false,
                    "joiner_annotate": true,
                    "joiner_new": false,
                    "case_markup": true,
                    "preserve_placeholders": true,
                    "preserve_segmented_tokens": true,
                    "segment_case": true,
                    "segment_numbers": true,
                    "segment_alphabet_change": false
                }
            }
        }
    ]
}

The model name (name_model) was not being returned in the GET response to the web page.
Also, when sending more than one line, the result was losing the line feeds. The tokenization and the translation then took much longer than expected, because they were working on one big sentence.
I changed translation_server.py and it's working fine now.
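A hypothetical sketch of the kind of per-line handling described above (the `translate_line` callable is an assumption, standing in for the real model call): translating each line separately keeps the line feeds intact and avoids feeding the whole request body to the tokenizer as one long sentence.

```python
def translate_per_line(text, translate_line):
    """Translate `text` line by line, preserving empty lines and '\n'.

    `translate_line` is a placeholder for whatever actually translates
    one sentence (e.g. a call into the translation server's model).
    """
    out = []
    for line in text.split("\n"):
        # Skip the model call for blank lines, but keep them in the output
        # so the original line structure survives.
        out.append(translate_line(line) if line.strip() else line)
    return "\n".join(out)
```

For example, `translate_per_line("a\n\nb", str.upper)` returns `"A\n\nB"`, with the empty line preserved.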

If you are using the default values, you don't need to set those parameters.

KishorKP, your command can simply be: python3 server.py --port 7785

Okay…let me try that.

The tokenizer parameters must be the same ones used before the training process that produced the model.
Since I was previously using the Lua tokenizer, I created a tokenizer.py and a detokenizer.py that use the pyonmttok library with the same parameters.


Hi,

I did not create the models. These are pre-trained models downloaded from
http://opennmt.net/Models-py/.

So I am not sure what the tokenizer parameters should be.

Thank you

Hi,

I am able to send requests to the server, but I am receiving 404 error as shown below :
administrator@FC:~/OpenNMT/Server$ python3 server.py --nodebug NODEBUG --port "7785"
 * Serving Flask app "server" (lazy loading)
 * Environment: production
   WARNING: Do not use the development server in a production environment.
   Use a production WSGI server instead.
 * Debug mode: off
 * Running on http://0.0.0.0:7785/ (Press CTRL+C to quit)
127.0.0.1 - - [13/Jun/2019 12:47:31] "POST /translator/translate HTTP/1.1" 404 -
127.0.0.1 - - [13/Jun/2019 12:51:44] "POST /translator/translate HTTP/1.1" 404 -
127.0.0.1 - - [13/Jun/2019 12:52:53] "POST /translator/translate HTTP/1.1" 404 -
127.0.0.1 - - [13/Jun/2019 13:00:02] "POST /translator/translate HTTP/1.1" 404 -
127.0.0.1 - - [13/Jun/2019 13:09:59] "GET /translator/models HTTP/1.1" 404 -
127.0.0.1 - - [13/Jun/2019 13:13:26] "GET /translator/models HTTP/1.1" 404 -
127.0.0.1 - - [13/Jun/2019 13:14:19] "GET /translator/models HTTP/1.1" 404 -
127.0.0.1 - - [13/Jun/2019 13:20:49] "GET /translator/translate HTTP/1.1" 404 -
127.0.0.1 - - [13/Jun/2019 13:23:13] "POST /translator/translate HTTP/1.1" 404 -
127.0.0.1 - - [13/Jun/2019 13:24:13] "POST /translator/translate HTTP/1.1" 404 -

What may be the issue ?

One more doubt:

  1. Where should conf.json and the available_models directory (with the models listed in the JSON) be placed relative to server.py?

Please do respond.

Thank you,
Kishor.

In my case, server.py is placed at

/data/home/chanjun_park/OpenNMT-py

and available_models is placed at

/data/home/chanjun_park/OpenNMT-py/available_models

Try a command like:

python3 server.py --ip 0.0.0.0 --port 5000 --url_root "/translator" --config "./available_models/conf.json"

Hi,

I had cloned the wrong repository for OpenNMT, i.e. https://github.com/OpenNMT/Server/

Please let me know the correct repository to clone for OpenNMT-py.

Thank You,
Kishor

Hi,

Thank you very much for the server link.
At the link https://github.com/OpenNMT/OpenNMT-py/tree/master/onmt I do not find any option to download or clone.
How do I clone the onmt code and build a module from it to use in the server.py code?

Please assist me in resolving this issue.

Thank You,
Kishor.

Please git clone the OpenNMT-py (PyTorch) repository.
If you did already, please git pull.

Hi,

Thanks a ton for this info.

Regards,
Kishor.

Do the tests with curl recommended by pltrdy on the same computer where the service is running, to check whether it responds, for instance by listing the models. Maybe the port is blocked.
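As a sketch of such a local test (assuming the server was started with --url_root "/translator" on port 7785, and that model id 101 from the conf.json above is loaded):

```shell
# List the models the server has loaded:
curl -s http://localhost:7785/translator/models

# Translate one sentence with model id 101; the request body is a JSON
# list of {"src": ..., "id": ...} objects:
curl -s -X POST http://localhost:7785/translator/translate \
     -H "Content-Type: application/json" \
     -d '[{"src": "Hello world", "id": 101}]'
```

If these return 404, the --url_root used to start the server does not match the paths being requested.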

Hi,

I will check these things and proceed with tests using curl as recommended by pltrdy.

Thank You,
Kishor.

Hi,

While running the server, I am getting the following error message:

It says there is no GPU on the system. How do I go ahead with the GPU option disabled?
Please assist me here, as I need to bring this server up as soon as possible.

Thank You,
Kishor.

Just change the gpu option in "OpenNMT-py folder"/available_models/conf.json:

{
    "models_root": "./available_models",
    "models": [
        {
    ...
                "opt": {
                    "gpu": -1,
    ...
         }
    ]
}

Hi ,

Thanks a lot for all the support. The GPU error and the other errors are resolved now. Presently I am facing the following error message:

administrator@:~/OpenNMT/OpenNMT-py$ python3 server.py --ip "0.0.0.0" --port "7785" --url_root "/translator" --config "./available_models/conf.json"
Pre-loading model 1
[2019-06-18 11:38:25,863 INFO] Loading model 1
[2019-06-18 11:38:32,824 INFO] Loading tokenizer
Traceback (most recent call last):
  File "server.py", line 123, in <module>
    debug=args.debug)
  File "server.py", line 24, in start
    translation_server.start(config_file)
  File "/home/administrator/OpenNMT/OpenNMT-py/onmt/translate/translation_server.py", line 102, in start
    self.preload_model(opt, model_id=model_id, **kwargs)
  File "/home/administrator/OpenNMT/OpenNMT-py/onmt/translate/translation_server.py", line 140, in preload_model
    model = ServerModel(opt, model_id, **model_kwargs)
  File "/home/administrator/OpenNMT/OpenNMT-py/onmt/translate/translation_server.py", line 227, in __init__
    self.load()
  File "/home/administrator/OpenNMT/OpenNMT-py/onmt/translate/translation_server.py", line 320, in load
    **tokenizer_params)
TypeError: __init__(): incompatible constructor arguments. The following argument types are supported:
    1. pyonmttok.Tokenizer(mode: str, bpe_model_path: str='', bpe_vocab_path: str='', bpe_vocab_threshold: int=50, vocabulary_path: str='', vocabulary_threshold: int=0, sp_model_path: str='', sp_nbest_size: int=0, sp_alpha: float=0.1, joiner: str='■', joiner_annotate: bool=False, joiner_new: bool=False, spacer_annotate: bool=False, spacer_new: bool=False, case_feature: bool=False, case_markup: bool=False, no_substitution: bool=False, preserve_placeholders: bool=False, preserve_segmented_tokens: bool=False, segment_case: bool=False, segment_numbers: bool=False, segment_alphabet_change: bool=False, segment_alphabet: list=[])

Invoked with: 'conservative'; kwargs: no_substitution=False, joiner_annotate=True, joiner_new=False, case_markup=True, preserver_placeholders=True, preserver_segmented_tokens=True, segment_case=True, segment_numbers=True, segment_alphabet_change=False
administrator@:~/OpenNMT/OpenNMT-py$

Could you please assist me in resolving this issue ?

Thank you,
Kishor.

Try git pull and restart server.py.