Simple OpenNMT-py REST server

Hello @pltrdy,
While running the server I got the above error. Your help in solving the issue will be appreciated. Thanks.

Hello @park, while starting the server I get the following error, even though my available_models directory is in the path (ng@ng:~/OpenNMT-py/). Please help, thanks.
python3 server.py --ip 0.0.0.0 --port 5000 --url_root “./translator” --config “./available_models/conf.json”
Traceback (most recent call last):
  File "server.py", line 129, in <module>
    debug=args.debug)
  File "server.py", line 24, in start
    translation_server.start(config_file)
  File "/home/ng/OpenNMT-py/onmt/translate/translation_server.py", line 80, in start
    with open(self.config_file) as f:
FileNotFoundError: [Errno 2] No such file or directory: '“./available_models/conf.json”'
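
Note: the traceback shows that the quote characters themselves became part of the file name, which happens when curly quotes (“ ”) are typed or pasted into the shell instead of plain ones. With plain quotes, and assuming the same paths, the command would be:

python3 server.py --ip 0.0.0.0 --port 5000 --url_root "./translator" --config "./available_models/conf.json"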

I think I just figured out where the issue might be: I had not added my trained model. Working on that.

Okay! Nice work.

Hi Park,

I am getting the following error message while loading the tokenizer.

[kishor@nvidiagpu OpenNMT-py]$ python3 server.py --ip "0.0.0.0" --port 7785 --url_root "/translator" --config "./available_models/conf_pyonmttok.json"
Pre-loading model 1
[2019-06-25 16:11:18,950 INFO] Loading model 1
[2019-06-25 16:11:19,622 INFO] Loading tokenizer
Traceback (most recent call last):
  File "server.py", line 129, in <module>
    debug=args.debug)
  File "server.py", line 24, in start
    translation_server.start(config_file)
  File "/home/kishor/OpenNMT/OpenNMT-py/onmt/translate/translation_server.py", line 102, in start
    self.preload_model(opt, model_id=model_id, **kwargs)
  File "/home/kishor/OpenNMT/OpenNMT-py/onmt/translate/translation_server.py", line 140, in preload_model
    model = ServerModel(opt, model_id, **model_kwargs)
  File "/home/kishor/OpenNMT/OpenNMT-py/onmt/translate/translation_server.py", line 227, in __init__
    self.load()
  File "/home/kishor/OpenNMT/OpenNMT-py/onmt/translate/translation_server.py", line 308, in load
    import pyonmttok
ImportError: /usr/local/lib/python3.5/site-packages/pyonmttok.cpython-35m-x86_64-linux-gnu.so: undefined symbol: _PyThreadState_UncheckedGet
[kishor@nvidiagpu OpenNMT-py]$

Could you please assist me in resolving this issue?

Regards,
Kishor.

It seems pyonmttok is not installed correctly.
I suggest reinstalling the OpenNMT Tokenizer.
Alternatively, you could tokenize your data with SentencePiece; in my case I use a SentencePiece model.
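
For example, a forced reinstall of the wheel, assuming pip3 targets the same Python used to run the server:

pip3 install --upgrade --force-reinstall pyonmttok

An undefined-symbol error like this one often means the extension was built against a different Python version than the one importing it, so checking that pip3 and python3 point to the same installation is worth doing first.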

Okay, got it.
I will try these.

Hi @pltrdy, I’m trying to run the server with an Arabic<>English pair and am having some trouble with the tokenization. I have BPE codes, generated with learn_bpe.py and applied to my corpora with apply_bpe.py (all with default options). My conf.json, regarding tokenization, looks like this:

...
"tokenizer": {
    "type": "pyonmttok",
    "mode": "none",
    "params": {
        "joiner": "@@",
        "joiner_annotate": true,
        "bpe_model_path": "cee81450-e9af-0137-849b-107b44b00092/data.eng.bpe"
    }
},
...

Input:

Oh. Okay. Because I thought it was something different.
Greg barber.

pyonmttok output:

O@@ h@@ .@@ @@ Okay@@ .@@ @@ Bec@@ aus@@ e@@ @@ I@@ @@ though@@ t@@ @@ it@@ @@ was@@ @@ some@@ thing@@ @@ different.
G@@ reg@@ @@ bar@@ ber.

apply_bpe output:

Oh. Okay. Because I thought it was something different.
Gre@@ g bar@@ ber.

There is a difference between the tokenizations. Maybe I’m not configuring something right? What’s the difference between the joiner (pyonmttok) and the separator (apply_bpe)? What should I do to get the same tokenization as apply_bpe.py in the OpenNMT-py server? @park, any thoughts? I saw you were working with BPE yourself.
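
A minimal way to narrow this down outside the server is to call pyonmttok directly with the same settings as conf.json and compare its output with apply_bpe.py; the mode value here is the thing to experiment with (a sketch, with the BPE path as a placeholder):

import pyonmttok

# Mirror the "tokenizer" section of conf.json.
tokenizer = pyonmttok.Tokenizer(
    "none",                  # also try "conservative" or "aggressive"
    bpe_model_path="data.eng.bpe",
    joiner="@@",
    joiner_annotate=True,
)

tokens, _ = tokenizer.tokenize(
    "Oh. Okay. Because I thought it was something different.")
print(" ".join(tokens))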

Hi,
What is the recommended way to use the server?
Let’s say I have 10000 sentences to translate:

  1. Send all 10000 sentences together and let the server handle the batching
  2. Create batches of 32 (for example) and send them to the server

With the second option, it seems server-side batching is not necessary.
Thanks
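
For reference, both options go through the same endpoint; a minimal client sketch for option 1, assuming the server runs with url_root "/translator" on port 5000 and a model whose id in conf.json is 100:

import requests

url = "http://localhost:5000/translator/translate"
sentences = ["This is a test .", "Another sentence ."]

# Option 1: one request with all sentences; the server applies its
# own batching (the batch_size opt) internally.
payload = [{"src": s, "id": 100} for s in sentences]
print(requests.post(url, json=payload).json())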

Hi, thanks for sharing. I have read a lot of papers recently.
The order of the steps is not clear to me.
I have 2 *.txt files for 2 languages (parallel corpora).
At which point does SentencePiece come into the game? Must I train/prepare both txt files with SentencePiece, or first preprocess with OpenNMT-py and then train SentencePiece on the same source txt in a single file?

What is the correct procedure for training a model with the SentencePiece option in OpenNMT-py?

I found nothing about this.
Many thanks for any information.
lmsverige

The OpenNMT-py preprocess expects tokenized data. So SentencePiece should be applied before using OpenNMT-py.

Does that help?

You might want to have a look at this post for instance: Using Sentencepiece/Byte Pair Encoding on Model
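
As a rough sketch of that order of operations (file names and vocab size are placeholders): train SentencePiece on the raw text, apply it to both sides, and only then run the OpenNMT-py preprocessing on the tokenized files:

import sentencepiece as spm

# 1. Train a SentencePiece model on the raw (untokenized) source corpus.
spm.SentencePieceTrainer.train(
    input="corpus.src.txt", model_prefix="spm_src", vocab_size=32000)

# 2. Apply it to produce the tokenized file that OpenNMT-py expects;
#    the target side is handled the same way.
sp = spm.SentencePieceProcessor(model_file="spm_src.model")
with open("corpus.src.txt") as fin, open("corpus.src.sp.txt", "w") as fout:
    for line in fin:
        fout.write(" ".join(sp.encode(line.strip(), out_type=str)) + "\n")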

I have the same problem. My server works fine, but I’m working on Windows, which does not support pyonmttok, so I configured the JSON file with SentencePiece. The SentencePiece model is a pretrained Wikipedia tokenizer trained on 275 languages. Yet the predictions are extremely poor; not even one sentence translates correctly, although it works just fine from the notebook view.
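
For reference, a SentencePiece entry in the server's conf.json looks roughly like this (the model file name is a placeholder):

"tokenizer": {
    "type": "sentencepiece",
    "model": "wiki_275_languages.model"
}

One caveat: a tokenizer that differs from the one used on the training data will usually produce very poor translations even when the setup itself is correct, so the server's SentencePiece model generally has to match the training-time tokenization.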

Everything is fine with the given REST API. However, adding

"opt": {…, "replace_unk": true,
        "phrase_table": "a_phr_table", …}

does not give the desired result. The untranslated src token is just copied as-is, even when it is present in the table. This works fine with the normal command-line onmt_translate, without the REST API.
Any help in resolving this issue will be appreciated.

The server just wraps the same functions as the command-line tool, so this is not expected. Are all the other opts exactly the same in your two configurations?
If the problem persists, you might want to create an issue with additional details and, ideally, a way to reproduce it.

Thanks François for your quick response.
I figured out that it works only when I give the full path to phr_table.txt in the conf.json file, even though it is placed in the same available_models/ directory:

"opt": {
    …
    "replace_unk": true,
    "phrase_table": "/home/xyz/available_models/phr_table.txt",
    "verbose": true
}


Hi @park, @guillaumekln and all,
I trained a very small model by following the tutorial successfully. I tested the translation step using the 'onmt_translate' command and it works without problems, i.e. I can get predicted translations from this model.

Then I tried to test the deployment step. I ran the command below:
python3 server.py --ip 0.0.0.0 --port 5000 --url_root "/translator" --config "./available_models/conf.json"
It gets stuck at the "Loading data into the model" step. I waited for several hours and there was still no progress, although my model is very small. I tried on another server on my side and got the same issue.
My folder structure is as below:


The conf.json is simple:
{
    "models_root": "./available_models",
    "models": [
        {
            "id": 1000,
            "model": "model_step_3000.pt",
            "timeout": 600,
            "on_timeout": "to_cpu",
            "load": true,
            "opt": {
                "gpu": 0,
                "beam_size": 5
            }
        }
    ]
}
Any suggestions and clues would be much appreciated.
Regards,
Jim
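
One thing that may be worth ruling out when loading hangs with "gpu": 0 in the config is whether CUDA is actually usable on the machine; a quick generic check (not specific to OpenNMT-py):

import torch

# If this hangs or prints False, the problem is in the CUDA setup
# rather than in the server configuration.
print(torch.cuda.is_available())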