Is there a way to use the tokenizer/detokenizer in the tools directory during translation?
How would that be configured in conf.json? Here are the default settings:
"tokenizer": {
"type": "sentencepiece",
"model": "wmtenfr.model"
}
Which tokenizer are you referring to? The server supports the types sentencepiece and pyonmttok. The latter refers to the OpenNMT tokenizer:
OK, so pyonmttok is the OpenNMT out-of-the-box tokenizer.
What’s the difference between sentencepiece and pyonmttok? Can you point me to some documentation?
It can apply SentencePiece and more. See some documentation here:
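For example, a minimal sketch of using pyonmttok directly (assuming the same wmtenfr.model SentencePiece file from above and that pyonmttok is installed):

import pyonmttok

# "none" mode disables pyonmttok's own rule-based segmentation,
# so the SentencePiece model does all the work.
tokenizer = pyonmttok.Tokenizer("none", sp_model_path="wmtenfr.model")

tokens, _ = tokenizer.tokenize("Hello world!")
print(tokens)
print(tokenizer.detokenize(tokens))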
I have a question.
I used only the Tokenizer and BPE provided by the OpenNMT tools as preprocessing.
How can I apply the tools Tokenizer and BPE in server.py?
How do I change the example below?
"tokenizer": {
    "type": "sentencepiece",
    "model": "wmtenfr.model"
}
When I use the command below:
python3 server.py --ip 0.0.0.0 --port 5000 --url_root "/translator" --config "./available_models/conf.json"
this error occurs:
TypeError: unorderable types: list() < int()
What should I do?
<conf.json>
{
    "models_root": "./available_models",
    "models": [
        {
            "id": 1000,
            "model": "model_step_30000.pt",
            "timeout": 600,
            "on_timeout": "to_cpu",
            "load": true,
            "opt": {
                "gpu": 0,
                "beam_size": 5
            },
            "tokenizer": {
                "type": "pyonmttok",
                "model": "src.code"
            }
        }
    ]
}
Should be something like:
"tokenizer": {
"type": "pyonmttok",
"mode": "conservative",
"params": {
...
}
}
where params can be any of the arguments from here:
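For instance, to apply the tools Tokenizer together with your BPE codes (reusing the src.code file from the config above; bpe_model_path and joiner_annotate are pyonmttok arguments, so double-check the names against the linked docs), the entry might look like:

"tokenizer": {
    "type": "pyonmttok",
    "mode": "conservative",
    "params": {
        "bpe_model_path": "src.code",
        "joiner_annotate": true
    }
}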
Hello.
I have a question about server.py.
Command:
python3 server.py --ip 0.0.0.0 --port 5000 --url_root "./translator" --config "./available_models/conf.json"
Error:
Pre-loading model 100
[2019-06-03 11:17:06,458 INFO] Loading model 100
Traceback (most recent call last):
  File "server.py", line 123, in <module>
    debug=args.debug)
  File "server.py", line 24, in start
    translation_server.start(config_file)
  File "/data/home/chanjun_park/work/OpenNMT-py/onmt/translate/translation_server.py", line 102, in start
    self.preload_model(opt, model_id=model_id, **kwargs)
  File "/data/home/chanjun_park/work/OpenNMT-py/onmt/translate/translation_server.py", line 140, in preload_model
    model = ServerModel(opt, model_id, **model_kwargs)
  File "/data/home/chanjun_park/work/OpenNMT-py/onmt/translate/translation_server.py", line 227, in __init__
    self.load()
  File "/data/home/chanjun_park/work/OpenNMT-py/onmt/translate/translation_server.py", line 282, in load
    os.devnull, "w", "utf-8"))
  File "/data/home/chanjun_park/work/OpenNMT-py/onmt/translate/translator.py", line 28, in build_translator
    fields, model, model_opt = load_test_model(opt)
  File "/data/home/chanjun_park/work/OpenNMT-py/onmt/model_builder.py", line 99, in load_test_model
    opt.gpu)
  File "/data/home/chanjun_park/work/OpenNMT-py/onmt/model_builder.py", line 128, in build_base_model
    src_emb = build_embeddings(model_opt, src_field)
  File "/data/home/chanjun_park/work/OpenNMT-py/onmt/model_builder.py", line 53, in build_embeddings
    fix_word_vecs=fix_word_vecs
  File "/data/home/chanjun_park/work/OpenNMT-py/onmt/modules/embeddings.py", line 167, in __init__
    pe = PositionalEncoding(dropout, self.embedding_size)
  File "/data/home/chanjun_park/work/OpenNMT-py/onmt/modules/embeddings.py", line 35, in __init__
    self.dropout = nn.Dropout(p=dropout)
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/torch/nn/modules/dropout.py", line 11, in __init__
    if p < 0 or p > 1:
TypeError: unorderable types: list() < int()
I don't know why this is happening.
Is there any solution?
Hello.
The OpenNMT-py server TPS is low. What can I do?
I hope someone can help me. Thanks.
We changed the dropout option from an int to a list.
I think your model was trained with the new list format, but you must be serving it with a server.py that has not been updated to master.
Run git pull on your server and let me know.
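For reference, a minimal sketch of what goes wrong in the old code path (the list value is an assumed example):

import torch.nn as nn

dropout = [0.3]  # newer checkpoints store dropout as a list

try:
    nn.Dropout(p=dropout)  # old code passes the list straight to nn.Dropout
except TypeError as err:
    print(err)  # unorderable types: list() < int() (message varies by Python version)

nn.Dropout(p=dropout[0])  # updated code unwraps the list first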
Thank you so much.
Problem solved.
Hello @park, while starting the server I get the following error, even though my available_models directory is in the path ng@ng:~/OpenNMT-py/. Please help, thanks.
python3 server.py --ip 0.0.0.0 --port 5000 --url_root “./translator” --config “./available_models/conf.json”
Traceback (most recent call last):
  File "server.py", line 129, in <module>
    debug=args.debug)
  File "server.py", line 24, in start
    translation_server.start(config_file)
  File "/home/ng/OpenNMT-py/onmt/translate/translation_server.py", line 80, in start
    with open(self.config_file) as f:
FileNotFoundError: [Errno 2] No such file or directory: '“./available_models/conf.json”'
I think I just figured out where the issue might be; I had not added my trained model. Working on that.