OpenNMT Forum

Error when running server.py

server.py: error: argument --ip: expected one argument
(base) mudaki@Mudaki:~/OpenNMT-py$ export IP="0.0.0.0"
(base) mudaki@Mudaki:~/OpenNMT-py$ export PORT=5000
(base) mudaki@Mudaki:~/OpenNMT-py$ export URL_ROOT="/translator"
(base) mudaki@Mudaki:~/OpenNMT-py$ export CONFIG="./available_models/example.conf.json"
(base) mudaki@Mudaki:~/OpenNMT-py$ python server.py --ip $IP --port $PORT --url_root $URL_ROOT --config $CONFIG
Pre-loading model 100
[2019-09-12 10:10:09,588 INFO] Loading model 100
THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1556653000816/work/aten/src/THC/THCGeneral.cpp line=51 error=30 : unknown error
Traceback (most recent call last):
  File "server.py", line 129, in <module>
    debug=args.debug)
  File "server.py", line 24, in start
    translation_server.start(config_file)
  File "/home/mudaki/OpenNMT-py/onmt/translate/translation_server.py", line 103, in start
    self.preload_model(opt, model_id=model_id, **kwargs)
  File "/home/mudaki/OpenNMT-py/onmt/translate/translation_server.py", line 141, in preload_model
    model = ServerModel(opt, model_id, **model_kwargs)
  File "/home/mudaki/OpenNMT-py/onmt/translate/translation_server.py", line 231, in __init__
    self.load()
  File "/home/mudaki/OpenNMT-py/onmt/translate/translation_server.py", line 288, in load
    raise ServerModelError("Runtime Error: %s" % str(e))
onmt.translate.translation_server.ServerModelError: Runtime Error: cuda runtime error (30) : unknown error at /opt/conda/conda-bld/pytorch_1556653000816/work/aten/src/THC/THCGeneral.cpp:51

My configuration file is:

{
  "models_root": "./available_models",
  "models": [
    {
      "id": 100,
      "model": "demo-model_step_5000.pt",
      "timeout": 600,
      "on_timeout": "to_cpu",
      "load": true,
      "opt": {
        "gpu": 0,
        "beam_size": 5
      },
      "tokenizer": {
        "type": "sentencepiece",
        "model": "wmtenfr.model"
      }
    },
    {
      "model": "demo-model_step_10000.pt",
      "timeout": -1,
      "on_timeout": "unload",
      "model_root": "…/other_models",
      "opt": {
        "batch_size": 1,
        "beam_size": 10
      }
    }
  ]
}

It seems there is an issue with your CUDA setup.
Do you manage to run anything (translate / train / other toolkits maybe) on GPU on the same setup?
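A quick way to check this, independent of OpenNMT-py, is to ask PyTorch directly whether it can see a GPU. A minimal sketch (the printed messages here are illustrative, not OpenNMT-py output):

```python
# Sanity-check the CUDA setup before starting the translation server.
# "cuda runtime error (30): unknown error" usually points at a broken
# driver/runtime installation rather than at OpenNMT-py itself.

def check_cuda() -> str:
    """Return a short, human-readable status of the CUDA setup."""
    try:
        # Imported lazily so the check degrades gracefully if PyTorch
        # itself is missing from the environment.
        import torch
    except ImportError:
        return "PyTorch is not installed"
    if torch.cuda.is_available():
        return "CUDA OK: " + torch.cuda.get_device_name(0)
    return "CUDA not available to PyTorch (check driver / CUDA toolkit)"

if __name__ == "__main__":
    print(check_cuda())
```

If this reports that CUDA is not available, reinstalling or repairing the NVIDIA driver / CUDA toolkit is the place to start; the server's `"gpu": 0` option can only work once this check passes.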

My CUDA setup had some issues, so I reinstalled OpenNMT-py with Anaconda (Python 3.7) along with PyTorch. Now when I run server.py I get the following error:

(base) victor@mudaki:~/OpenNMT-py$ python server.py --ip $IP --port $PORT --url_root $URL_ROOT --config $CONFIG
Traceback (most recent call last):
  File "server.py", line 129, in <module>
    debug=args.debug)
  File "server.py", line 24, in start
    translation_server.start(config_file)
  File "/home/victor/OpenNMT-py/onmt/translate/translation_server.py", line 81, in start
    self.confs = json.load(f)
  File "/home/victor/anaconda3/lib/python3.7/json/__init__.py", line 296, in load
    parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
  File "/home/victor/anaconda3/lib/python3.7/json/__init__.py", line 348, in loads
    return _default_decoder.decode(s)
  File "/home/victor/anaconda3/lib/python3.7/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/home/victor/anaconda3/lib/python3.7/json/decoder.py", line 353, in raw_decode
    obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Invalid control character at: line 6 column 46 (char 137)

My conf.json is:

{
  "models_root": "./available_models",
  "models": [
    {
      "id": 100,
      "model": "demo-model_step_5000.pt
",
      "timeout": 600,
      "on_timeout": "to_cpu",
      "load": true,
      "opt": {
        "gpu": 0,
        "beam_size": 5
      },
      "tokenizer": {
        "type": "sentencepiece",
        "model": "wmtenfr.model"
      }
    },
    {
      "model": "demo-model_step_000.pt
",
      "timeout": -1,
      "on_timeout": "unload",
      "model_root": "…/other_models",
      "opt": {
        "batch_size": 1,
        "beam_size": 10
      }
    }
  ]
}
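The "Invalid control character" error comes from the raw line break inside the "model" string values: Python's strict JSON parser rejects unescaped control characters (such as newlines) inside string literals. A minimal standalone reproduction (not the server code, just the parsing behaviour):

```python
import json

# A raw newline inside a JSON string is an unescaped control character,
# which json.loads rejects with the same JSONDecodeError seen above.
broken = '{"model": "demo-model_step_5000.pt\n"}'
try:
    json.loads(broken)
except json.JSONDecodeError as err:
    print(err)  # prints an "Invalid control character at: ..." message

# With the line break removed, the same document parses fine.
fixed = '{"model": "demo-model_step_5000.pt"}'
print(json.loads(fixed))  # {'model': 'demo-model_step_5000.pt'}
```

Running the config file through `json.load` like this before starting the server is a quick way to catch such syntax problems early.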

I have cleared the error above by removing the line break between model.pt and the closing double quote. Thank you.