server.py: error: argument --ip: expected one argument
(base) mudaki@Mudaki:~/OpenNMT-py$ export IP="0.0.0.0"
(base) mudaki@Mudaki:~/OpenNMT-py$ export PORT=5000
(base) mudaki@Mudaki:~/OpenNMT-py$ export URL_ROOT="/translator"
(base) mudaki@Mudaki:~/OpenNMT-py$ export CONFIG="./available_models/example.conf.json"
(base) mudaki@Mudaki:~/OpenNMT-py$ python server.py --ip $IP --port $PORT --url_root $URL_ROOT --config $CONFIG
Pre-loading model 100
[2019-09-12 10:10:09,588 INFO] Loading model 100
THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1556653000816/work/aten/src/THC/THCGeneral.cpp line=51 error=30 : unknown error
Traceback (most recent call last):
  File "server.py", line 129, in <module>
    debug=args.debug)
  File "server.py", line 24, in start
    translation_server.start(config_file)
  File "/home/mudaki/OpenNMT-py/onmt/translate/translation_server.py", line 103, in start
    self.preload_model(opt, model_id=model_id, **kwargs)
  File "/home/mudaki/OpenNMT-py/onmt/translate/translation_server.py", line 141, in preload_model
    model = ServerModel(opt, model_id, **model_kwargs)
  File "/home/mudaki/OpenNMT-py/onmt/translate/translation_server.py", line 231, in __init__
    self.load()
  File "/home/mudaki/OpenNMT-py/onmt/translate/translation_server.py", line 288, in load
    raise ServerModelError("Runtime Error: %s" % str(e))
onmt.translate.translation_server.ServerModelError: Runtime Error: cuda runtime error (30) : unknown error at /opt/conda/conda-bld/pytorch_1556653000816/work/aten/src/THC/THCGeneral.cpp:51
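For reference, a quick sanity check I can run outside the server (my own diagnostic sketch, not OpenNMT-py code) to see whether this PyTorch install can initialize CUDA at all:

import torch

# Does this PyTorch build see a GPU, and can it create a CUDA context?
print(torch.__version__)
print(torch.cuda.is_available())          # False here would already explain the error
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # name of GPU 0, the device the config asks for
    x = torch.zeros(1).cuda()             # forces CUDA initialization, like loading the model
    print(x)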
My configuration file is:
{
    "models_root": "./available_models",
    "models": [
        {
            "id": 100,
            "model": "demo-model_step_5000.pt",
            "timeout": 600,
            "on_timeout": "to_cpu",
            "load": true,
            "opt": {
                "gpu": 0,
                "beam_size": 5
            },
            "tokenizer": {
                "type": "sentencepiece",
                "model": "wmtenfr.model"
            }
        },
        {
            "model": "demo-model_step_10000.pt",
            "timeout": -1,
            "on_timeout": "unload",
            "model_root": "../other_models",
            "opt": {
                "batch_size": 1,
                "beam_size": 10
            }
        }
    ]
}
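To rule out the checkpoint file itself, a minimal sketch (assuming the paths in the config above) that loads the first model on CPU only, bypassing CUDA entirely:

import torch

# Load the checkpoint with map_location="cpu"; if this succeeds, the .pt file
# is readable and the failure above is in CUDA initialization, not in the model.
ckpt = torch.load("./available_models/demo-model_step_5000.pt",
                  map_location="cpu")
print(type(ckpt))
print(list(ckpt.keys()) if isinstance(ckpt, dict) else "loaded")

If that loads cleanly, I assume I could also work around the issue by setting "gpu": -1 in the opt block to run on CPU (as translate.py does by default), but I would prefer to get the GPU path working.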