Yesterday I did a git pull and trained a model on a tokenized dataset with BPE. I'm trying to translate with:
th translate.lua -src ${path}/${test_} -output ${path}/models/pred_dev/pred.tok.${f}.txt -tok_tgt_detokenize_output true -tok_tgt_joiner_annotate true -tok_tgt_case_feature true -model ${path}/models/${f} -gpuid 1
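For reference, with the variables expanded (${path} and ${f} taken from the model path in the log below; the ${test_} file name here is only an illustration, not my real one), the command is equivalent to:

th translate.lua -src /home/German/datasets/EN-ES/exp17_12_20/dev.en.tok -output /home/German/datasets/EN-ES/exp17_12_20/models/pred_dev/pred.tok._epoch10_3.18.t7.txt -tok_tgt_detokenize_output true -tok_tgt_joiner_annotate true -tok_tgt_case_feature true -model /home/German/datasets/EN-ES/exp17_12_20/models/_epoch10_3.18.t7 -gpuid 1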
and I get the following error:
[12/21/17 14:45:53 INFO] Using GPU(s): 1
[12/21/17 14:45:53 WARNING] The caching CUDA memory allocator is enabled. This allocator improves performance at the cost of a higher GPU memory usage. To optimize for memory, consider disabling it by setting the environment variable: THC_CACHING_ALLOCATOR=0
[12/21/17 14:45:53 INFO] Loading '/home/German/datasets/EN-ES/exp17_12_20/models/_epoch10_3.18.t7'...
[12/21/17 14:45:54 INFO] Model seq2seq trained on bitext
[12/21/17 14:45:54 INFO] Using on-the-fly 'space' tokenization for input 1
[12/21/17 14:45:54 INFO] Using on-the-fly 'space' tokenization for input 2
[12/21/17 14:45:55 INFO] SENT 1: excuse│C me│L ■,│N do│L you│L have│L the│L time│L ■?│N @│N ■@│N
/home/torch/install/bin/luajit: translate.lua:258: attempt to index local 'outFile' (a nil value)
stack traceback:
translate.lua:258: in function 'main'
translate.lua:348: in main chunk
[C]: in function 'dofile'
/home/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x00405d50
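For what it's worth, I can reproduce the same error message with a tiny Lua snippet, assuming (I haven't checked what translate.lua actually does at line 258) that it opens the -output path with io.open and writes to it without checking the result:

-- minimal sketch; '/some/missing_dir/pred.txt' is a made-up path
local outFile = io.open('/some/missing_dir/pred.txt', 'w')
-- io.open returns nil (plus an error string) when the file cannot be created,
-- e.g. because the parent directory does not exist or is not writable
outFile:write('test')  -- fails with: attempt to index local 'outFile' (a nil value)

Could it simply be that the pred_dev directory has to exist before running translate.lua?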
What am I doing wrong?
Regards