After successfully training on the opensub dataset from the English chatbot tutorial, I get an error when trying to test the model. Here's what I did for preprocessing and training:
th preprocess.lua -train data/opensub_qa_en/train.txt -valid data/opensub_qa_en/valid.txt -save_data data/opensub_qa_en -data_type monotext
th train.lua -gpuid 1 -data data/opensub_qa_en/train.t7 -save_model data/opensub_qa_en/model -rnn_size 256 -model_type lm
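In case it helps with diagnosing this, here is a minimal snippet for inspecting what the saved checkpoint contains from the th REPL. This is only a diagnostic sketch: the models and options keys are assumptions based on my reading of Translator.lua, so adjust if the actual checkpoint layout differs.

-- Diagnostic sketch: inspect the saved checkpoint from the th REPL.
-- The 'models' and 'options' keys are assumptions from my reading of
-- Translator.lua; adjust if the actual layout differs.
local checkpoint = torch.load('data/opensub_qa_en/model_final.t7')
print(checkpoint.options)             -- training options stored with the model
for name, _ in pairs(checkpoint.models) do
  print(name)                         -- e.g. 'encoder' and 'decoder' for a seq2seq model
end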
Then I ran translation:
th translate.lua -model data/opensub_qa_en/model_final.t7 -gpuid 1 -src data/opensub_qa_en/test-src.txt -output data/opensub_qa_en/file-tgt.tok -tgt data/opensub_qa_en/test-tgt.txt
which fails with:
[03/15/17 10:52:07 INFO] Using GPU(s): 1
[03/15/17 10:52:07 WARNING] The caching CUDA memory allocator is enabled. This allocator improves performance at the cost of a higher GPU memory usage. To optimize for memory, consider disabling it by setting the environment variable: THC_CACHING_ALLOCATOR=0
[03/15/17 10:52:07 INFO] Loading 'data/opensub_qa_en/model_final.t7'...
/home/brian/torch/install/bin/luajit: ./onmt/modules/Decoder.lua:63: attempt to index local 'pretrained' (a nil value)
stack traceback:
./onmt/modules/Decoder.lua:63: in function 'loadDecoder'
./onmt/translate/Translator.lua:40: in function '__init'
/home/brian/torch/install/share/lua/5.1/torch/init.lua:91: in function 'new'
translate.lua:50: in function 'main'
translate.lua:170: in main chunk
[C]: in function 'dofile'
...rian/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x00405d50
I get the same error with the released model, and also when running on CPU.
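For what it's worth, the message itself is LuaJIT's standard nil-index failure, which suggests loadDecoder received nil where it expected the saved decoder table. A two-line Lua illustration of the error class (the field name modules is made up for the example):

local pretrained = nil
print(pretrained.modules)   -- error: attempt to index local 'pretrained' (a nil value)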