Size of 'h' is zero during beam search (in Beam.lua)

I got an error when running the translation command below:

command : th translate.lua -replace_unk -model $model
-src $test_src_tok -output exp/$expnum/test.tok -gpuid 1
(src text is obtained by tools/tokenize.lua, model is obtained by train.lua)

error : torch/Tensor.lua:462: bad argument #1 to 'set' (expecting number or Tensor or Storage)
./onmt/translate/Beam.lua:107: in function ‘func’
./onmt/utils/Tensor.lua:12: in function ‘recursiveApply’
./onmt/utils/Tensor.lua:7: in function ‘selectBeam’
./onmt/translate/Beam.lua:312: in function ‘_nextState’
./onmt/translate/Beam.lua:301: in function ‘_nextBeam’
./onmt/translate/BeamSearcher.lua:95: in function ‘_findKBest’
./onmt/translate/BeamSearcher.lua:58: in function ‘search’
./onmt/translate/Translator.lua:196: in function ‘translateBatch’
./onmt/translate/Translator.lua:269: in function ‘translate’

The error occurs because 'h' (a torch.CudaTensor) has size 0
(i.e. print(#h) returns [torch.LongStorage of size 0]).
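For reference, the degenerate state can be reproduced outside OpenNMT; the snippet below is a minimal sketch (the `isEmpty` helper is illustrative, not part of the OpenNMT code) showing that an uninitialized torch tensor has `dim() == 0`, which matches the reported `[torch.LongStorage of size 0]`:

```lua
local torch = require('torch')

-- An uninitialized tensor has no sizes, matching the reported state of 'h'.
local h = torch.Tensor()
print(h:dim())       -- 0
print(h:nElement())  -- 0

-- A hypothetical defensive check before selecting into the beam:
local function isEmpty(t)
  return torch.isTensor(t) and t:dim() == 0
end

assert(isEmpty(h))
```

Passing such a tensor where `set` expects a number, Tensor, or Storage produces exactly the "bad argument #1" error in the trace above.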

Can you give a hint as to what the problem might be in this case?

I really appreciate your comment.

Do you have local modifications?

Also, is your Torch installation up-to-date?

Yes. Both OpenNMT and Torch7 are up to date.

I updated OpenNMT with 'git pull'
-> Already up-to-date.

I updated torch7 with
luarocks install torch
luarocks install nn
luarocks install cutorch
luarocks install cunn
without error.

The result is the same.

The trained model comes from an older version of OpenNMT (about two weeks old), but without any modification to the source code.

In case you want to look into this issue further, I attach the trained model and test text below.

Test Command :
th translate.lua -replace_unk -model smallmodel_epoch7_7.10.t7
-src -output output.txt -gpuid 1

Thank you, that is really helpful.

However, I did not encounter any issue when translating your test data. Maybe you should just reinstall Torch from scratch.