Error when continuing training

I would like to continue training with my old data set. Here is my command:
th train.lua -data data/demo-train.t7 -save_model model/model -save_every 50 -train_from model/model_epoch24_5.36.t7 -continue -gpuid 5,6

I get this error:
[05/17/17 15:47:29 INFO] Using GPU(s): 5, 6
[05/17/17 15:47:29 WARNING] The caching CUDA memory allocator is enabled. This allocator improves performance at the cost of a higher GPU memory usage. To optimize for memory, consider disabling it by setting the environment variable: THC_CACHING_ALLOCATOR=0
[05/17/17 15:47:29 INFO] Training Sequence to Sequence with Attention model...
[05/17/17 15:47:29 INFO] Loading data from 'data/demo-train.t7'...
[05/17/17 15:51:11 INFO] * vocabulary size: source = 50004; target = 50004
[05/17/17 15:51:11 INFO] * additional features: source = 0; target = 0
[05/17/17 15:51:11 INFO] * maximum sequence length: source = 50; target = 51
[05/17/17 15:51:11 INFO] * number of training sentences: 6886328
[05/17/17 15:51:11 INFO] * number of batches: 107627
[05/17/17 15:51:11 INFO] - source sequence lengths: equal
[05/17/17 15:51:11 INFO] - maximum size: 64
[05/17/17 15:51:11 INFO] - average size: 63.98
[05/17/17 15:51:11 INFO] - capacity: 100.00%
[05/17/17 15:51:11 INFO] Loading checkpoint 'model/model_epoch24_5.36.t7'...
[05/17/17 15:51:14 INFO] Resuming training from epoch 25 at iteration 1...
[05/17/17 15:51:16 INFO] Preparing memory optimization...
/usr/local/bin/luajit: /usr/local/share/lua/5.1/nn/MM.lua:22: input tensors must be 2D or 3D
stack traceback:
[C]: in function 'assert'
/usr/local/share/lua/5.1/nn/MM.lua:22: in function 'func'
/usr/local/share/lua/5.1/nngraph/gmodule.lua:345: in function 'neteval'
/usr/local/share/lua/5.1/nngraph/gmodule.lua:380: in function 'updateOutput'
./onmt/modules/Network.lua:16: in function 'func'
/usr/local/share/lua/5.1/nngraph/gmodule.lua:345: in function 'neteval'
/usr/local/share/lua/5.1/nngraph/gmodule.lua:380: in function 'forward'
./onmt/modules/Decoder.lua:296: in function 'forwardOne'
./onmt/modules/Decoder.lua:334: in function 'forwardAndApply'
./onmt/modules/Decoder.lua:360: in function 'forward'
./onmt/Seq2Seq.lua:207: in function 'trainNetwork'
./onmt/utils/Memory.lua:40: in function 'optimize'
./onmt/train/Trainer.lua:94: in function '__init'
/usr/local/share/lua/5.1/torch/init.lua:91: in function 'new'
train.lua:172: in function 'main'
train.lua:178: in main chunk
[C]: in function 'dofile'
/usr/local/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x00406020

How can I deal with this error?
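Separately, in case memory is a factor: the WARNING at the top of the log says the caching CUDA allocator can be disabled via an environment variable, so one retry I could attempt (a sketch of what the log itself suggests, not a known fix for the assertion) would be:

```shell
# Per the WARNING line in the log above: disable the caching CUDA memory
# allocator to trade some speed for lower GPU memory usage.
export THC_CACHING_ALLOCATOR=0
# Then re-run the same `th train.lua ... -continue` command as before.
```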

Which OpenNMT version are you using and what options did you use for the initial training?