Retraining fails from pretrained models

(Maria Sukhareva) #1

I downloaded a model from the pretrained Torch models.

I'm trying to load the parameters and train a new model on my data.
Unfortunately, both models throw the same error. I also have models trained in OpenNMT-py, but they don't seem to be compatible with the Lua version.

th train.lua -train_from /wmt-ende-with-bt_l2-h1024-bpe32k_release/model-ende_epoch7_4.12_release.t7 -update_vocab replace -data opus_retrain-train.t7 -save_model opus_retrained

The error:

[07/24/18 15:01:04 INFO] Loading checkpoint '/wmt-ende-with-bt_l2-h1024-bpe32k_release/model-ende_epoch7_4.12_release.t7'...
/torch/install/bin/lua: ./onmt/train/Saver.lua:64: attempt to index field 'info' (a nil value)
stack traceback:
./onmt/train/Saver.lua:64: in function 'loadCheckpoint'
train.lua:246: in function 'loadModel'
train.lua:320: in function 'main'
train.lua:338: in main chunk
[C]: in function 'dofile'
…0896/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: in ?
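For context (my own reading of the trace, not confirmed by the developers): `_release.t7` checkpoints are produced by `release_model.lua`, which strips the training state (including the `info` table with the optimizer and epoch data) to shrink the file, so when `-train_from` expects that state, `Saver.lua` indexes a nil `info` field. An illustrative sketch of the kind of guard that would make the failure explicit (not the actual `Saver.lua` code):

```lua
-- Illustrative sketch: fail with a clear message when the checkpoint
-- lacks training state, instead of crashing on a nil 'info' field.
local checkpoint = torch.load(opt.train_from)
if checkpoint.info == nil then
  error('checkpoint appears to be a released model (training state was '
        .. 'stripped by release_model.lua); -train_from needs a '
        .. 'non-release checkpoint saved during training')
end
```

If this reading is right, only a checkpoint saved during training (not the released one) can be used with `-train_from`.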

Is there a way around it or is my only way to train those models myself?

(Guillaume Klein) #2


(Maria Sukhareva) #3

Thanks. So there's no way around it but to train everything again…