I’d like to refine (retrain) the 2-layer LSTM with copy attention for summarization in OpenNMT-py, and I’m unclear on whether I have to start training from scratch. I’ve seen this, http://opennmt.net/OpenNMT/training/retraining/, but is that only for the Lua version? I’m new to PyTorch.

I’ve downloaded the model, gigaword_copy_acc_51.78_ppl_11.71_e20.pt. How do I extract the vocabulary from it to continue training? I don’t need to change the model architecture or the vocabulary; I’m just unsure about the steps. Also, I don’t have easy access to a GPU.
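For context, here’s a minimal sketch of what I’ve been assuming so far: that the .pt file is a standard torch-serialized dict I can inspect on CPU. The keys in the stand-in dict below are my guess at the structure, not the real checkpoint contents, and the stand-in file is just so the snippet runs without the download:

```python
import os
import tempfile

import torch

# Hypothetical stand-in for gigaword_copy_acc_51.78_ppl_11.71_e20.pt;
# the real checkpoint presumably holds model weights, training options,
# and the vocabulary under keys like these (guessed, not verified).
fake_ckpt = {
    "model": {},
    "opt": {},
    "vocab": {"src": ["the", "a"], "tgt": ["le", "un"]},
}

path = os.path.join(tempfile.mkdtemp(), "model.pt")
torch.save(fake_ckpt, path)

# Load on CPU, since I don't have easy access to a GPU.
ckpt = torch.load(path, map_location="cpu")
vocab = ckpt.get("vocab")
print(sorted(ckpt.keys()))  # e.g. ['model', 'opt', 'vocab']
```

Is this the right way to get at the vocabulary, or is there a proper OpenNMT-py command for it?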
Thanks so much,