Learning rate decay

(Arun) #1

I trained my parallel corpus with train.lua and ended up with epoch13.57.23.t7 and the other epoch files saved in a folder, but I accidentally closed the terminal, so I never saw the learning rate decay. Is there any way to recover the learning rates that were used for these epoch files?

(Guillaume Klein) #2

There is no straightforward way to do this, but it is possible to retrieve the values from the checkpoint file.
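As an illustration, the checkpoint can be opened in the th REPL. This is a minimal sketch: `torch.load` is standard Torch, but the field layout of an OpenNMT checkpoint is not spelled out here, so the code prints the table keys rather than assuming names.

```lua
-- Minimal sketch (Lua/Torch): load a saved checkpoint and look inside it.
-- torch.load is standard; the exact field layout of the checkpoint table
-- is an assumption, so list its keys first instead of guessing.
require('torch')

local checkpoint = torch.load('epoch13.57.23.t7')

-- Print the top-level fields of the checkpoint table.
for key, _ in pairs(checkpoint) do
  print(key)
end

-- If an optimizer/training state was saved, the learning rate lives in
-- one of these sub-tables; drill down with print() until you find it.
```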

I will not be able to assist you further on this, though. Next time, redirect the logs to a file on disk.
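For the redirection tip above, `tee` keeps a copy of everything printed to the terminal. The `train.lua` flags in the comment are illustrative, not the poster's actual invocation, and the exact wording of the log lines may differ:

```shell
# Next time, duplicate the training output to a file while still watching it live:
#   th train.lua -data data/demo-train.t7 -save_model demo-model 2>&1 | tee train.log

# tee writes its input both to the terminal and to the named file:
printf 'Epoch 1 ; Learning rate 1.0000\n' | tee train.log

# Later, the decay schedule can be pulled back out of the saved log:
grep -i 'learning rate' train.log
```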