OpenNMT Forum

Does continuing training start from unseen/less-seen data?

If I’m correct, training data is shuffled in both onmt-py and onmt-tf. My question concerns further training: assume you are training a model but, for some reason, you have to stop training or the training process crashes. OpenNMT provides the option to continue training from the last checkpoint, which is nice. Ideally, though, this would mean that training continues with data the model has not seen before (or at least has seen less frequently than other samples). My question is whether that is the case in the current implementation. If not, it may give unexpected results (e.g. some samples being seen twice and others never), as sketched below. To be fair, I am not sure whether this is even possible or how hard it would be to implement. I am interested in both the py and tf versions.
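To make the concern concrete, here is a toy simulation (not OpenNMT code, just an assumption-laden sketch): if a run crashes partway through an epoch and is resumed with a fresh shuffle and no memory of which samples were already consumed, some samples end up seen twice and others not at all.

```python
import random

# Toy illustration only: one "epoch" over 10 samples that crashes halfway
# and is resumed with a fresh shuffle, without remembering what was seen.
samples = list(range(10))

random.seed(0)
first_order = random.sample(samples, len(samples))
seen_before_crash = first_order[:5]      # training crashes after 5 samples

random.seed(1)
resumed_order = random.sample(samples, len(samples))
seen_after_resume = resumed_order[:5]    # resumed run processes another 5

seen = seen_before_crash + seen_after_resume
print("seen twice:", sorted(s for s in samples if seen.count(s) == 2))
print("never seen:", sorted(s for s in samples if s not in seen))
```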

Hey there,
For reference, a similar question has already been discussed a bit for -py here: "Train from" - choosing preprocess chunk

To make this transparent, we could probably store a dict like {<dataset_name>: <current_shard_number>} in the checkpoint. We’d gladly accept a PR for such a feature if you feel like diving into it.
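As a rough sketch of that idea (this is not the OpenNMT-py API; the function names and checkpoint fields below are hypothetical), the trainer would save per-dataset shard progress alongside the model and optimizer state, and a resumed run would read it back to skip the shards it has already consumed:

```python
import torch

# Hypothetical sketch: persist which shard of each dataset the trainer has
# reached, so a resumed run can continue from the next unseen shard.

def save_checkpoint(path, model, optimizer, shard_progress):
    # shard_progress is e.g. {"train_corpus": 3}, meaning shard 3 is next.
    torch.save(
        {
            "model": model.state_dict(),
            "optim": optimizer.state_dict(),
            "shard_progress": shard_progress,
        },
        path,
    )

def load_shard_progress(path):
    checkpoint = torch.load(path, map_location="cpu")
    # Older checkpoints without this field simply restart from shard 0.
    return checkpoint.get("shard_progress", {})

# When iterating shards for a dataset after --train_from, skip ahead:
# start = load_shard_progress("model_step_1000.pt").get("train_corpus", 0)
# for shard_id in range(start, num_shards):
#     train_on_shard(shard_id)
```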
