Incremental training could be achieved through the -continue option, I guess.
Here is a conversation on Gitter (a usage sketch follows after the quote):
Guys, reading this from the doc: "If training from a checkpoint, whether to continue the training in the same configuration or not." How different can the configuration be in continue mode?
not the place, sorry
You use -continue when you stopped a training and want to resume it from the last checkpoint. Without -continue it is a new training using the parameters of the checkpoint.
My question was more about: I presume you need to keep the shape of the RNN, but can we, for instance, change the dropout ratio for a "resumed" training?
You might, for instance, want to change the learning rate or the printing criterion.
No, you currently can't change the topology of the model at all.
It's not clear how you would map the old parameters to the new ones, for instance.
Changing dropout is currently not supported; if there were a use case, that could be done in theory.
(Mind posting to the forum with the answers, though? I think other people will have this issue.)
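
To make the two modes concrete, here is a minimal sketch (assuming the LuaTorch train.lua; the data file and checkpoint names are placeholders, adapt them to your own run):

```
# Resume an interrupted run with exactly the same configuration:
th train.lua -data data/demo-train.t7 -save_model demo-model \
    -train_from demo-model_epoch7_5.43.t7 -continue

# Start a new training from the checkpoint's parameters, overriding
# e.g. the learning rate (note: no -continue here):
th train.lua -data data/demo-train.t7 -save_model demo-model \
    -train_from demo-model_epoch7_5.43.t7 -learning_rate 0.5
```

In both cases the model topology must match the checkpoint; only the training options (learning rate, etc.) can differ, and only in the second form.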