OpenNMT Forum

Fine-tuning going into an infinite loop

I have built a generic baseline model and I’m trying to fine-tune it to a specific domain. I used the train_from option to load my model, but the training appears to go into an infinite loop.
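For context, a fine-tuning run like this is typically driven by a YAML config of roughly the following shape (a minimal sketch for OpenNMT-py 2.x; the vocab/model paths and step counts are placeholders, not values from this thread):

```yaml
# Hypothetical fine-tuning config; only the corpus filenames come from the thread.
data:
    in_domain:
        path_src: bpe.nmine.en
        path_tgt: bpe.nmine.hi
src_vocab: vocab.src        # placeholder path
tgt_vocab: vocab.tgt        # placeholder path
train_from: baseline_model.pt   # the generic baseline checkpoint
train_steps: 5000
save_model: finetuned_model
```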

The message “[2020-12-11 15:25:41,054 INFO] Loading ParallelCorpus(bpe.nmine.en, bpe.nmine.hi, align=None)…” keeps repeating in the log.

Any help appreciated.



Your dataset is probably relatively small, and loaded very often.

This could probably be optimised, but it’s not a top priority since the kinds of models trained with the framework need relatively big datasets anyway.

Dear Prashanth,

It is okay; as far as I can tell, it does not affect the quality in any way.

Kind regards,

Doesn’t it dilute the purpose of naming it “on-the-fly”? The author in the paper “” states “using only the top-1 retrieved pair for updating the model”. I wanted to try this with OpenNMT but I’m encountering the data-size issue. @francoishernandez, is there a way we can do this?


As stated earlier, this data loading pipeline is mainly aimed at big datasets, not very small ones. It works, though; it’s just not optimised and a bit too verbose.

As for your second question, what exactly is the need? At first glance it looks a bit like this:

I was trying to load around 500 sentences, and the log is far too verbose. Is the sentence-wise weights option included in OpenNMT? I don’t see any documentation for it. It would be helpful if you could point me to the documentation, if any exists.

The link I posted is an open PR. It’s not in the main repo. You may pull the branch if you want to try.

You won’t go very far with 500 sentences. Even if you’re only finetuning, you would encounter catastrophic forgetting very very fast.

Hi @francoishernandez,
I don’t see the -sentence_weights option in the docs. Could you help me with its usage?


This means it’s not in the released version of OpenNMT-py, hence it’s not in the docs either.
Also, this was based on the legacy version of OpenNMT-py and is not compatible with 2.0.
Still, it should work standalone if you want to try it.
The use of the option is explained in the PR.

I introduce the -sentence_weights option, which accepts one or more text files containing a weight for each sentence/example. If several corpora are passed following the #1413 upgrades, matching weight files should be passed as well. If we want/have weights for only some of the corpora in the list, we can pass None/none instead of a filename; it will be cast to Python None by argparse, and weights of 1 will be assigned.
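The None/none casting and the default weight of 1 can be sketched with plain argparse (a minimal illustration of the mechanism described above, not the actual PR code; the function names here are made up):

```python
import argparse

def none_or_str(value):
    """Cast the literal strings 'None'/'none' to Python None, as described in the PR."""
    return None if value.lower() == "none" else value

parser = argparse.ArgumentParser()
# One entry per corpus: either a weights file path, or None/none as a placeholder.
parser.add_argument("-sentence_weights", nargs="+", type=none_or_str)

args = parser.parse_args(["-sentence_weights", "weights_a.txt", "none"])
# args.sentence_weights is now ["weights_a.txt", None]

def load_weights(path, num_examples):
    """Read one float weight per line; default every weight to 1 when no file is given."""
    if path is None:
        return [1.0] * num_examples
    with open(path) as f:
        return [float(line.strip()) for line in f]
```

With this shape, a corpus whose weights file was passed as `none` simply gets a uniform weight of 1 for every example.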