ZeroDivisionError during training

Hi,
I want to fine-tune an OpenNMT-tf model on a small corpus of only a few dozen sentences. I fine-tune the model for just 6 steps and would like to see training summaries along the way, so I set save_summary_steps to 2.
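For context, the relevant training options would look roughly like this. This is only an illustration written as a Python dict; OpenNMT-tf normally reads these options from a YAML configuration file, and the max_step entry is an assumption about how the 6-step limit is set:

config = {
    "train": {
        "save_summary_steps": 2,  # write a training summary every 2 steps
        "max_step": 6,  # assumption: stop fine-tuning after 6 steps
    },
}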
During training, the following error occurs:

2022-05-25 17:41:48.203000: I runner.py:281] Number of model parameters: 274700545
2022-05-25 17:41:48.219000: I runner.py:281] Number of model weights: 260 (trainable = 260, non trainable = 0)
Traceback (most recent call last):
  File "d:\python\python38\lib\runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "d:\python\python38\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "D:\Virtualenv\tf26gpu\Scripts\onmt-main.exe\__main__.py", line 7, in <module>
  File "D:\Virtualenv\tf26gpu\lib\site-packages\opennmt\bin\main.py", line 318, in main
    runner.train(
  File "D:\Virtualenv\tf26gpu\lib\site-packages\opennmt\runner.py", line 281, in train
    summary = trainer(
  File "D:\Virtualenv\tf26gpu\lib\site-packages\opennmt\training.py", line 128, in __call__
    self._training_stats.log(self.is_master)
  File "D:\Virtualenv\tf26gpu\lib\site-packages\opennmt\training.py", line 609, in log
    summary = self.get_last_summary()
  File "D:\Virtualenv\tf26gpu\lib\site-packages\opennmt\training.py", line 579, in get_last_summary
    "steps_per_sec": (self._last_step - self._last_logged_step) / elapsed_time,
ZeroDivisionError: float division by zero

My guess is that training is so fast that elapsed_time ends up being zero.
Is there a solution for this error?
Thanks!

Hi,

I'm updating the code so that elapsed_time is at least 1 millisecond. This should cover the edge case you encountered.
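For anyone interested, here is a minimal sketch of what such a guard could look like. It is only an illustration written against the frame shown in the traceback above, not the actual OpenNMT-tf patch; the class and attribute names are simplified and the 1 millisecond floor follows the description:

import time


class TrainingStats:
    # Minimal sketch (illustration only, not the actual OpenNMT-tf code) of
    # guarding the steps-per-second computation against a zero elapsed time.

    def __init__(self):
        self._last_step = 0
        self._last_logged_step = 0
        self._last_logged_time = time.time()

    def update(self, step):
        # Record the current training step.
        self._last_step = step

    def get_last_summary(self):
        # Clamp the elapsed time to at least 1 millisecond so that a very
        # fast logging interval can no longer trigger a division by zero.
        elapsed_time = max(time.time() - self._last_logged_time, 1e-3)
        summary = {
            "steps_per_sec": (self._last_step - self._last_logged_step) / elapsed_time,
        }
        self._last_logged_step = self._last_step
        self._last_logged_time = time.time()
        return summary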


Thank you!