OpenNMT Forum

WARNING:tensorflow:A checkpoint was restored but not all checkpointed values were used

Hello,

Each time I train a new model, I notice the following warning at the end of the training log file:

INFO:tensorflow:Restored checkpoint path/ckpt-20000
INFO:tensorflow:Averaging 6 checkpoints…
INFO:tensorflow:Reading checkpoint path/ckpt-15000…
INFO:tensorflow:Reading checkpoint path/ckpt-16000…
INFO:tensorflow:Reading checkpoint path/ckpt-17000…
INFO:tensorflow:Reading checkpoint path/ckpt-18000…
INFO:tensorflow:Reading checkpoint path/ckpt-19000…
INFO:tensorflow:Reading checkpoint path/ckpt-20000…
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.loss_scale
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.base_optimizer
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.loss_scale.current_loss_scale
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.loss_scale.good_steps
WARNING:tensorflow:A checkpoint was restored (e.g. tf.train.Checkpoint.restore or tf.keras.Model.load_weights) but not all checkpointed values were used. See above for specific issues. Use expect_partial() on the load status object, e.g. tf.train.Checkpoint.restore(…).expect_partial(), to silence these warnings, or use assert_consumed() to make the check explicit. See https://www.tensorflow.org/guide/checkpoint#loading_mechanics for details.

Do you know what causes this warning and how it could be resolved? Does it affect the model in any way?

Thank you

Hi,

I don’t think this is a problem. Do you have any issues when using the trained model afterwards?
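For context, a minimal sketch of what the warning message itself suggests: when a restore intentionally uses only part of a checkpoint (for example, model weights without optimizer state such as `loss_scale`), calling `expect_partial()` on the load status declares the unused values as expected and silences the warning. The variable names below are purely illustrative, not from OpenNMT-tf:

```python
import tensorflow as tf

# Save a checkpoint containing both a "model" value and an extra
# "optimizer-like" value that the later restore will not use.
model_var = tf.Variable(1.0, name="model_var")
extra_var = tf.Variable(2.0, name="extra_var")
ckpt = tf.train.Checkpoint(model=model_var, optimizer=extra_var)
path = ckpt.save("/tmp/demo_ckpt/ckpt")

# Restore only the model value. Without expect_partial(), TensorFlow
# warns that the checkpointed "optimizer" value was never used.
restored_var = tf.Variable(0.0)
status = tf.train.Checkpoint(model=restored_var).restore(path)
status.expect_partial()  # mark the unused checkpointed values as intentional
```

This matches the situation in the log above: checkpoint averaging only needs the model weights, so optimizer bookkeeping values stored in the checkpoint go unresolved, which is harmless.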