OpenNMT

Load OpenNMT-tf saved model

Hi, I have an OpenNMT-tf trained model in SavedModel format. I am trying to load the model using this code, but I am stuck with this error:

tensorflow.python.framework.errors_impl.NotFoundError: Op type not registered 'Addons>GatherTree' in binary running on 13170b6b4a09. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) tf.contrib.resampler should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.

I have also tried to load the model using this, but got the same error.

How do I load the model and check its summary?

Hi,

2 possible solutions (choose only one):

  1. Re-export the trained model using the latest OpenNMT-tf version

  2. Add the following lines after the TensorFlow import at the top of your Python code:

import tensorflow_addons as tfa
tfa.register_all()

Yes, it works!!
Thank You.
But the loaded model does not show its summary:
AttributeError: 'TransformerBase' object has no attribute 'summary'

What code are you trying to run exactly?

summary is not a method of model instances: opennmt.models.Model — OpenNMT-tf 2.14.0 documentation

I have used
model = tf.keras.models.load_model(saved_model_path).
Yes, summary() is not an OpenNMT model method, but it is the TF method to see the layers.

OpenNMT-tf models are not compatible with tf.keras.models.load_model.

Why? My model is trained using OpenNMT-tf with TF v2.2.0.

OpenNMT-tf models are not instances of tf.keras.Model so they can’t be loaded with this function.

The example you linked in the first post shows how to load and run a SavedModel exported from OpenNMT-tf.

Okay.
But if I convert this model to TFLite, will you help me understand this error?

tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot convert a Tensor of dtype resource to a NumPy array.

Also, using CTranslate2, I am able to do inference on the OpenNMT-tf model. How do I do inference using OpenNMT-tf only?

Again, this is the example to load and run a SavedModel: OpenNMT-tf/examples/serving/python at master · OpenNMT/OpenNMT-tf · GitHub

What else are you trying to do?

I have tried this, but got this error:
Traceback (most recent call last):
  File "ende_client.py", line 67, in <module>
    main()
  File "ende_client.py", line 57, in main
    translator = EnDeTranslator(args.export_dir)
  File "ende_client.py", line 16, in __init__
    self._tokenizer = pyonmttok.Tokenizer("none", sp_model_path=sp_model_path)
ValueError: Unable to open SentencePiece model /home/nehasoni/MT/MT__tf_05012021/zhen_tf_saved_model/assets.extra/wmtende.model

You are using a model different from the one downloaded in the example, so you should adapt the Python code, in particular to define another tokenization function.
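Concretely, the client only needs a pair of functions mapping text to tokens and back; the real implementation depends on how your training data was preprocessed. As a minimal sketch, here is a whitespace-based stand-in for the real subword tokenizer (with pyonmttok you would build a Tokenizer from your own SentencePiece .model file and call its tokenize/detokenize methods instead):

```python
def tokenize(text):
    # Stand-in for the real subword tokenizer; with pyonmttok you would
    # call tokenizer.tokenize(text) and keep the returned token list.
    return text.split()

def detokenize(tokens):
    # Inverse operation; with pyonmttok: tokenizer.detokenize(tokens).
    return " ".join(tokens)
```

Swapping these two functions for your model's actual tokenization is the only adaptation the example client requires.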

The inference code is basically the following:

import tensorflow as tf

# Load the exported SavedModel and resolve its default serving signature.
imported = tf.saved_model.load(export_dir)
translate_fn = imported.signatures["serving_default"]

# The signature takes batches of tokens and their lengths.
outputs = translate_fn(tokens=..., length=...)
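Since the serving signature takes dense batches, the tokenized sentences must be padded to equal length, with the true lengths passed separately. A small helper for building these inputs, assuming string tokens padded with empty strings as in the linked serving example (the helper name is my own):

```python
def make_batch_inputs(token_lists, pad=""):
    """Pad tokenized sentences to equal length and record true lengths,
    matching the dense tokens/length inputs of the serving signature."""
    # Record each sentence's true length before padding.
    lengths = [len(tokens) for tokens in token_lists]
    max_len = max(lengths)
    # Pad every sentence to the longest one in the batch.
    padded = [tokens + [pad] * (max_len - len(tokens)) for tokens in token_lists]
    return padded, lengths
```

The returned lists would then be wrapped with tf.constant before being passed as the tokens and length arguments of translate_fn.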

Hi @guillaumekln, just to make doubly sure, as this is relevant to what I'm currently doing: if I re-export an "older" OpenNMT-tf model with the latest OpenNMT-tf version, do I no longer need import tensorflow_addons as tfa / tfa.register_all()? Although models are loaded and allow inference with this import, it is causing problems with PyInstaller.
Edit: I have concluded that CTranslate2 will deliver what I need with less complexity. Please disregard my query unless others show interest.

Yes, exactly.

Yes. It should generally be easier to use CTranslate2. Performance should be much improved.

Yes, it definitely is. I even get blistering speed as well as good translation performance running under WSLg on my tiny Asus Zenbook. Exporting with --export_format ctranslate2 is even easier than running the conversion tool :slight_smile:
