Hi, I have an OpenNMT-tf model trained and exported in the SavedModel format. I am trying to load the model using this code.
I am stuck with this error:
tensorflow.python.framework.errors_impl.NotFoundError: Op type not registered 'Addons>GatherTree' in binary running on 13170b6b4a09. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) tf.contrib.resampler should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
I have also tried to load the model using this, but got the same error.
I have tried this as well, but got this error:
Traceback (most recent call last):
  File "ende_client.py", line 67, in <module>
    main()
  File "ende_client.py", line 57, in main
    translator = EnDeTranslator(args.export_dir)
  File "ende_client.py", line 16, in __init__
    self._tokenizer = pyonmttok.Tokenizer("none", sp_model_path=sp_model_path)
ValueError: Unable to open SentencePiece model /home/nehasoni/MT/MT__tf_05012021/zhen_tf_saved_model/assets.extra/wmtende.model
You are using a model different from the one downloaded in the example. So you should adapt the Python code, in particular to define another tokenization function.
Hi @guillaumekln, just to make doubly sure, as this is relevant to what I'm currently doing: if I re-export an "older" OpenNMT-tf model with the latest OpenNMT-tf version, do I no longer need `import tensorflow_addons as tfa` and `tfa.register_all()`? Models load and allow inference with this import, but it is causing problems with PyInstaller.
Edit: I have concluded that ctranslate2 will deliver what I need with less complexity. Please disregard my query unless others show interest.
Yes, it definitely is. I even get blistering speed as well as good translation performance running under WSLg on my tiny Asus Zenbook. Exporting with "--export_format ctranslate2" is even easier than running the conversion tool.