CTranslate2 TensorFlow converting error

Hello.
I am trying to convert an OpenNMT-tf model to a CTranslate2 model.
However, the error below occurs.
How can I solve this problem?
Thanks.

@guillaumekln

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/ctranslate2/converters/opennmt_tf.py:70: load (from tensorflow.python.saved_model.loader_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.loader.load or tf.compat.v1.saved_model.load. There will be a new function for importing SavedModels in Tensorflow 2.0.
Traceback (most recent call last):
File "/usr/local/bin/ct2-opennmt-tf-converter", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.6/dist-packages/ctranslate2/bin/opennmt_tf_converter.py", line 19, in main
tgt_vocab=args.tgt_vocab).convert_from_args(args)
File "/usr/local/lib/python3.6/dist-packages/ctranslate2/converters/converter.py", line 40, in convert_from_args
force=args.force)
File "/usr/local/lib/python3.6/dist-packages/ctranslate2/converters/converter.py", line 56, in convert
src_vocab, tgt_vocab = self._load(model_spec)
File "/usr/local/lib/python3.6/dist-packages/ctranslate2/converters/opennmt_tf.py", line 117, in _load
tgt_vocab=self._tgt_vocab)
File "/usr/local/lib/python3.6/dist-packages/ctranslate2/converters/opennmt_tf.py", line 70, in load_model
meta_graph = tf.compat.v1.saved_model.loader.load(sess, ["serve"], model_path)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/util/deprecation.py", line 324, in new_func
return func(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/saved_model/loader_impl.py", line 269, in load
return loader.load(sess, tags, import_scope, **saver_kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/saved_model/loader_impl.py", line 422, in load
**saver_kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/saved_model/loader_impl.py", line 352, in load_graph
meta_graph_def, import_scope=import_scope, **saver_kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/training/saver.py", line 1477, in _import_meta_graph_with_return_elements
**kwargs))
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/meta_graph.py", line 809, in import_scoped_meta_graph_with_return_elements
return_elements=return_elements)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/importer.py", line 405, in import_graph_def
producer_op_list=producer_op_list)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/importer.py", line 501, in _import_graph_def_internal
graph._c_graph, serialized, options) # pylint: disable=protected-access
tensorflow.python.framework.errors_impl.NotFoundError: Op type not registered 'Addons>GatherTree' in binary running on 35533ddd7f06. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) tf.contrib.resampler should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
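For context: the missing `Addons>GatherTree` op is defined by the `tensorflow_addons` package, and, as the error message itself notes, such custom ops only become visible once the module that defines them has been imported. The mechanism can be illustrated with a plain-Python registry (all names below are hypothetical, purely to show why import/registration order matters):

```python
# Tiny sketch of lazy op registration: a loader can only resolve ops
# that some module has already registered.
OP_REGISTRY = {}

def register_op(name, fn):
    """Register a custom op implementation under a name (hypothetical API)."""
    OP_REGISTRY[name] = fn

def load_graph(required_ops):
    """Fail, much like TensorFlow does, if a referenced op was never registered."""
    for op in required_ops:
        if op not in OP_REGISTRY:
            raise LookupError(f"Op type not registered {op!r}")
    return [OP_REGISTRY[op] for op in required_ops]

# Loading before registration fails...
try:
    load_graph(["Addons>GatherTree"])
except LookupError as err:
    print(err)  # Op type not registered 'Addons>GatherTree'

# ...but succeeds once the defining package has registered its ops,
# which is the side effect that importing tensorflow_addons has for
# the real Addons>GatherTree op.
register_op("Addons>GatherTree", lambda beams: beams)
ops = load_graph(["Addons>GatherTree"])
```

This is only an illustration of the registration pattern, not TensorFlow's actual API.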

This should fix the error:

I tested this in Colab, but an error still occurs.

Command:

!pip install ctranslate2

!wget https://s3.amazonaws.com/opennmt-models/averaged-ende-export500k-v2.tar.gz

!tar xf averaged-ende-export500k-v2.tar.gz

!ct2-opennmt-tf-converter --model_path averaged-ende-export500k-v2 --model_spec TransformerBase --output_dir ende_ctranslate2

Error:

2020-03-02 02:56:35.677982: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2300000000 Hz
2020-03-02 02:56:35.679782: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x2779100 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-03-02 02:56:35.679817: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2020-03-02 02:56:35.685975: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-03-02 02:56:35.872233: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-03-02 02:56:35.873045: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x27792c0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-03-02 02:56:35.873074: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Tesla P100-PCIE-16GB, Compute Capability 6.0
2020-03-02 02:56:35.873975: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-03-02 02:56:35.873995: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165]
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/ctranslate2/converters/opennmt_tf.py:80: load (from tensorflow.python.saved_model.loader_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.loader.load or tf.compat.v1.saved_model.load. There will be a new function for importing SavedModels in Tensorflow 2.0.
Traceback (most recent call last):
File "/usr/local/bin/ct2-opennmt-tf-converter", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.6/dist-packages/ctranslate2/bin/opennmt_tf_converter.py", line 19, in main
tgt_vocab=args.tgt_vocab).convert_from_args(args)
File "/usr/local/lib/python3.6/dist-packages/ctranslate2/converters/converter.py", line 40, in convert_from_args
force=args.force)
File "/usr/local/lib/python3.6/dist-packages/ctranslate2/converters/converter.py", line 52, in convert
src_vocab, tgt_vocab = self._load(model_spec)
File "/usr/local/lib/python3.6/dist-packages/ctranslate2/converters/opennmt_tf.py", line 127, in _load
tgt_vocab=self._tgt_vocab)
File "/usr/local/lib/python3.6/dist-packages/ctranslate2/converters/opennmt_tf.py", line 80, in load_model
meta_graph = tf.compat.v1.saved_model.loader.load(sess, ["serve"], model_path)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/util/deprecation.py", line 324, in new_func
return func(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/saved_model/loader_impl.py", line 269, in load
return loader.load(sess, tags, import_scope, **saver_kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/saved_model/loader_impl.py", line 422, in load
**saver_kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/saved_model/loader_impl.py", line 352, in load_graph
meta_graph_def, import_scope=import_scope, **saver_kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/training/saver.py", line 1477, in _import_meta_graph_with_return_elements
**kwargs))
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/meta_graph.py", line 809, in import_scoped_meta_graph_with_return_elements
return_elements=return_elements)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/importer.py", line 405, in import_graph_def
producer_op_list=producer_op_list)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/importer.py", line 501, in _import_graph_def_internal
graph._c_graph, serialized, options) # pylint: disable=protected-access
tensorflow.python.framework.errors_impl.NotFoundError: Op type not registered 'Addons>GatherTree' in binary running on 6d4d48e42d81. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) tf.contrib.resampler should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.

Looks like you have TensorFlow 1.x installed, but you need TensorFlow 2.x to convert a V2 SavedModel.

Check the quickstart in the readme again. In particular, you should install OpenNMT-tf with its dependencies:

pip install OpenNMT-tf
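Before re-running the converter, it is worth confirming that the installed TensorFlow really is 2.x. A small helper for that (this is my own sketch, not part of CTranslate2; the version strings below are examples):

```python
from importlib import metadata

def tf2_installed(version=None):
    """Return True if TensorFlow 2.x is installed.

    Pass an explicit version string to check it directly; otherwise the
    installed package metadata is read (raises PackageNotFoundError if
    TensorFlow is absent).
    """
    if version is None:
        version = metadata.version("tensorflow")
    return int(version.split(".")[0]) >= 2

print(tf2_installed("1.15.0"))  # False: the converter needs TF 2.x for a V2 SavedModel
print(tf2_installed("2.1.0"))   # True
```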

Thank you for your advice.
I installed TensorFlow 2, but another error occurs.

2020-03-02 16:18:33.526489: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer.so.6'; dlerror: libnvinfer.so.6: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib64-nvidia
2020-03-02 16:18:33.526591: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer_plugin.so.6'; dlerror: libnvinfer_plugin.so.6: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib64-nvidia
2020-03-02 16:18:33.526609: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:30] Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
2020-03-02 16:18:34.646816: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:39] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
Traceback (most recent call last):
File "/usr/local/bin/ct2-opennmt-tf-converter", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.6/dist-packages/ctranslate2/bin/opennmt_tf_converter.py", line 19, in main
tgt_vocab=args.tgt_vocab).convert_from_args(args)
File "/usr/local/lib/python3.6/dist-packages/ctranslate2/converters/converter.py", line 40, in convert_from_args
force=args.force)
File "/usr/local/lib/python3.6/dist-packages/ctranslate2/converters/converter.py", line 52, in convert
src_vocab, tgt_vocab = self._load(model_spec)
File "/usr/local/lib/python3.6/dist-packages/ctranslate2/converters/opennmt_tf.py", line 127, in _load
tgt_vocab=self._tgt_vocab)
File "/usr/local/lib/python3.6/dist-packages/ctranslate2/converters/opennmt_tf.py", line 67, in load_model
src_vocab = _get_asset_path(imported.examples_inputter.features_inputter)
File "/usr/local/lib/python3.6/dist-packages/ctranslate2/converters/opennmt_tf.py", line 51, in _get_asset_path
asset = getattr(lookup_table._initializer, "_filename", None)
AttributeError: '_RestoredResource' object has no attribute '_initializer'
WARNING:tensorflow:Unresolved object in checkpoint: (root).examples_inputter.features_inputter.ids_to_tokens._initializer
WARNING:tensorflow:Unresolved object in checkpoint: (root).examples_inputter.labels_inputter.ids_to_tokens._initializer
WARNING:tensorflow:A checkpoint was restored (e.g. tf.train.Checkpoint.restore or tf.keras.Model.load_weights) but not all checkpointed values were used. See above for specific issues. Use expect_partial() on the load status object, e.g. tf.train.Checkpoint.restore(...).expect_partial(), to silence these warnings, or use assert_consumed() to make the check explicit. See https://www.tensorflow.org/guide/checkpoint#loading_mechanics for details.

Hi,

I think this is due to older versions of CUDA/cuDNN. With TensorFlow 2, you need CUDA 10.1 and cuDNN 7.
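If you want to double-check which CUDA toolkit is installed locally, one option is to parse the banner printed by `nvcc --version`. A small sketch (the helper is my own and assumes `nvcc` is on PATH when called without an argument; the sample banner below is illustrative):

```python
import re
import subprocess

def cuda_version(banner=None):
    """Extract the CUDA release (e.g. '10.1') from an `nvcc --version` banner.

    Pass a banner string directly, or leave it None to invoke nvcc.
    Returns None if no release line is found.
    """
    if banner is None:
        banner = subprocess.run(
            ["nvcc", "--version"], capture_output=True, text=True
        ).stdout
    match = re.search(r"release (\d+\.\d+)", banner)
    return match.group(1) if match else None

# Sample of the line nvcc prints for CUDA 10.1:
sample = "Cuda compilation tools, release 10.1, V10.1.243"
print(cuda_version(sample))  # 10.1
```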

I pushed a new version of the model "averaged-ende-export500k-v2". There should no longer be an error after downloading it again.