TFLite Converter

I have managed to get OpenNMT-tf running on various Windows 10 setups without WSL and without a network connection. The big obstacle to consumer adoption is the "bloat" of the TensorFlow toolkit, which causes an unacceptable start-up delay. I have been thinking about doing inference via TensorFlow Lite instead. However, with various releases of TensorFlow 2.3 to 2.5 I have failed to convert a SavedModel with the script below. I make the most progress with TF 2.3, which gives the traceback below. I would be keen to hear if anyone has succeeded in converting a SavedModel for TFLite inference.


Traceback (most recent call last):
  File "./tlite_converter.py", line 8, in <module>
    tflite_model = converter.convert()
  File "/home/miguel/testenv/lib/python3.5/site-packages/tensorflow/lite/python/lite.py", line 1076, in convert
    return super(TFLiteConverterV2, self).convert()
  File "/home/miguel/testenv/lib/python3.5/site-packages/tensorflow/lite/python/lite.py", line 878, in convert
    self._funcs[0], lower_control_flow=False))
  File "/home/miguel/testenv/lib/python3.5/site-packages/tensorflow/python/framework/convert_to_constants.py", line 1103, in convert_variables_to_constants_v2_as_graph
    aggressive_inlining=aggressive_inlining)
  File "/home/miguel/testenv/lib/python3.5/site-packages/tensorflow/python/framework/convert_to_constants.py", line 804, in __init__
    self._build_tensor_data()
  File "/home/miguel/testenv/lib/python3.5/site-packages/tensorflow/python/framework/convert_to_constants.py", line 823, in _build_tensor_data
    data = val_tensor.numpy()
  File "/home/miguel/testenv/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1063, in numpy
    maybe_arr = self._numpy()  # pylint: disable=protected-access
  File "/home/miguel/testenv/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1031, in _numpy
    six.raise_from(core._status_to_exception(e.code, e.message), None)  # pylint: disable=protected-access
  File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot convert a Tensor of dtype resource to a NumPy array.


My script for use with TF 2.3 is:

import tensorflow as tf
import tensorflow_addons as tfa

tfa.register_all()
saved_model_dir = input("Enter path of saved model: ")

# Convert the model.
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)  # path to the SavedModel directory
tflite_model = converter.convert()

# Save the model.
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)
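For reference, one variation I have seen suggested in similar conversion threads (untested here, and I cannot confirm it avoids this particular resource-tensor error) is to let the converter fall back to the full TensorFlow op set via TF Select, which sometimes helps when a model contains ops the converter cannot lower to native TFLite kernels. A minimal sketch, with a hypothetical helper name:

```python
import tensorflow as tf


def convert_with_select_ops(saved_model_dir, output_path="model.tflite"):
    """Convert a SavedModel to TFLite, allowing TF Select fallback ops.

    Hypothetical helper for illustration; saved_model_dir must point to
    an existing SavedModel directory.
    """
    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    # Allow ops without a native TFLite kernel to run via the TF Select fallback.
    converter.target_spec.supported_ops = [
        tf.lite.OpsSet.TFLITE_BUILTINS,  # standard TFLite kernels
        tf.lite.OpsSet.SELECT_TF_OPS,    # fall back to full TensorFlow ops
    ]
    tflite_model = converter.convert()
    with open(output_path, "wb") as f:
        f.write(tflite_model)
    return output_path
```

Note that a model converted with SELECT_TF_OPS needs the TF Select (Flex) delegate linked into the runtime, which works against the goal of a small deployment.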

I think it would be easier and more flexible to compile CTranslate2 on Windows and ship it with your application.

You will have a hard time converting the models to TF Lite.


That's what I suspected. I'm very pleased with the performance of CTranslate2 on WSL, so I will set about compiling it for "native" Windows. [Aside: once WSLg becomes part of "standard" Windows this will be less of an issue.]


If anyone has tried this, would it be possible to share some pointers? Running CT2 engines locally would be fantastic for some kinds of Windows users.


Hi there,

It should compile just fine on Windows. I suggest you add this to the CMake file to save yourself a ton of headaches:

set(CMAKE_CXX_FLAGS "-DNOMINMAX")
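For context, the reason this helps is that `<windows.h>` defines `min` and `max` as preprocessor macros, which collide with `std::min`/`std::max` in C++ code; `NOMINMAX` suppresses those macros. A sketch of an equivalent, slightly more targeted placement in a `CMakeLists.txt` (guarding on MSVC is my own assumption, not something from the thread):

```cmake
# Define NOMINMAX so <windows.h> does not define min/max macros
# that break std::min / std::max in C++ sources.
if (MSVC)
  add_definitions(-DNOMINMAX)
endif()
```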