I’m trying to use a model I generated with OpenNMT-tf, but I keep getting the same error whatever I do. To rule out my custom Python script as the cause, I tried running the example from the OpenNMT-tf GitHub in Google Colab.
I’m still getting the exact same issue, except that the _AnonymousVar number is different.
tensorflow.python.framework.errors_impl.FailedPreconditionError: Could not find variable _AnonymousVar5.
Full error:
```
2021-06-24 22:28:57.032744: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
2021-06-24 22:28:58.738877: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcuda.so.1
2021-06-24 22:28:58.809708: E tensorflow/stream_executor/cuda/cuda_driver.cc:328] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2021-06-24 22:28:58.809783: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (818f008a693b): /proc/driver/nvidia/version does not exist
2021-06-24 22:29:01.159847: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:176] None of the MLIR Optimization Passes are enabled (registered 2)
2021-06-24 22:29:01.168954: I tensorflow/core/platform/profile_utils/cpu_utils.cc:114] CPU Frequency: 2299995000 Hz
Source: Test
Traceback (most recent call last):
  File "ende_client.py", line 64, in <module>
    main()
  File "ende_client.py", line 58, in main
    output = translator.translate([text])
  File "ende_client.py", line 18, in translate
    outputs = self._translate_fn(**inputs)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 1711, in __call__
    return self._call_impl(args, kwargs)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 1729, in _call_impl
    return self._call_with_flat_signature(args, kwargs, cancellation_manager)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 1778, in _call_with_flat_signature
    return self._call_flat(args, self.captured_inputs, cancellation_manager)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/saved_model/load.py", line 118, in _call_flat
    cancellation_manager)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 1961, in _call_flat
    ctx, args, cancellation_manager=cancellation_manager))
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 596, in call
    ctx=ctx)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/execute.py", line 60, in quick_execute
    inputs, attrs, num_outputs)
tensorflow.python.framework.errors_impl.FailedPreconditionError: Could not find variable _AnonymousVar5. This could mean that the variable has been deleted. In TF1, it can also mean the variable is uninitialized. Debug info: container=localhost, status=Not found: Resource localhost/_AnonymousVar5/N10tensorflow3VarE does not exist.
	 [[{{node StatefulPartitionedCall/transformer_base_1/self_attention_encoder_1/self_attention_encoder_layer_6/transformer_layer_wrapper_30/multi_head_attention_18/dense_96/BiasAdd/ReadVariableOp}}]] [Op:__inference_signature_wrapper_7692]
```
Maybe something went wrong when you changed the TensorFlow version, but this is not related to the fix. The fix did not change how the TensorFlow API is used.
I confirm that the tutorial is now working great for me. As for my script, I’m getting a different issue, but that was expected since I wasn’t done writing it!
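For anyone else landing here, the mismatch is easy to check: the TensorFlow version loading the SavedModel has to be compatible with the one used to export it. A quick sanity check (assuming both packages are pip-installed):

```python
import tensorflow as tf
import opennmt

# An exported SavedModel can fail with "Could not find variable
# _AnonymousVar..." when loaded under an incompatible TensorFlow
# version, so compare these against the export environment.
print("TensorFlow:", tf.__version__)
print("OpenNMT-tf:", opennmt.__version__)
```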
Hi Samuel, as you seem to be working along similar lines to me, I thought I would share my LocalTranslator class, which works in a Prediction Manager module I am deploying with TF 2.5. I only use SentencePiece for tokenizing, as I am working on Windows 10, for which pyonmttok is not available. I am not a pythonista, but it may be useful to some:
```python
import tensorflow as tf
import sentencepiece as spm


class LocalTranslator(object):

    def __init__(self, model_path, sp_model):
        # Load the exported SavedModel and grab its serving signature.
        self._imported = tf.saved_model.load(model_path)
        self._translate_fn = self._imported.signatures["serving_default"]
        # Create the SentencePiece tokenizer from the trained model file.
        self._sp = spm.SentencePieceProcessor(model_file=sp_model)

    def translate(self, texts):
        """Translates a batch of texts."""
        inputs = self._preprocess(texts)
        outputs = self._translate_fn(**inputs)
        return self._postprocess(outputs)

    def _preprocess(self, texts):
        all_tokens = []
        lengths = []
        max_length = 0
        for text in texts:
            tokens = self._sp.encode(text, out_type=str)
            length = len(tokens)
            all_tokens.append(tokens)
            lengths.append(length)
            max_length = max(max_length, length)
        # Pad every token list to the batch maximum so they can be stacked.
        for tokens, length in zip(all_tokens, lengths):
            if length < max_length:
                tokens += [""] * (max_length - length)
        return {
            "tokens": tf.constant(all_tokens, dtype=tf.string),
            "length": tf.constant(lengths, dtype=tf.int32),
        }

    def _postprocess(self, outputs):
        texts = []
        for tokens, length in zip(outputs["tokens"].numpy(), outputs["length"].numpy()):
            # Keep the best hypothesis, trim the padding, and decode the
            # byte pieces to strings before detokenizing with SentencePiece.
            pieces = [t.decode("utf-8") for t in tokens[0][: length[0]].tolist()]
            texts.append(self._sp.decode(pieces))
        return texts
```
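For example, to use it (the paths below are placeholders, not real files):

```python
translator = LocalTranslator("path/to/export_dir", "path/to/sp.model")
print(translator.translate(["Hello world!"]))
```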
Indeed, I’m pretty much working on the same thing and for the same reasons. I ended up with code that is extremely similar to yours, except that I wasn’t done structuring it properly, as I’m not a pythonista either!
The only additional thing I did is that my class ingests a file rather than a text input. It also generates a file as output.
The one thing I still need to figure out is how to get the prediction score and adjust the number of results (the beam size).
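From what I have found so far, the beam size and the number of returned hypotheses seem to be fixed at export time in the YAML configuration (`params: beam_width` and `params: num_hypotheses`), and the exported signature appears to also return a `log_probs` output next to `tokens` and `length`. A rough, untested sketch of reading it:

```python
# Assumes the exported OpenNMT-tf signature also returns "log_probs"
# (one cumulative log probability per hypothesis, best first).
outputs = translator._translate_fn(**inputs)
for log_probs in outputs["log_probs"].numpy():
    print(log_probs)  # one score per hypothesis when num_hypotheses > 1
```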
Hi Samuel, My work also involves file reading & writing as well as grabbing input from a GUI. The Tagalog<>English translator I’m releasing shortly has Python modules for handling Word, PowerPoint, Excel & PDF files. My main focus has been on providing solutions without the need for Internet or other network connectivity!
I will share the piece of code I have written once I’m done with it.
I’m not working full time on this; I’m volunteering for a nonprofit organization. They translate text from English into many languages (over 70), and more or less 15 of them don’t have any MT available on the internet. So I’m building a pipeline to tokenize/normalize the data and then create the MT for each one of them.
The next step will be to generate MT for the languages that already have MT out there, but our content is really specific, and based on my preliminary analysis I can beat DeepL etc. on 60% of our content. (I use the prediction score, plus a penalty for sentence length, to decide when to keep the external MT versus the one I created.)
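To give an idea, the kind of length-penalized score I mean looks like this (the GNMT-style penalty below is just one reasonable choice, not necessarily the exact formula):

```python
def normalized_score(log_prob, num_tokens, alpha=0.7):
    # Divide the cumulative log probability by a GNMT-style length
    # penalty so longer sentences are not unfairly penalized when
    # comparing our MT output against the external one.
    penalty = ((5.0 + num_tokens) / 6.0) ** alpha
    return log_prob / penalty
```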
The biggest difference is that I’m not using the decoder from SentencePiece. In my case the source and target models are not the same, and this saves me the trouble of having to pass the target model as an argument. Here is the file-handling part of my script:
```python
import argparse

import tensorflow as tf
import sentencepiece as spm

# Command-line arguments used below.
parser = argparse.ArgumentParser()
parser.add_argument("export_dir", help="path to the exported SavedModel")
parser.add_argument("source_sp_model", help="source-side SentencePiece model")
parser.add_argument("file_to_translate", help="input file, one sentence per line")
parser.add_argument("outputFile", help="where to write the translations")
args = parser.parse_args()

# openNMT_TF_Translator is my class (very similar to the LocalTranslator above).
translator = openNMT_TF_Translator(args.export_dir, args.source_sp_model)

# Read the file to translate, one sentence per line.
with open(args.file_to_translate, encoding="utf-8") as f:
    text = f.read().splitlines()

# Translate the text.
output = translator.translate(text)

# Save the results.
with open(args.outputFile, "w", encoding="utf-8") as wf:
    for line in output:
        wf.write(line + "\n")
```
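And skipping the SentencePiece decoder on the target side mostly comes down to handling the "▁" word-boundary marker yourself. A minimal sketch (this assumes the default marker and no extra normalization rules):

```python
def detokenize(pieces):
    # SentencePiece prefixes word-initial pieces with U+2581 ("▁");
    # joining the pieces and mapping the marker back to spaces avoids
    # needing the target SentencePiece model at decode time.
    return "".join(pieces).replace("\u2581", " ").strip()
```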
This is very similar to my script, except that I have moved the file reading/writing into a separate module, as I am providing operations to read/write docx, xlsx, ppt and pdf files besides text files, and each format requires slightly different code. It means translation takes slightly longer, but speed is not my concern here.
I was curious whether you spent some time analyzing the benefit of comparing the number of `<unk>` in the best-scoring hypothesis versus the other hypotheses. I noticed that sometimes an `<unk>` gets a better prediction score than the real token… so using tokens[1] would be better than tokens[0].
When using BPE or Unigram, since words are segmented, you can get multiple subsequent `<unk>` pieces that are all part of the same word, not multiple unknown words. It gets confusing, as it seems many words are missing when it’s really just one word broken down into pieces. I was curious if you did any kind of handling for this?
If not, I will eventually tackle these and post my code here… if it can help.
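For what it’s worth, the naive handling I have in mind for the repeated `<unk>` case is to collapse consecutive `<unk>` pieces before counting or displaying them. A sketch, untested:

```python
def collapse_unks(pieces, unk="<unk>"):
    # One out-of-vocabulary word can be split into several unknown
    # subword pieces; merge each run of consecutive <unk> so it reads
    # as a single missing word.
    out = []
    for piece in pieces:
        if piece == unk and out and out[-1] == unk:
            continue
        out.append(piece)
    return out
```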
```
Traceback (most recent call last):
  File "/usr/local/bin/onmt-main", line 33, in <module>
    sys.exit(load_entry_point('OpenNMT-tf==2.26.1', 'console_scripts', 'onmt-main')())
  File "/usr/local/bin/onmt-main", line 25, in importlib_load_entry_point
    return next(matches).load()
  File "/usr/local/lib/python3.7/dist-packages/importlib_metadata/__init__.py", line 203, in load
    module = import_module(match.group('module'))
  File "/usr/lib/python3.7/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
  File "<frozen importlib._bootstrap>", line 983, in _find_and_load
  File "<frozen importlib._bootstrap>", line 953, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
  File "<frozen importlib._bootstrap>", line 983, in _find_and_load
  File "<frozen importlib._bootstrap>", line 953, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
  File "<frozen importlib._bootstrap>", line 983, in _find_and_load
  File "<frozen importlib._bootstrap>", line 965, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'opennmt'
```