Predict using a TensorFlow model without TensorFlow Serving

Hello Fellow Researchers,

Greetings for the day!

I was successful in deploying my TF model using Docker and TensorFlow Serving.

But I want to know if there is a way to generate a prediction with a Python script that loads the model’s “.pb” file, takes the input, and gives back the output.

I don’t want to run multiple servers on my system; that is why I was thinking of this.
Does anyone have any idea how this can be done? Has anyone done this before?

I already have a FastAPI server; I just need a way to load the model in a Python script that can take the input and generate a prediction.
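In other words, something like the rough, untested sketch below is what I have in mind (the path is just a placeholder):

import tensorflow as tf

# Placeholder path to the exported SavedModel directory (the one containing saved_model.pb).
export_dir = "/path/to/exported_model/1"

# Load the SavedModel once at startup and grab its default serving signature.
imported = tf.saved_model.load(export_dir)
predict_fn = imported.signatures["serving_default"]

# Each request would then call predict_fn(...) with the model's expected inputs
# and return the outputs, with no TensorFlow Serving container in between.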

Hi, I have done this by adapting ende_client.py, found at https://github.com/OpenNMT/OpenNMT-tf/blob/master/examples/serving/tensorflow_serving/ende_client.py. I have wrapped this in a Flask server, but there is no reason why it can’t be wrapped in a simple Python GUI. The English-Turkish model at www.nmtgateway.com is served in this way.
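The Flask wrapper itself is only a few lines. Roughly something like this (simplified sketch: the route name and JSON layout are just examples, and EnDeTranslator is the class from the adapted ende_client.py):

from flask import Flask, jsonify, request
from ende_client import EnDeTranslator

app = Flask(__name__)

# Load the exported model once when the server starts, not on every request.
translator = EnDeTranslator("/home/miguel/nmtgateway/eng2turk_model/1")

@app.route("/translate", methods=["POST"])
def translate():
    # Expects a JSON body like {"texts": ["How are you?"]}.
    texts = request.get_json()["texts"]
    return jsonify({"translations": translator.translate(texts)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)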

@tel34 Did you mean to link to this example https://github.com/OpenNMT/OpenNMT-tf/tree/master/examples/serving/python instead?

What will the directory structure be for this?
And will this method work for a multi-feature model?

Yes, sorry. Wrong link!

# tf2_client is ende_client.py renamed
from tf2_client import EnDeTranslator
# etc etc etc

export_dir = "/home/miguel/nmtgateway/eng2turk_model/1"
# the EnDeTranslator class contains all the functions needed to submit a request and handle a response
translator = EnDeTranslator(export_dir)
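Then, assuming the translate() method from ende_client.py is left unchanged, a prediction is just:

# translate() takes a list of sentences and returns a list of translations.
translations = translator.translate(["How are you today?"])
print(translations)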

I have no idea whether it would work for the multi-feature model; you would probably need to play around with it.

While exporting the model manually, there was no “assets.extra” folder, which was supposed to contain the model file.
Should I simply pass the path of the folder that contains the “saved_model.pb” file?

  def __init__(self, export_dir):
    # Load the exported SavedModel and grab its default serving signature.
    imported = tf.saved_model.load(export_dir)
    self._translate_fn = imported.signatures["serving_default"]
    # The SentencePiece model is read from <export_dir>/assets.extra/.
    sp_model_path = os.path.join(export_dir, "assets.extra", "wmtende.model")
    self._tokenizer = pyonmttok.Tokenizer("none", sp_model_path=sp_model_path)
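For reference, the rest of ende_client.py essentially tokenizes the input, calls that signature, and detokenizes the result. Paraphrased here as a standalone function (the field names tokens and length are the ones used by exported OpenNMT-tf models; check the linked file for the exact code):

import tensorflow as tf

def translate(texts, tokenizer, translate_fn):
    # Tokenize every sentence and pad the batch to the same length.
    all_tokens = [tokenizer.tokenize(text)[0] for text in texts]
    lengths = [len(tokens) for tokens in all_tokens]
    max_length = max(lengths)
    all_tokens = [tokens + [""] * (max_length - len(tokens)) for tokens in all_tokens]

    # Call the SavedModel signature with string tokens and their lengths.
    outputs = translate_fn(
        tokens=tf.constant(all_tokens, dtype=tf.string),
        length=tf.constant(lengths, dtype=tf.int32),
    )

    # Keep the best hypothesis for each sentence, trim it, and detokenize.
    results = []
    for tokens, length in zip(outputs["tokens"].numpy(), outputs["length"].numpy()):
        best = tokens[0][: length[0]].tolist()
        results.append(tokenizer.detokenize([t.decode("utf-8") for t in best]))
    return results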

Here’s my directory structure for the model:
miguel@joshua:~/tf_experiments/eng2turk_tf/serving/eng2turk_model$ ls
1  config.json
miguel@joshua:~/tf_experiments/eng2turk_tf/serving/eng2turk_model/1$ ls
assets assets.extra saved_model.pb variables
The assets.extra directory does not seem to be created automatically. I always create it myself. It contains my SentencePiece model:
miguel@joshua:~/tf_experiments/eng2turk_tf/serving/eng2turk_model/1/assets.extra$ ls
turkeng.model

@tel34, have you uploaded this code to GitHub?

No, for the time being I’m keeping the complete code private. The key elements are in the ende_client.py code provided on GitHub. It’s not too difficult to adapt that code for your own purposes.

How can I generate my assets.extra folder? And how can I use SentencePiece with multiple features?
Also, is it necessary to use SentencePiece if we are already using the tokenizer from OpenNMT?

I just create it manually and copy my SentencePiece model file into it. I haven’t used multiple features, but I do know that when you use SentencePiece everything has to be “sentence-pieced”. However, it’s NOT necessary to use SentencePiece; it’s my personal preference.
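To be concrete, “creating it manually” is nothing more than making the directory next to saved_model.pb and copying the SentencePiece model into it; in Python that would be something like this (paths are just examples):

import os
import shutil

export_dir = "/home/miguel/tf_experiments/eng2turk_tf/serving/eng2turk_model/1"

# Create assets.extra next to saved_model.pb and drop the SentencePiece model into it.
assets_extra = os.path.join(export_dir, "assets.extra")
os.makedirs(assets_extra, exist_ok=True)
shutil.copy("/path/to/turkeng.model", assets_extra)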

So I don’t have to pass the SentencePiece model from assets.extra, or is it necessary?

All my TensorFlow work has been done with SentencePiece. Obviously if you don’t use SentencePiece you don’t need to pass an SP model. But I’m not sure if assets.extra is needed if you just use the OpenNMT tokenizer.
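For what it’s worth, a client that sticks with the plain OpenNMT tokenizer would simply build it without any SentencePiece model; the only requirement is that the mode and options match whatever was used to tokenize the training data (illustrative sketch only):

import pyonmttok

# No SentencePiece model: just the on-the-fly OpenNMT tokenizer.
# The options must match the ones used when preparing the training data.
tokenizer = pyonmttok.Tokenizer("aggressive", joiner_annotate=True)

tokens, _ = tokenizer.tokenize("Hello world!")
text = tokenizer.detokenize(tokens)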

I solved this and posted the solution at the end of this post.