I was successful in deploying my TF model using Docker and TensorFlow Serving.
But I want to know if there is a way to generate a prediction with a Python script that loads a model ".pb" file, takes the input, and gives back the output.
I don't want to run multiple servers on my system; that is why I was thinking of this.
Does anyone have any idea how this can be done? Has anyone done this before?
```python
# tf2_client is ende_client.py renamed
from tf2_client import EnDeTranslator
# etc etc etc

export_dir = "/home/miguel/nmtgateway/eng2turk_model/1"

# the EnDeTranslator class contains all the functions needed
# to submit a request & handle a response
translator = EnDeTranslator(export_dir)
```
I have no idea whether it would work for the multi-feature model; you would probably need to experiment with it.
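For reference, here is a minimal sketch of the underlying idea: loading the export directly with tf.saved_model.load and inspecting its default signature. This assumes a standard SavedModel export with a serving_default signature; it is an illustration, not the ende_client.py code itself.

```python
import tensorflow as tf

# Load the export directory, i.e. the folder that contains saved_model.pb
export_dir = "/home/miguel/nmtgateway/eng2turk_model/1"
imported = tf.saved_model.load(export_dir)

# Grab the default serving signature and print the inputs it expects;
# this tells you how to build the feed for a prediction call
translate_fn = imported.signatures["serving_default"]
print(translate_fn.structured_input_signature)
```

In ende_client.py, the EnDeTranslator class wraps this loading step together with the SentencePiece tokenization of inputs and detokenization of outputs.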
When exporting the model manually, there was no "assets.extra" folder, which was supposed to contain the model file.
What should I pass? Simply the path of the folder that contains the "saved_model.pb" file?
Here’s my directory structure for the model:
```
miguel@joshua:~/tf_experiments/eng2turk_tf/serving/eng2turk_model$ ls
1  config.json

miguel@joshua:~/tf_experiments/eng2turk_tf/serving/eng2turk_model/1$ ls
assets  assets.extra  saved_model.pb  variables
```
The assets.extra directory does not seem to be created automatically. I always create it myself. It contains my SentencePiece model:
```
miguel@joshua:~/tf_experiments/eng2turk_tf/serving/eng2turk_model/1/assets.extra$ ls
turkeng.model
```
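For context, the client loads that file at startup. Here is a minimal sketch of reading it with the sentencepiece Python package, assuming the directory layout above; the sample sentence is just for illustration:

```python
import os
import sentencepiece as spm

# Path to the SentencePiece model stored under assets.extra
export_dir = "/home/miguel/tf_experiments/eng2turk_tf/serving/eng2turk_model/1"
sp_model_path = os.path.join(export_dir, "assets.extra", "turkeng.model")

# Load the model and split a sample sentence into subword pieces
sp = spm.SentencePieceProcessor()
sp.Load(sp_model_path)
print(sp.EncodeAsPieces("Hello world!"))
```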
No, for the time being I'm keeping the complete code private. The key elements are in the ende_client.py code provided on GitHub. It's not too difficult to adapt that code for your own purposes.
How can I generate my assets.extra folder, and how can I use SentencePiece with multiple features?
Also, is it necessary to use SentencePiece if we are already using the tokenizer from OpenNMT?
I just create it manually and copy my SentencePiece model file into it. I haven't used multiple features, but I do know that when you use SentencePiece everything has to be "sentence-pieced". However, it's NOT necessary to use SentencePiece; it's my personal preference.
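Concretely, that amounts to something like the following sketch; the source path of the .model file is a placeholder:

```python
import os
import shutil

# Create assets.extra inside the exported model version directory
export_dir = "/home/miguel/tf_experiments/eng2turk_tf/serving/eng2turk_model/1"
assets_extra = os.path.join(export_dir, "assets.extra")
os.makedirs(assets_extra, exist_ok=True)

# Copy the SentencePiece model into it
shutil.copy("/path/to/turkeng.model", assets_extra)
```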
All my TensorFlow work has been done with SentencePiece. Obviously, if you don't use SentencePiece you don't need to pass an SP model. But I'm not sure whether assets.extra is needed at all if you just use the OpenNMT tokenizer.