How to use serving on OpenNMT-tf V2


I built a model with OpenNMT-tf V2 and tried to run the exported model with TensorFlow Serving. I used the command that had successfully served a model on CPU with the previous version of OpenNMT-tf, but it fails now:

tensorflow_model_server --rest_api_port=9004 --model_name=my_model --model_base_path=/run/best

According to this issue, I switched to the opennmt/tensorflow-serving:2.0.0-gpu image instead of the official TensorFlow Serving image.

Here is the command:

docker run -t --rm -p 9004:9004 -v $PWD:/models \
--name tensorflow_serving --entrypoint tensorflow_model_server \
opennmt/tensorflow-serving:2.0.0-gpu \
--enable_batching=true --batching_parameters_file=/models/batching_config.txt \
--port=9004 --model_base_path=/models/run/best --model_name=my_model
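For reference, the batching_parameters_file passed above is a text-format BatchingParameters protobuf. A minimal sketch of such a file (the values here are purely illustrative, not the ones actually used, and should be tuned for the target hardware):

```
max_batch_size { value: 32 }
batch_timeout_micros { value: 5000 }
max_enqueued_batches { value: 100 }
num_batch_threads { value: 4 }
```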

And here is the log:

2019-12-26 08:36:33.602414: I tensorflow_serving/model_servers/] Building single TensorFlow model file config:  model_name: my_model model_base_path: /models/run/best
2019-12-26 08:36:33.603534: I tensorflow_serving/model_servers/] Adding/updating models.
2019-12-26 08:36:33.603582: I tensorflow_serving/model_servers/]  (Re-)adding model: my_model
2019-12-26 08:36:33.704264: I tensorflow_serving/core/] Successfully reserved resources to load servable {name: my_model version: 500}
2019-12-26 08:36:33.704346: I tensorflow_serving/core/] Approving load for servable version {name: my_model version: 500}
2019-12-26 08:36:33.704391: I tensorflow_serving/core/] Loading servable version {name: my_model version: 500}
2019-12-26 08:36:33.704435: I external/org_tensorflow/tensorflow/cc/saved_model/] Reading SavedModel from: /models/run/best/500
2019-12-26 08:36:33.764514: I external/org_tensorflow/tensorflow/cc/saved_model/] Reading meta graph with tags { serve }
2019-12-26 08:36:33.844227: I external/org_tensorflow/tensorflow/core/platform/] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-12-26 08:36:33.845301: W external/org_tensorflow/tensorflow/stream_executor/platform/default/] Could not load dynamic library ''; dlerror: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/nvidia/lib:/usr/local/nvidia/lib64
2019-12-26 08:36:33.845323: E external/org_tensorflow/tensorflow/stream_executor/cuda/] failed call to cuInit: UNKNOWN ERROR (303)
2019-12-26 08:36:33.845354: I external/org_tensorflow/tensorflow/stream_executor/cuda/] no NVIDIA GPU device is present: /dev/nvidia0 does not exist
2019-12-26 08:36:33.973097: I external/org_tensorflow/tensorflow/cc/saved_model/] Restoring SavedModel bundle.
2019-12-26 08:36:34.947318: I external/org_tensorflow/tensorflow/cc/saved_model/] Running initialization op on SavedModel bundle at path: /models/run/best/500
2019-12-26 08:36:35.381295: I external/org_tensorflow/tensorflow/cc/saved_model/] SavedModel load for tags { serve }; Status: success. Took 1676844 microseconds.
2019-12-26 08:36:35.395127: I tensorflow_serving/servables/tensorflow/] Wrapping session to perform batch processing
2019-12-26 08:36:35.395179: I tensorflow_serving/servables/tensorflow/] Wrapping session to perform batch processing
2019-12-26 08:36:35.396086: I tensorflow_serving/servables/tensorflow/] No warmup data file found at /models/run/best/500/assets.extra/tf_serving_warmup_requests
2019-12-26 08:36:35.402262: I tensorflow_serving/core/] Successfully loaded servable version {name: my_model version: 500}
2019-12-26 08:36:35.410654: I tensorflow_serving/model_servers/] Running gRPC ModelServer at ...

Then I made a request:

curl -d '{"inputs": {"tokens":[["shān", "lù", "zhòng", "bìng", "yǒu", "yí", "duàn", "yá"]], "length":[8]}}' \

I only got the following message:

Warning: Binary output can mess up your terminal. Use "--output -" to tell
Warning: curl to output it to your terminal anyway, or consider "--output
Warning: <FILE>" to save to a file.

I added --output - to the command, but the output was empty.

Did I miss something? What should I do to successfully serve my model with OpenNMT-tf V2?
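As an aside, the request body can be built and inspected programmatically before sending it. A minimal Python sketch, assuming the same feature names ("tokens", "length") as the curl command above; the endpoint URL in the comment is an assumption based on the --model_name and port used earlier:

```python
import json

# Build the JSON payload expected by TensorFlow Serving's REST predict API.
# The feature names ("tokens", "length") come from the exported model's
# serving signature used in the curl command above.
tokens = [["shān", "lù", "zhòng", "bìng", "yǒu", "yí", "duàn", "yá"]]
payload = {"inputs": {"tokens": tokens, "length": [len(t) for t in tokens]}}

body = json.dumps(payload, ensure_ascii=False)
print(body)

# Assumed endpoint, given --model_name=my_model and a REST port of 9004:
#   POST http://localhost:9004/v1/models/my_model:predict
```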



You should set --rest_api_port instead of --port on the serving command line: --port configures the gRPC endpoint, while the REST API that curl talks to listens on --rest_api_port. With your command, that would be:

docker run -t --rm -p 9004:9004 -v $PWD:/models \
--name tensorflow_serving --entrypoint tensorflow_model_server \
opennmt/tensorflow-serving:2.0.0-gpu \
--enable_batching=true --batching_parameters_file=/models/batching_config.txt \
--rest_api_port=9004 --model_base_path=/models/run/best --model_name=my_model

It works! Thank you very much! :slightly_smiling_face: