How to build a Docker image with the TensorFlow Serving environment and a trained model?

I use the TensorFlow Serving GPU image to run a model trained with OpenNMT-tf V2. Here is my command:

docker run -t --rm -p 8501:8501 -v $PWD:/models \
--name tensorflow_serving \
--entrypoint tensorflow_model_server opennmt/tensorflow-serving:2.0.0-gpu \
--enable_batching=true --batching_parameters_file=/models/batching_config.txt \
--rest_api_port=8501 --model_base_path=/models/run/export/model --model_name=my_model
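
For reference, the file passed to --batching_parameters_file is a TensorFlow Serving text protobuf. A minimal batching_config.txt might look like the following; the values here are illustrative, not necessarily the ones I use:

# Maximum number of requests merged into one batch.
max_batch_size { value: 32 }
# How long to wait (in microseconds) for a batch to fill up.
batch_timeout_micros { value: 5000 }
# Maximum number of batches queued before rejecting requests.
max_enqueued_batches { value: 100 }
# Number of threads processing batches in parallel.
num_batch_threads { value: 4 }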

For serving in production, I need to package my model and the model server into a single Docker image. I tried to export the container created by the command above to a file and import it back as a Docker image, but the resulting image did not run successfully.
Did I miss something? What should I do to build a Docker image that serves my model?

This is more a question about Docker than OpenNMT-tf.

You probably want to create a Dockerfile that:

  1. starts from opennmt/tensorflow-serving
  2. copies the model into the image
  3. defines an entrypoint that starts the server (see the sketch below)
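
A minimal Dockerfile along these lines might look like this. It is a sketch that assumes the image tag, paths, and model name from your command above; adjust them to your setup:

FROM opennmt/tensorflow-serving:2.0.0-gpu

# Copy the exported model and the batching configuration into the image.
COPY run/export/model /models/my_model
COPY batching_config.txt /models/batching_config.txt

# Start the model server when the container runs.
ENTRYPOINT ["tensorflow_model_server", \
            "--enable_batching=true", \
            "--batching_parameters_file=/models/batching_config.txt", \
            "--rest_api_port=8501", \
            "--model_base_path=/models/my_model", \
            "--model_name=my_model"]

You can then build and run the image without mounting the model directory (my_model_serving is a hypothetical tag):

docker build -t my_model_serving .
docker run -t --rm -p 8501:8501 --name tensorflow_serving my_model_serving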

@guillaumekln Thank you for your kind reply. Your suggestion helped me solve the problem!