Applicable OpenNMT-tf release

Is the OpenNMT-tf pretrained model (averaged-ende-export500k) now trained with OpenNMT-tf V2?

No, it was trained with OpenNMT-tf v1.

On the other hand, https://s3.amazonaws.com/opennmt-models/averaged-ende-export500k-v2.tar.gz is a TensorFlow 2.0 export of “averaged-ende-export500k”.

And thanks for getting the opennmt/tensorflow-serving 2.2.0-gpu Docker image up there so quickly. That’s much appreciated! Does it include the result of the three scripts given in master/examples/serving/tensorflow_serving/docker?

Yes. It adds an op that is missing from the official TensorFlow Serving images.

Sorry, again. I asked on Gitter, but in case you didn’t see it: do you have a curl request handy for testing? I have unsuccessfully tried to adapt the curl command I used for the v1 TensorFlow model server, based on the TensorFlow tutorials.

The request should be the same. Could you describe what you tried so far?

I replaced "translate" with "predict" and tried:

curl -v -H "Content-Type: application/json" -X POST http://127.0.0.1:9000/models/tmodel:predict -d '{"src":[{"text": "I want to go to the moon."}]}'

The command docker ps shows that opennmt/tensorflow-serving:2.0.0-gpu is running and listening on port 9000.

Ah, the syntax {"src":[{"text": "..."}]} is for nmtwizard/opennmt-tf only.

The Docker image opennmt/tensorflow-serving is just a plain TensorFlow Serving server (same as tensorflow/serving). It can be used like the serving example in the OpenNMT-tf repository.
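For reference, a request to a plain TensorFlow Serving instance hosting an OpenNMT-tf export looks roughly like the sketch below (a hedged example: the port, the model name "ende", and the pre-tokenized input are assumptions to adapt to your setup):

curl -H "Content-Type: application/json" -X POST \
    http://localhost:8501/v1/models/ende:predict \
    -d '{"inputs": {"tokens": [["Hello", "world", "!"]], "length": [3]}}'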

Unfortunately what you want is not there yet. We still need to update nmtwizard/opennmt-tf to use TensorFlow v2 and OpenNMT-tf v2. If you rely on this project to serve your models, I suggest delaying a full transition to OpenNMT-tf v2.

OK, thanks. Just to get it clear in my mind: to avoid re-installing OpenNMT-tf v1 and introducing possible conflicts, can I for the time being use the nmtwizard/opennmt-tf Docker image to train OpenNMT-tf v1 models? Most of my plug-ins and client apps are still geared to v1.

You can do that. It includes the latest OpenNMT-tf v1 release:

$ docker run -i --rm --entrypoint onmt-main nmtwizard/opennmt-tf --version
OpenNMT-tf 1.25.2
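If it helps, here is a hedged sketch of running a v1 training through the same image by overriding the entrypoint, as in the version check above (the mounted directory, config.yml, and the Transformer model type are placeholders for illustration):

$ docker run -i --rm -v $PWD:/workspace -w /workspace \
      --entrypoint onmt-main nmtwizard/opennmt-tf \
      train_and_eval --model_type Transformer --auto_config --config config.yml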

I now have the opennmt/tensorflow-serving 2.2.0-gpu Docker image running and serving averaged-ende-export500k-v2 for test purposes.
On the following request the server sends back garbage, which I assume is due to the lack of tokenization. Does anyone know off-hand the tokenization used in "wtmende" so that I can replicate it in my request?
#!/bin/ksh
curl -X POST -d '{
  "inputs": {
    "tokens": [["Hello", "world", "!", ""], ["How", "are", "you", "?"]],
    "length": [3, 4]
  }
}' http://localhost:9000/v1/models/ende:predict

Can you try with ["▁H", "ello", "▁world", "!"]? It’s a plain SentencePiece tokenization.
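If you want to reproduce that tokenization offline, here is a hedged sketch using the SentencePiece command-line tools (the wmtende.model file name is an assumption; use whichever SentencePiece model ships with the export):

$ echo "Hello world!" | spm_encode --model=wmtende.model --output_format=piece
▁H ello ▁world !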

I modified my request as suggested and got this:
miguel@joshua:~$ ~/test_request.sh
@ @ ?
I will try with a small TF2 model of my own and see what happens then.

How did you launch the TensorFlow Serving instance? It seems you did not enable the REST endpoint. See:

https://www.tensorflow.org/tfx/serving/api_rest#start_modelserver_with_the_rest_api_endpoint

I’ve now added the REST API port and the server launch command is:

sudo nvidia-docker run --rm -p 9000:9000 -v $PWD:/models \
    --name tensorflow_serving --entrypoint tensorflow_model_server \
    opennmt/tensorflow-serving:2.0.0-gpu \
    --port=9000 --rest_api_port=8501 --model_base_path=/models/ende --model_name=ende

The request still receives the same output as a response. I’m puzzled…

You should replace 9000 by 8501 in the Docker -p option and in your request.
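For example, a hedged version of the corrected launch and request, carrying over the paths and model name from the command above:

sudo nvidia-docker run --rm -p 8501:8501 -v $PWD:/models \
    --name tensorflow_serving --entrypoint tensorflow_model_server \
    opennmt/tensorflow-serving:2.0.0-gpu \
    --port=9000 --rest_api_port=8501 --model_base_path=/models/ende --model_name=ende

curl -X POST http://localhost:8501/v1/models/ende:predict \
    -d '{"inputs": {"tokens": [["▁H", "ello", "▁world", "!"]], "length": [4]}}'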

The TensorFlow model server is now talking back to me properly. Thanks.