Nmt-wizard-docker for OpenNMT-tf V2

Hi Terence,

Thanks, that’s useful! How about tokenization and the other options for each model? Are they defined in that same config under each model’s section, or are they included in the JSON config file in each model’s directory?

Hi Panos,
In my experimental set-up I have retained the tokenization info in the JSON config file. However, the input is tokenized and the output detokenized on the client side, as seen in the following code snippet:
# Tokenize each source sentence before sending the request to the model server.
batch_input = [tokenizer.tokenize(text)[0] for text in batch_text]
future = send_request(stub, model_name, batch_input, timeout=timeout)
result = future.result()
# Detokenize the model output before returning it to the caller.
batch_output = [tokenizer.detokenize(prediction) for prediction in extract_prediction(result)]
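
For anyone following along, the tokenizer in that snippet can simply be an OpenNMT pyonmttok tokenizer built from the same options stored in the model’s JSON config. A minimal sketch, assuming aggressive mode with joiner annotation (placeholder options; use whatever your model was actually trained with):

import pyonmttok

# Placeholder options: take these from the tokenization section of the model's JSON config.
tokenizer = pyonmttok.Tokenizer("aggressive", joiner_annotate=True)

tokens, _ = tokenizer.tokenize("Hello world!")  # e.g. ["Hello", "world", "￭!"]
text = tokenizer.detokenize(tokens)             # back to "Hello world!"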

I see… I will test this soon and let you know if and how tokenization can be performed server-side with the config files.

@guillaumekln: how can I enable logging for serving? With v1 I could see the connections (times, IPs, etc.) without enabling any logging options, but now I can’t.

In Python 3, writes to stderr are buffered by default. Maybe you can try starting the Docker image with -e PYTHONUNBUFFERED=1.
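
For example, just adding the variable to whatever docker run command you already use (the image name and serve arguments below are placeholders):

docker run -p 5000:5000 \
    -e PYTHONUNBUFFERED=1 \
    <your-serving-image> <your-usual-serve-arguments>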


Perfect, that works, thanks a lot!

“Client side” is confusing, sorry. The code I posted above was taken from a client script, but it is actually executed “server side” in a Flask proxy, an intermediary server that sits between the various clients and the model server. This intermediary server is needed to direct traffic because I am still serving some Torch models alongside the TensorFlow models.
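
In case it is useful to anyone, the routing part of the proxy is nothing fancy. A stripped-down sketch of the idea, with made-up endpoint names and backend URLs for illustration (the real proxy also runs the tokenize/detokenize code shown above before and after calling the TensorFlow backend):

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

# Illustrative backends: some language pairs are still served by the old Torch
# server, the others by the TensorFlow model server.
BACKENDS = {
    "ende": "http://torch-server:7784/translator/translate",
    "enfr": "http://tf-proxy:5000/translate",
}

@app.route("/translate/<pair>", methods=["POST"])
def translate(pair):
    # Forward the request body to whichever backend serves this language pair.
    response = requests.post(BACKENDS[pair], json=request.get_json(), timeout=60)
    return jsonify(response.json())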

Hi Panos, just to add to this post: as @guillaumekln has mentioned elsewhere, we can export a TF1.x checkpoint in TF2.x and serve the model in TF2 from the model configuration list. I have just done this on my test server to prove to myself it can be done :slight_smile:
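
For reference, once the V1 checkpoint has been exported to a SavedModel with OpenNMT-tf 2.x, the exported directory just gets a normal entry in the TensorFlow Serving model configuration list (the model name and path below are placeholders):

model_config_list {
  config {
    name: "my_ende_model"
    base_path: "/models/my_ende_model"
    model_platform: "tensorflow"
  }
}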

Great, thanks for the info, Terence! I’ve got a few models that need conversion.