Logging the BLEU metric during training?

I’ve been following the tutorials to train a Transformer topology on the toy German/English corpus. It logs the loss every 50 steps, but is there a way to also print the BLEU score on the validation dataset? Is there a flag for that? Could someone show me an example?

Also, during training I get the following metrics:

INFO:tensorflow:global_step/sec: 0.733885
INFO:tensorflow:words_per_sec/features: 1063.43
INFO:tensorflow:words_per_sec/labels: 1056.99

I know that global_step/sec is the number of training steps (forward/backward passes) per second. What do “words_per_sec/features” and “words_per_sec/labels” mean?

Thanks so much.

Looks like you want to use the external_evaluator option from the eval section:

It reports the BLEU score on the tokenized translation every eval_delay seconds when using the train_and_eval run type.
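For reference, here is roughly what that looks like in the YAML configuration (the values are illustrative, and the exact key names may differ slightly between OpenNMT-tf versions):

```yaml
eval:
  # Run evaluation at most once per hour (in seconds).
  eval_delay: 3600
  # Score the validation predictions with an external BLEU evaluator.
  external_evaluators: BLEU
```

Then launch training with the train_and_eval run type so that evaluation is interleaved with training.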

I am also looking for an answer to this. Is there anything similar in OpenNMT-py?

This is not implemented in OpenNMT-py (https://github.com/OpenNMT/OpenNMT-py/issues/1158).
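Since it’s not built into OpenNMT-py, one workaround is to translate the validation set periodically and score the output yourself. Below is a minimal, self-contained sketch of corpus-level BLEU (single reference per sentence, no smoothing); for reported numbers you’d normally use sacrebleu or multi-bleu.perl instead.

```python
import math
from collections import Counter


def ngrams(tokens, n):
    """Count the n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))


def corpus_bleu(hypotheses, references, max_n=4):
    """Unsmoothed corpus BLEU over parallel lists of token lists.

    hypotheses and references are aligned lists, one reference per hypothesis.
    """
    matches = [0] * max_n   # clipped n-gram matches per order
    totals = [0] * max_n    # hypothesis n-gram counts per order
    hyp_len = ref_len = 0
    for hyp, ref in zip(hypotheses, references):
        hyp_len += len(hyp)
        ref_len += len(ref)
        for n in range(1, max_n + 1):
            hyp_ng = ngrams(hyp, n)
            ref_ng = ngrams(ref, n)
            matches[n - 1] += sum(min(c, ref_ng[g]) for g, c in hyp_ng.items())
            totals[n - 1] += max(len(hyp) - n + 1, 0)
    if min(matches) == 0:
        return 0.0  # some n-gram order has no match; unsmoothed BLEU is zero
    # Geometric mean of the n-gram precisions, in log space.
    log_precision = sum(math.log(m / t) for m, t in zip(matches, totals)) / max_n
    # Brevity penalty for hypotheses shorter than the references.
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / hyp_len)
    return 100.0 * bp * math.exp(log_precision)


# Example: a perfect match scores 100.
hyp = "the cat sat on the mat".split()
print(corpus_bleu([hyp], [hyp]))  # → 100.0
```

You could call something like this from a validation hook or a small script that runs `onmt_translate` on the dev set every N checkpoints.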