OpenNMT Forum

Using predict accuracy with n_best


I have two questions about OpenNMT-tf inference.

1 - I can get 5 results for each input using n_best. But I can’t get an accuracy value for each prediction. How do I get the accuracy value for each prediction?
2 - How do I use Beam Search in my code?


My Code:

import os
import numpy as np
import tensorflow as tf
from tensorflow.contrib.seq2seq.python.ops import beam_search_ops

dirExport = "run/export/latest/1234567890"
dirData = "processing\\process.txt"
dirSave = "processing\\result.txt"
maxSequenceLength = 50
nBest = 5

sess = tf.Session()
with sess.as_default():
    signature_def = tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING], dirExport).signature_def["serving_default"]
    input_tokens = signature_def.inputs["tokens"].name
    input_length = signature_def.inputs["length"].name
    output_tokens = signature_def.outputs["tokens"].name
    output_length = signature_def.outputs["length"].name

if __name__ == "__main__":
    with open(dirData, 'r', encoding='utf-8') as o:
        lines = o.readlines()
        words = ''
        c = 0
        t = []
        l = []

        for k, line in enumerate(lines):
            letters = list(line.replace("\n", "").strip())
            # Pad every sequence to the same length with empty tokens.
            letterTmp = letters + ([""] * (maxSequenceLength - len(letters)))
            t.append(letterTmp)
            l.append(len(letters))

        batch_tokens, batch_length = sess.run(
            [output_tokens, output_length],
            feed_dict={input_tokens: t, input_length: l})

        for tokens, length in zip(batch_tokens, batch_length):
            if nBest <= len(tokens):
                for i in range(nBest):
                    tokens_, length_ = tokens[i], length[i]
                    length_ -= 1
                    word = ''

                    # Don't reuse the name "t" here: it already holds the input batch.
                    for token in tokens_[:length_]:
                        word += token.decode('utf-8')
                    words += lines[c].replace("\n", "").strip() + "=" + word + "\n"
            c += 1

        with open(dirSave, 'w+', encoding='utf-8') as f:
            f.write(words)


  1. What do you mean by accuracy in the context of inference?
  2. If the exported model outputs 5 predictions, it is already running beam search.
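
For reference, beam search is configured before exporting, in the run configuration. A minimal sketch of the relevant YAML (parameter names as used by OpenNMT-tf; adjust values to your setup):

```yaml
params:
  beam_width: 5   # number of beams explored during decoding
infer:
  n_best: 5       # hypotheses returned per input (at most beam_width)
```

The exported SavedModel then runs beam search internally, which is why the serving signature already returns multiple hypotheses.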

Firstly, thanks for your answer.

I need a probability (confidence score) for each prediction during inference.

See the log_probs field in the output signature.

Thank you. This worked.

I added

output_probs = signature_def.outputs["log_probs"].name

and

batch_tokens, batch_length, batch_probs = sess.run(
    [output_tokens, output_length, output_probs],
    feed_dict={input_tokens: t, input_length: l})

Hopefully this helps others.
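
One note on interpreting the output: log_probs holds the cumulative log-probability of each hypothesis, so a higher (less negative) value means a more likely prediction. To turn the n_best scores of one input into relative probabilities that sum to 1, you can apply a softmax. A minimal sketch (the exact scale also depends on the model's length penalty, so treat these as relative scores, not calibrated accuracies):

```python
import numpy as np

def hypothesis_scores(log_probs):
    """Convert the n_best log-probabilities of one input into
    normalized relative scores that sum to 1."""
    log_probs = np.asarray(log_probs, dtype=np.float64)
    # Subtract the max before exponentiating for numerical stability.
    shifted = log_probs - log_probs.max()
    probs = np.exp(shifted)
    return probs / probs.sum()

# Example with made-up log-probabilities for 3 hypotheses:
scores = hypothesis_scores([-0.5, -1.2, -3.0])
```

With the code above, you would call `hypothesis_scores(batch_probs[i])` for input `i` after the `sess.run` call.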