Beam search understanding

Hi, I'm having difficulty modifying the beam search so that it only outputs unique terms: if the first word generated is "hi", it should never output "hi" again, and the beam should consider the second-highest candidate instead.

I tried looking at advancer.lua and other files, but it doesn't seem straightforward to achieve this.

Also, my prediction output has a maximum length of 21. Does this mean my beam size should be 21? When I try that, it doesn't run and reports an out-of-memory error.

Hi,

In the beam search code, you have access to the scores output by the model before normalization.

You could bias this Tensor of size [batch x beam x vocab] to force or prevent the prediction of a set of words.
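For the unique-terms case from the question, that means setting the score of every already generated word to -inf before the top-k selection, so those words can never win again. Here is a minimal Torch sketch; the tensor name `scores` and the `generated` table are illustrative, not actual OpenNMT identifiers:

```lua
require 'torch'

-- Illustrative sizes; in OpenNMT these come from the batch and the options.
local batch, beam, vocab = 2, 5, 1000

-- Scores produced by the model before normalization: [batch x beam x vocab].
local scores = torch.randn(batch, beam, vocab)

-- Hypothetical word ids already generated for each batch entry.
local generated = { [1] = {42, 7}, [2] = {13} }

-- Bias the scores so already generated words can never be selected again:
-- -math.huge guarantees they lose against any candidate in the top-k step.
for b = 1, batch do
  for _, wordId in ipairs(generated[b]) do
    scores[{b, {}, wordId}] = -math.huge
  end
end
```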

The beam size is not related to the output length. The prediction stops when the top beam predicts the end-of-sentence token (you can bias that too) or when the length exceeds -max_seq_length.
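Biasing the end-of-sentence token works the same way. For example, you could suppress EOS before a minimum length and force it at the maximum length. A sketch under the assumption that the EOS id is onmt.Constants.EOS (id 4 in OpenNMT-lua); `biasEos`, `minLength`, and `maxLength` are made-up names:

```lua
require 'torch'

local batch, beam, vocab = 2, 5, 1000
local eosId = 4  -- assumed: onmt.Constants.EOS in OpenNMT-lua
local minLength, maxLength = 3, 21

local scores = torch.randn(batch, beam, vocab)

local function biasEos(scores, step)
  if step < minLength then
    -- Too early: make EOS impossible so hypotheses keep growing.
    scores[{{}, {}, eosId}] = -math.huge
  elseif step >= maxLength then
    -- Length limit reached: make EOS the only possible prediction.
    scores:fill(-math.huge)
    scores[{{}, {}, eosId}] = 0
  end
  return scores
end

biasEos(scores, 1)   -- early step: EOS suppressed
biasEos(scores, 21)  -- final step: EOS forced
```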


I don't fully understand the scope of the -pre_filter_factor option, but maybe it could be used to discard translation hypotheses based on a customized filter?

If the hypotheses filtering is aggressive, you may want to consider more hypotheses before applying it by using the -pre_filter_factor option. This way, after filtering you can still work with -beam_size hypotheses.
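As a usage sketch, combining both options on the command line could look like the following (the model and input file names are placeholders; check `th translate.lua -h` for the options available in your version):

```
th translate.lua -model model_final.t7 -src src-test.txt \
  -beam_size 5 -pre_filter_factor 4
```

Here the search would keep 5 * 4 = 20 hypotheses per step before the filter runs, so that 5 usually survive the filtering.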