Slightly different translation results with various batch sizes

Hi,
I am experimenting with around 25 short sentences. I have set beam_size = 5 for translation.

If the batch_size is small, the translated sentences are close to what I expect. But if the batch_size is large, some of the translated sentences are different from what I get with a small batch_size. Is this expected behavior? If so, how is translation quality related to batch_size? I am a little confused by this behavior…
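For reference, this is roughly how I run the two translations (the model and file names below are placeholders); only -batch_size differs between the runs:

```
onmt_translate -model model.pt -src test.src -output out_small.txt \
    -beam_size 5 -batch_size 5
onmt_translate -model model.pt -src test.src -output out_large.txt \
    -beam_size 5 -batch_size 30
diff out_small.txt out_large.txt   # a few sentences come out different
```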

Thanks,
Arbin

What PyTorch version are you using?

It’s PyTorch 1.2 and OpenNMT-py 1.0.0rc1.

Then no, it is not expected. We noticed this on PyTorch 1.3.0 (it's buggy).
Can you try master for OpenNMT-py, with no code modifications?

My current setup uses OpenNMT as a library, and it seems that master has added full_context_alignment and alignment_layer, so changing my code might take some time. I will try to test it in an independent manner.
Btw, I just checked the BLEU score on my test set for batch sizes 5 and 10; the difference is not huge, but it is definitely there.
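In case it helps, this is roughly how I compared the two scores (using sacrebleu; the file names are placeholders from my setup):

```python
# Corpus BLEU for the same test set translated with two batch sizes.
import sacrebleu

refs = [line.strip() for line in open("test.ref", encoding="utf-8")]
hyp_bs5 = [line.strip() for line in open("out_bs5.txt", encoding="utf-8")]
hyp_bs10 = [line.strip() for line in open("out_bs10.txt", encoding="utf-8")]

# sacrebleu expects a list of reference streams, hence the extra brackets.
print("batch_size=5 :", sacrebleu.corpus_bleu(hyp_bs5, [refs]).score)
print("batch_size=10:", sacrebleu.corpus_bleu(hyp_bs10, [refs]).score)
```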

@vince62s, I tested with the master for OpenNMT-py and I am seeing the same issue. Any idea what’s going on?

Also, has anyone else come across this issue? It is quite easy to replicate: changing the batch size of the translator changes the BLEU score (slightly).

Yes, I recall (I think).
This is due to early stopping in beam search.
Read this for more info: https://github.com/OpenNMT/OpenNMT-py/issues/1320
We decided to keep this behavior for performance (speed) reasons.
If you really want to search until beam completion, the issue above has a hint on how to do it.
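To make that concrete, here is a schematic sketch of the two stopping criteria (this is NOT OpenNMT-py's actual implementation, just an illustration). With early stopping, decoding halts as soon as the best hypothesis on the beam is finished, so tiny score differences (which can arise from batching and padding) may flip the stopping decision and hence the output:

```python
# Schematic beam-search stopping criteria (illustration only, not ONMT code).
from dataclasses import dataclass

@dataclass
class Hypothesis:
    score: float      # log-probability accumulated so far
    finished: bool    # has this hypothesis emitted EOS?

def stop_early(beam):
    # Early stop: halt as soon as the top-scoring hypothesis is finished,
    # even though an unfinished one might still overtake it later.
    return max(beam, key=lambda h: h.score).finished

def stop_at_completion(beam, step, max_len):
    # Full search: decode until every hypothesis is finished or a hard
    # length cap is reached, then pick the best finished one.
    return all(h.finished for h in beam) or step >= max_len

beam = [Hypothesis(-1.20, True), Hypothesis(-1.21, False)]
print(stop_early(beam))                               # True: stop now
print(stop_at_completion(beam, step=10, max_len=50))  # False: keep searching
```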

@vince62s, thanks for the info.