Extracting attention weights of each translated sentence using OpenNMT-py

Is there any way in the OpenNMT PyTorch version to save the attention weights of each translated sentence?

I found a thread for a similar issue, but it's for the Lua version. In Lua there is a direct -save_attention parameter to save the values to a file.

Can anyone guide me on how to do this for the PyTorch version?

I want something like:
1 ||| source sentence ||| score ||| tokenized target sentence ||| number number
matrix of attention weights
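
For what it's worth, here is a minimal sketch of the format I'm after, assuming the per-sentence attention matrices have already been pulled out of the translator somehow. The write_attention helper and the translations list below are purely illustrative, not OpenNMT-py API:

```python
import numpy as np

def write_attention(fh, idx, src_sent, score, tgt_tokens, attn):
    # Header: index ||| source ||| score ||| tokenized target ||| n_rows n_cols
    header = " ||| ".join([
        str(idx),
        src_sent,
        "%.4f" % score,
        " ".join(tgt_tokens),
        "%d %d" % attn.shape,
    ])
    fh.write(header + "\n")
    # One line of weights per target token; one column per source token.
    for row in attn:
        fh.write(" ".join("%.6f" % w for w in row) + "\n")

# Toy data standing in for real translator output: each entry is
# (source sentence, score, target tokens, attention matrix).
translations = [
    ("a source sentence", -1.23,
     ["a", "target", "sentence"],
     np.array([[0.5, 0.3, 0.2],
               [0.1, 0.8, 0.1],
               [0.2, 0.2, 0.6]])),
]

with open("attentions.txt", "w") as fh:
    for i, (src, score, tgt, attn) in enumerate(translations, start=1):
        write_attention(fh, i, src, score, tgt, attn)
```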

Many thanks in advance.

Did you check if Getting attention weights helps?

@guillaumekln I tried this but it didn't help much. Is there any other method?

I also found a parameter in the documentation, -attn_debug. Can you please explain it a bit?

Thanks

Hi @Ravneet,
As Guillaume mentioned, a similar question was answered here:

As said above, the -attn_debug flag should be what you're looking for (it outputs the source/target attention matrix in text format). However, depending on your architecture you might need to adapt it. E.g. for the Transformer, it will only output the first attention head, IIRC.
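
For reference, a typical run would look something like this (the model and file names are just placeholders). IIRC the attention matrices are printed as plain text to standard output, so redirecting it to a file is handy:

```
python translate.py -model model.pt -src src.txt -output pred.txt -attn_debug > attn.txt
```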