Hi @Ravneet,
As Guillaume mentioned, a similar question was answered here:
As said above, the `-attn_debug` flag should be what you're looking for: it outputs the source/target attention matrix in text format. However, depending on your architecture you might need to adapt it. E.g. for Transformer, it'll only output the first attention head, IIRC.
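
For reference, a minimal sketch of how the flag is passed to OpenNMT-py's `translate.py` (the checkpoint and file names below are placeholders, not from this thread):

```bash
# Placeholder file names; substitute your own checkpoint and source file.
# -attn_debug prints the source/target attention matrix for each translated
# sentence to stdout (for Transformer models, only the first head, as noted above).
python translate.py \
    -model model_step_10000.pt \
    -src src-test.txt \
    -output pred.txt \
    -attn_debug
```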