Is specifying input_feed required to use global attention?

Hello,

I am new to NMT and am looking at the features the framework provides. While reading the train.py documentation, I stumbled upon the input_feed option, whose description states: "Feed the context vector at each time step as an additional input (via concatenation with the word embeddings) to the decoder". I also noticed the global_attention option, which lets you configure which attention type to use. My question: is input_feed required to be specified if I want to use global_attention?
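To check my understanding of the "concatenation with the word embeddings" part, here is a minimal toy sketch of what I think one decoder step with input feeding plus dot-product global attention looks like (all names, sizes, and weights are mine, not from the framework):

```python
import numpy as np

np.random.seed(0)

emb_dim, hid_dim, src_len = 4, 4, 5  # toy sizes, chosen arbitrarily

# Hypothetical toy weights and encoder outputs, just to illustrate shapes
W = np.random.randn(hid_dim, emb_dim + hid_dim)  # decoder input projection
encoder_states = np.random.randn(src_len, hid_dim)

def decoder_step(word_emb, prev_context):
    # Input feeding: concatenate the previous context vector with the
    # current word embedding before feeding the decoder.
    rnn_input = np.concatenate([word_emb, prev_context])
    hidden = np.tanh(W @ rnn_input)  # stand-in for the real RNN cell

    # Global (dot-product) attention over all encoder states.
    scores = encoder_states @ hidden
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()         # softmax over source positions
    context = weights @ encoder_states  # new context vector
    return hidden, context

# Two steps: the context from step 1 is fed back into step 2.
h, c = decoder_step(np.random.randn(emb_dim), np.zeros(hid_dim))
h, c = decoder_step(np.random.randn(emb_dim), c)
print(c.shape)  # context vector has the hidden dimension: (4,)
```

So as I read it, the attention computes a context vector either way; input_feed just additionally routes that vector back into the next step's input.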

Thanks

Hello, there is no dependency - these are two independent features.

Thank you so much for responding @jean.senellart. So if I understand this correctly, specifying input_feed will only affect the training phase and not the translation phase, since teacher forcing is already applied during training, as mentioned here. And if I only use global_attention without input_feed, the context vector will only be used during the translation phase? Am I correct?

Best,
Dillon