I was wondering how extra information can be passed to the Transformer during training. I believe the previous version of OpenNMT-py supported source-side features, but as far as I know this functionality was removed in v2.0. Is there any way I can add extra information, the way positional encodings are added to the input embeddings?
How were the former features combined with the input embeddings (concatenation, addition, …)? Which approach do you think is best for adding extra token-level information (POS tags, classification labels, …)?
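To be concrete, the kind of concatenation I have in mind would look something like this. This is just a toy NumPy sketch, not OpenNMT-py code; all names, table sizes, and dimensions are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB_SIZE, NUM_POS_TAGS = 100, 17   # hypothetical vocabulary / tagset sizes
WORD_DIM, FEAT_DIM = 12, 4           # hypothetical embedding dimensions
MODEL_DIM = 16                       # Transformer hidden size

# Hypothetical lookup tables (in a real model these would be learned).
word_emb = rng.normal(size=(VOCAB_SIZE, WORD_DIM))
feat_emb = rng.normal(size=(NUM_POS_TAGS, FEAT_DIM))
proj = rng.normal(size=(WORD_DIM + FEAT_DIM, MODEL_DIM))

def embed(token_ids, feat_ids):
    """Concatenate word and feature embeddings, then project to the model dim."""
    combined = np.concatenate([word_emb[token_ids], feat_emb[feat_ids]], axis=-1)
    return combined @ proj  # shape: (seq_len, MODEL_DIM)

tokens = np.array([5, 42, 7])    # token ids for a 3-token sequence
pos_tags = np.array([1, 3, 0])   # one POS-tag id per token
out = embed(tokens, pos_tags)
print(out.shape)  # (3, 16)
```

The alternative would be to give the feature embedding the same dimension as the word embedding and simply add the two, the way positional encodings are added. I am not sure which of these the old feature mechanism used, hence the question.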