Syntactic model using UD-Trees for machine translation task with OpenNMT

I wonder how I could implement a syntactic Transformer (or something more suitable) with OpenNMT and train it on UD graphs as input and output data: Lemma, Head, Dependency relation, UTAG, and maybe Feats (e.g. Gender and/or Number)?

Does anyone have an idea how much BLEU could be gained by using grammar graphs instead of plain token embeddings with word-feature embeddings such as Lemma and/or UTAG?

I know that I can use Lemmas, UTAG, and Feats as additional features on source and target words (by the way, does anyone have experience with BLEU gains from this?). But it could be more interesting to bring Heads and Dependency relations into the picture.
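As a starting point, the word-features route is straightforward to serialize. A minimal sketch below, assuming the `￨`-separated "token￨feature" format that OpenNMT-py's word-features preprocessing expects; the sentence and its UD annotations are invented for illustration:

```python
# Sketch: serialize tokens with word-level features in the
# "token￨feat1￨feat2" format used by OpenNMT-py word features.
# The sentence, lemmas, and UPOS tags below are made up.

SEP = "￨"  # feature separator assumed for OpenNMT-py preprocessing


def attach_features(tokens, lemmas, upos):
    """Join each surface token with its lemma and UPOS tag."""
    return " ".join(SEP.join(cols) for cols in zip(tokens, lemmas, upos))


line = attach_features(
    ["The", "cats", "sleep"],
    ["the", "cat", "sleep"],
    ["DET", "NOUN", "VERB"],
)
print(line)  # The￨the￨DET cats￨cat￨NOUN sleep￨sleep￨VERB
```

Each feature column then gets its own embedding that is concatenated (or summed) with the word embedding on the encoder side.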

Thus, the translation pipeline could be:

  1. Parse Universal Dependencies (UPOS, Feats, Lemmas, etc.) from a sentence (e.g. with a BERT self-attention model or a biaffine attention parser).
  2. Feed the parsed tree to the OpenNMT model.
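Since a vanilla seq2seq model only consumes flat token sequences, step 2 needs the Heads and Dependency relations linearized somehow. One common option is a depth-first bracketed linearization; a hedged sketch below, with CoNLL-U-style 1-based head indices (0 = root) and an invented example sentence:

```python
# Sketch: linearize a UD parse (head indices + deprels) into a flat
# token sequence that a seq2seq model such as OpenNMT can consume.
# Heads use CoNLL-U convention: 1-based indices, 0 marks the root.


def linearize(tokens, heads, deprels):
    """Depth-first walk from the root, emitting "(deprel token ... )" spans."""
    children = {i: [] for i in range(len(tokens) + 1)}
    for i, h in enumerate(heads, start=1):
        children[h].append(i)

    def walk(i):
        parts = [f"({deprels[i - 1]}", tokens[i - 1]]
        for c in children[i]:
            parts.extend(walk(c))
        parts.append(")")
        return parts

    root = children[0][0]  # the single token whose head is 0
    return " ".join(walk(root))


# "cats sleep": "cats" is headed by token 2 ("sleep"), which is the root.
print(linearize(["cats", "sleep"], [2, 0], ["nsubj", "root"]))
# (root sleep (nsubj cats ) )
```

The bracket tokens and deprel labels just become part of the source vocabulary, so no model changes are needed; whether this beats plain word features in BLEU is exactly the open question above.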