[OpenNMT-tf] Multitask learning support


Since OpenNMT-tf has a modular structure and makes it easy to build a model with several encoders,
it would be logical to have an interface for adding several decoders as well.

E.g. input a single sentence -> output POS tags and a translation of the sentence.

What are your thoughts on this?



Yes, that would be a great addition! We could add a ParallelDecoder that does this in the same fashion as ParallelEncoder.
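To make the idea concrete, here is a minimal sketch of what a ParallelDecoder could look like: it fans a shared encoder representation out to several decoders, one output head per task. This is not the OpenNMT-tf API; the class and the toy decoders below are hypothetical, just illustrating the fan-out pattern.

```python
# Hypothetical sketch, not the actual OpenNMT-tf interface.
class ParallelDecoder:
    """Applies each child decoder to the same encoder output."""

    def __init__(self, decoders):
        # decoders: dict mapping a head name to a callable decoder.
        self.decoders = decoders

    def __call__(self, encoder_output):
        # One output head per child decoder, keyed by name.
        return {name: decode(encoder_output)
                for name, decode in self.decoders.items()}

# Toy decoders standing in for a POS tagger and a translator.
pos_decoder = lambda enc: ["NOUN" for _ in enc]
mt_decoder = lambda enc: [tok.upper() for tok in enc]

decoder = ParallelDecoder({"tags": pos_decoder, "translation": mt_decoder})
outputs = decoder(["hello", "world"])
# outputs["tags"] → ["NOUN", "NOUN"], outputs["translation"] → ["HELLO", "WORLD"]
```

In the real implementation each head would of course be a full attentional decoder sharing the encoder states, mirroring how ParallelEncoder combines several encoders.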

It should be relatively straightforward but there are some details to take care of:

  • support separate values for the decoding parameters (beam_width, length_penalty, etc.);
  • adapt the parts of SequenceToSequence that assume a single output head (e.g. loss computation, reverse vocabulary lookup, exported outputs for model serving).
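For the loss computation point, one common approach is a weighted sum of per-head losses, with per-head decoding parameters kept separate. A rough sketch, where all names and values are illustrative assumptions rather than existing OpenNMT-tf options:

```python
# Hypothetical sketch: combine one loss term per decoder head into a
# single scalar training objective via task weights.
def multitask_loss(head_losses, weights):
    """Weighted sum of per-head losses."""
    return sum(weights[name] * loss for name, loss in head_losses.items())

# Separate decoding parameters per head (beam_width, length_penalty, ...),
# so translation can use beam search while tagging decodes greedily.
decoding_params = {
    "translation": {"beam_width": 5, "length_penalty": 0.6},
    "tags": {"beam_width": 1, "length_penalty": 0.0},
}

loss = multitask_loss({"translation": 2.0, "tags": 0.5},
                      {"translation": 1.0, "tags": 0.3})
# loss = 2.0 * 1.0 + 0.5 * 0.3 = 2.15
```

The weights would likely need to be user-configurable, since the tasks can differ a lot in scale and importance.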

Are you interested in working on this?