Where are the additional options for the base model?

The About section says that there are additional options on top of the baseline model, and a list of papers is provided whose implementations are said to be available.
Can someone kindly tell me where I can find / use these “options” on top of the base Seq2Seq model?
More specifically, I want to experiment with:

Local attention (local-m / local-p)
Character-based word embeddings
Fast-forward connections

Any help is highly appreciated :slight_smile:

Hi,

This list is a bit misleading: these are not necessarily strict implementations of all of the papers, but rather ideas drawn from them. The particular features you listed are not currently (or not yet) in the public code.
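
In case it helps while these are missing: local-p attention is fairly easy to prototype on its own. Below is a minimal NumPy sketch of a single decoder step, following the formulas in Luong et al. (2015); the function and parameter names (`local_p_attention`, `W_p`, `v_p`, `D`) are illustrative and do not correspond to anything in this codebase.

```python
# Minimal NumPy sketch of local-p attention (Luong et al., 2015) for one
# decoder step. Illustrative only: names like W_p, v_p and D follow the
# paper, not this codebase, and the score is a plain dot product.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def local_p_attention(h_t, enc_states, W_p, v_p, D=5):
    """
    h_t        : (dim,)   current decoder hidden state
    enc_states : (S, dim) encoder hidden states
    W_p, v_p   : parameters used to predict the aligned position p_t
    D          : half window size
    """
    S = enc_states.shape[0]
    # Predicted aligned position p_t in [0, S): S * sigmoid(v_p . tanh(W_p h_t))
    p_t = S / (1.0 + np.exp(-(v_p @ np.tanh(W_p @ h_t))))
    # Attend only inside the window [p_t - D, p_t + D].
    lo, hi = max(0, int(p_t) - D), min(S, int(p_t) + D + 1)
    window = enc_states[lo:hi]
    align = softmax(window @ h_t)
    # Favor positions near p_t with a Gaussian, sigma = D / 2.
    positions = np.arange(lo, hi)
    align = align * np.exp(-((positions - p_t) ** 2) / (2 * (D / 2.0) ** 2))
    context = align @ window  # weighted sum of encoder states in the window
    return context, align
```

Local-m is the same idea, except that p_t is simply set to the current target position (monotonic alignment) and the Gaussian term is dropped.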