Same source/target embeddings for intralingual tasks

For intralingual tasks (paraphrasing, summarization, chatbots, spell checking, etc.), where the source and target are in the same language, it seems logical to learn only one set of word embeddings shared between the encoder and decoder.
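For concreteness, here is a minimal PyTorch sketch of what sharing one embedding table between encoder and decoder looks like; the model class and names are illustrative, not from any particular codebase:

```python
import torch
import torch.nn as nn

# Hypothetical seq2seq model with tied source/target embeddings.
class TiedSeq2Seq(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int, hidden_dim: int):
        super().__init__()
        # One shared embedding table used by both encoder and decoder.
        self.shared_embedding = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, src_ids: torch.Tensor, tgt_ids: torch.Tensor):
        # Both sides look up vectors in the same table.
        src_emb = self.shared_embedding(src_ids)
        tgt_emb = self.shared_embedding(tgt_ids)
        _, state = self.encoder(src_emb)
        dec_out, _ = self.decoder(tgt_emb, state)
        return self.out(dec_out)
```

The alternative is to give each side its own `nn.Embedding`, which roughly doubles the embedding parameters but lets the encoder and decoder specialize.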

In practice, people generally don't do this. Tying the embeddings saves some memory in the model, but I haven't seen it improve results.

In my own experiments, I tried this strategy and the results actually got worse.