Is this proper name an unknown word for your models?
That is, if you translate the sentences without the '-replace_unk' option, does an '<unk>' token appear? How many '<unk>' tokens does each model generate?
Maybe this difference has to do with the alignments learned by the attention mechanism of each model: the first one can recover the entire name, while the second can recover only the first name.
Another explanation is that the second model simply generates a translation of the form '<unk> wordt voorzitter', so it is to be expected that it introduces only one word.
Have you tried other instances of your models from other epochs? Does this behaviour repeat throughout the translations in general, or is it an isolated example?
Maybe at another point in the training -with a model from another epoch- it can translate this sentence better.
If it is a general error, you can try to continue training your model with sentence examples of this kind (Name1 Name2 verb object) to refine its behaviour.