It all depends on the training data and what the model has learned.
Generally, if a source word is out of vocabulary, the network will likely translate it as an unknown token, and the -replace_unk feature might then be able to copy the source word into the target. But remember that the model is not doing a word-by-word translation: it might simply drop the word, and the copy can also fail precisely because the source word is unknown.
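For reference, -replace_unk is passed at translation time. With the Lua version of OpenNMT that would look something like this (file names are illustrative):

```
th translate.lua -model model.t7 -src input.txt -output output.txt -replace_unk
```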
A more consistent approach is to apply the same logic used for named entities to the training data: replace such words with placeholder tokens that also appear in the training corpus, so the model learns to copy them through, then substitute the real words back after translation.
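A minimal sketch of that idea (the function and placeholder names here are illustrative, not an OpenNMT API). The key point is that the same placeholder tokens must occur in the training data, otherwise the model has no reason to pass them through:

```python
def protect_oov(tokens, vocab):
    """Replace tokens missing from the vocabulary with numbered placeholders."""
    mapping = {}
    protected = []
    for tok in tokens:
        if tok in vocab:
            protected.append(tok)
        else:
            ph = "__ph_%d__" % len(mapping)
            mapping[ph] = tok
            protected.append(ph)
    return protected, mapping

def restore_oov(tokens, mapping):
    """Put the original words back in place of the placeholders."""
    return [mapping.get(tok, tok) for tok in tokens]

src = "Please email jsmith@example.com today".split()
vocab = {"Please", "email", "today"}
protected, mapping = protect_oov(src, vocab)
# protected -> ['Please', 'email', '__ph_0__', 'today']
# ... run the NMT system on `protected`, then restore on its output ...
restored = restore_oov(protected, mapping)
```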
Personal thoughts: you could just build a radix tree with all the words you can translate and query it before each neural translation.
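To make that concrete, here is a small sketch of the lookup (a plain trie for simplicity; a real radix tree would compress chains of single-child nodes, but the query logic is the same). This is illustrative code, not part of OpenNMT:

```python
class TrieNode:
    __slots__ = ("children", "is_word")

    def __init__(self):
        self.children = {}
        self.is_word = False

class Trie:
    def __init__(self, words=()):
        self.root = TrieNode()
        for w in words:
            self.insert(w)

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def contains(self, word):
        node = self.root
        for ch in word:
            node = node.children.get(ch)
            if node is None:
                return False
        return node.is_word

# Query each source word before translating; words not found can then
# be handled separately (e.g. protected with a placeholder as above).
vocab = Trie(["the", "cat", "sat"])
sentence = "the cat sat on the mat".split()
unknown = [w for w in sentence if not vocab.contains(w)]
print(unknown)  # ['on', 'mat']
```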