doc2vec gives you an embedding for an entire document (or a batch of sentences), capturing document-level context information in that way.
You could always compute this document representation as a first step, before translating the document with either a Transformer or an RNN encoder-decoder system, and then modify the model to take this document vector into account.
You can also play around with the width of the context you use, taking either the entire document or a batch of sentences, bearing in mind that the topic can vary within a single document.
As far as I know, ELMo embeddings only capture sentence context, that is, they ignore inter-sentence information. Recall that NMT systems already handle intra-sentence context through the way they build the source sentence representations before passing them to the decoder. However, I am not aware of any approach that uses ELMo embeddings to check whether changing these word representations helps systems better exploit sentence context information.
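If someone wanted to try that, one possible (untested) design is to concatenate pretrained contextual word vectors with the encoder's learned embeddings before the RNN. The PyTorch sketch below uses random tensors as stand-ins for real ELMo outputs; all names and dimensions are my own assumptions:

```python
# Hedged sketch (not an existing system): letting pretrained contextual
# word vectors (ELMo-style; random stand-ins here) influence an RNN
# encoder by concatenating them with learned token embeddings.
import torch
import torch.nn as nn

class ContextAugmentedEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim, ctx_dim, hidden_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # The GRU consumes learned embedding + external contextual vector.
        self.rnn = nn.GRU(emb_dim + ctx_dim, hidden_dim, batch_first=True)

    def forward(self, token_ids, ctx_vectors):
        # token_ids: (batch, seq); ctx_vectors: (batch, seq, ctx_dim)
        x = torch.cat([self.embed(token_ids), ctx_vectors], dim=-1)
        outputs, hidden = self.rnn(x)
        return outputs, hidden

# Toy run: 2 sentences of length 5, with 1024-dim "contextual" vectors
# standing in for real pretrained embeddings.
enc = ContextAugmentedEncoder(vocab_size=100, emb_dim=64,
                              ctx_dim=1024, hidden_dim=128)
tokens = torch.randint(0, 100, (2, 5))
ctx = torch.randn(2, 5, 1024)
outputs, hidden = enc(tokens, ctx)
print(outputs.shape)  # torch.Size([2, 5, 128])
```

The decoder would then attend over `outputs` as in a standard attentional encoder-decoder; whether the extra context actually helps is exactly the open question.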
I am quite interested in document-level MT, so feel free to send me a PM any time to keep discussing this topic.