When using OpenNMT-tf, how can I add pretrained embeddings? Can someone help me?
Hello,
In the model configuration, you should first declare that your WordEmbedder
uses pretrained embeddings, e.g.:
source_inputter=onmt.inputters.WordEmbedder(
    vocabulary_file_key="source_words_vocabulary",
    embedding_file_key="source_words_embeddings")
(See the WordEmbedder constructor for additional options.)
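If it helps, here is a minimal sketch of a full custom model definition that such an inputter could live in. The encoder/decoder classes, layer counts, and sizes below are only illustrative placeholders (use whatever architecture you actually need), and the exact constructor arguments can differ between OpenNMT-tf versions:

# custom_model.py -- illustrative sketch only; architecture and sizes are placeholders.
import opennmt as onmt

def model():
  return onmt.models.SequenceToSequence(
      # Source side: embeddings are loaded from the file referenced by
      # "source_words_embeddings" in the data configuration.
      source_inputter=onmt.inputters.WordEmbedder(
          vocabulary_file_key="source_words_vocabulary",
          embedding_file_key="source_words_embeddings"),
      # Target side: here the embeddings are simply trained from scratch.
      target_inputter=onmt.inputters.WordEmbedder(
          vocabulary_file_key="target_words_vocabulary",
          embedding_size=512),
      encoder=onmt.encoders.BidirectionalRNNEncoder(
          num_layers=2,
          num_units=512),
      decoder=onmt.decoders.AttentionalRNNDecoder(
          num_layers=2,
          num_units=512))

The training run then points to this file via the --model command line option.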
In the run configuration, you should give the path to the pretrained embeddings, e.g.:
data:
  source_words_vocabulary: data/vocab.txt
  source_words_embeddings: data/glove.txt
See also the documentation of the load_pretrained_embeddings
function to learn about the file format and loading behavior.
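To give an idea of the expected format: a GloVe-style text file has one line per token, the token followed by its vector values separated by spaces, while a word2vec-style text file adds a header line with the vocabulary size and dimension (whether a header is expected is controlled by a header option on the embedding loader; check the documentation for your version). The two lines below use made-up values just to show the shape:

the 0.418 0.250 -0.412 0.122
cat -0.096 0.437 0.041 -0.294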
Can I use target_words_embeddings instead of source? What possible changes could it make to the output?
Can I use target_words_embeddings instead of source?
Yes, it’s the same syntax for the target.
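For example, assuming the target WordEmbedder in your model sets embedding_file_key="target_words_embeddings", the data block could look like this (the file paths are placeholders):

data:
  source_words_vocabulary: data/src_vocab.txt
  target_words_vocabulary: data/tgt_vocab.txt
  target_words_embeddings: data/glove.tgt.txt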
What possible changes could it make to the output?
Most likely, it will only make the training converge faster.