Hello, I was just wondering if using word2vec embeddings is exactly the same process as with GloVe.
My word2vec text file has one header line stating the vocab size and vector size, followed by one line per word vector (of size 100 in this case). Also, I have only created embeddings for the target language.
So far so good; however, do I need to do anything extra to this file, or can I just run this:
```
./tools/embeddings_to_torch.py -emb_file "myWord2vec_emb.txt" \
    -dict_file "data/data.vocab.pt"
```
Then do I just add these lines to the train.py command:
```
-word_vec_size 100 \
-pre_word_vecs_enc "data/embeddings.enc.pt" \
-pre_word_vecs_dec "data/embeddings.dec.pt"
```