I am trying to use pre-trained subword embeddings for my NMT model, the same way normal embeddings are used. I am getting this:
enc: 1 match, 31194 missing, (0.00%)
dec: 3 match, 29067 missing, (0.01%)
If the match is this low, what happens next? Do my original embeddings go into training since an almost 0% match was found, or are the pretrained embeddings still put into training?
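To clarify what I'm asking: I assume the conversion step works roughly like the sketch below, where matched tokens get the pretrained vector and everything else keeps a random initialisation, and I want to know which branch the missing tokens actually take. This is only an illustrative sketch, not the actual OpenNMT code; the function name and signature are made up:

```python
import numpy as np

def build_embedding_matrix(model_vocab, pretrained, dim, scale=0.1):
    """Illustrative sketch: copy pretrained vectors where tokens match."""
    # Start from small random values; tokens without a pretrained
    # vector keep this random initialisation.
    matrix = np.random.uniform(-scale, scale, (len(model_vocab), dim))
    matched = 0
    for i, tok in enumerate(model_vocab):
        vec = pretrained.get(tok)
        if vec is not None:
            matrix[i] = vec  # matched token: overwrite with pretrained vector
            matched += 1
    missing = len(model_vocab) - matched
    print(f"{matched} match, {missing} missing, ({matched / len(model_vocab):.2%})")
    return matrix
```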
@guillaumekln @tel34 @vince62s any help?
Sorry, can you clarify your question?
@guillaumekln I mean that when using pretrained embeddings I get the results below, i.e. an almost 0% match for both the encoder and decoder embeddings. Does that mean I won't get any advantage from using pre-trained embeddings? Since the match is 0%, will the randomly initialised weights be used rather than the pretrained weights?
enc: 1 match, 31194 missing, (0.00%)
dec: 3 match, 29067 missing, (0.01%)
Correct, it is useless to use pretrained embeddings with so little overlap.
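If you want to check the overlap yourself before training, a quick script like the one below works. The file names are placeholders for your own vocab and embedding files, and the embedding file is assumed to be in word2vec text format (one token per line followed by its values):

```python
def read_vocab(path):
    # One token per line for the model vocab; for a word2vec-style .vec
    # file the token is the first whitespace-separated field.
    tokens = set()
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.split()
            if parts:
                tokens.add(parts[0])
    return tokens

# Placeholder file names: substitute your own paths.
model_vocab = read_vocab("vocab.src.txt")
emb_vocab = read_vocab("embeddings.vec")  # a header line, if any, adds one bogus token
overlap = model_vocab & emb_vocab
print(f"{len(overlap)}/{len(model_vocab)} model tokens have a pretrained vector "
      f"({len(overlap) / len(model_vocab):.2%})")
```

If the overlap is near zero, the embeddings were almost certainly trained with a different tokenisation than the model's subword vocabulary, so the tokens simply never coincide. The usual fix is to train the embeddings on text segmented with the same subword (e.g. BPE) model, so both sides share the same token inventory.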