Does word embedding size change when we use word features?

Hi everyone, as I mention in the title, does the word embedding size change when we add word features?
For example, I use word features such as lemma, POS tag, word cluster, and dependency parsing with embedding sizes 128, 64, 64, and 64 respectively. So if the default embedding dim is 500, does the total embedding dim become 500 + 128 + 64 * 3 = 820 when we use word features?


Yes, they are concatenated.

I’ve read this paper: "Linguistic Input Features Improve Neural Machine Translation". On page 86, bottom-right paragraph, it says:

To ensure that performance improvements are not simply due to an increase in the number of model parameters, we keep the total size of the embedding layer fixed to 500

So I’m kinda confused: if OpenNMT uses linguistic features, does the embedding dim increase by the feature dim sizes?

As I said, the embeddings are concatenated. It’s left to the user to control the embedding sizes with -word_vec_size and -feat_vec_size.
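To make the concatenation concrete, here is a minimal NumPy sketch (the vocabulary sizes and random tables are made up for illustration; this is not OpenNMT's actual implementation). Each feature has its own embedding table, the lookups are concatenated along the last axis, and the total dimension is simply the sum of the per-feature sizes:

```python
import numpy as np

# Hypothetical embedding sizes matching the example in the question:
# word (500), lemma (128), POS tag (64), word cluster (64), dep label (64).
emb_sizes = {"word": 500, "lemma": 128, "pos": 64, "cluster": 64, "dep": 64}

# Made-up vocabulary size and randomly initialized tables, one per feature.
vocab = 100
rng = np.random.default_rng(0)
tables = {name: rng.standard_normal((vocab, dim)) for name, dim in emb_sizes.items()}

def embed(token_ids):
    # token_ids: dict mapping feature name -> id array of shape (seq_len,).
    # Look up each feature's table, then concatenate along the last axis.
    return np.concatenate([tables[name][token_ids[name]] for name in emb_sizes],
                          axis=-1)

# A dummy sentence of 7 tokens (all ids zero, just to show the shapes).
ids = {name: np.zeros(7, dtype=int) for name in emb_sizes}
vec = embed(ids)
print(vec.shape)  # (7, 820): 500 + 128 + 64 * 3
```

So if you want to keep the total at 500 like in the paper, you need to shrink -word_vec_size so that word size plus all feature sizes sums to 500.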