ValueError: Cannot use sin/cos positional encoding with odd dim (got dim=515)

Hi,
I’m getting the following error when training a transformer with source features. I only have one feature per token.

ValueError: Cannot use sin/cos positional encoding with odd dim (got dim=515)

You probably want to set both --word_vec_size and --feat_vec_size so that the total embedding size is even.

So, if --word_vec_size is 512 and the feature has 3 possible values, the default feature embedding size is 3 and the total embedding size is 512 + 3 = 515. Therefore, I should set --feat_vec_size to, for example, 8 to get a total dimension of 520. Am I right?

Yes, this should work.
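To see why the total must be even: the standard sinusoidal encoding (Vaswani et al., "Attention Is All You Need") fills the even indices with sines and the odd indices with cosines, so the dimension must split into sin/cos pairs. Below is a minimal NumPy sketch of that scheme (not OpenNMT's actual implementation; the function name is mine):

```python
import numpy as np

def sinusoidal_pe(dim, max_len=10):
    """Standard sin/cos positional encoding; requires an even dim."""
    if dim % 2 != 0:
        raise ValueError(
            f"Cannot use sin/cos positional encoding with odd dim (got dim={dim})"
        )
    position = np.arange(max_len)[:, None]                        # (max_len, 1)
    div_term = np.exp(np.arange(0, dim, 2) * -(np.log(10000.0) / dim))
    pe = np.zeros((max_len, dim))
    pe[:, 0::2] = np.sin(position * div_term)  # sines at even indices
    pe[:, 1::2] = np.cos(position * div_term)  # cosines at odd indices
    return pe

sinusoidal_pe(512 + 8)    # total 520 is even -> fine
# sinusoidal_pe(512 + 3)  # total 515 is odd -> raises the error above
```

So with --word_vec_size 512 and --feat_vec_size 8, the concatenated embedding is 520-dimensional and the encoding builds without error.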

Thanks @guillaumekln !