How can I make the position_encoding Tensor a trainable Parameter?

I tried to create a new class for a trainable PositionalEncoding in the embeddings module.
It seems to work, but when I load the saved .pt model I cannot find the corresponding trainable parameters.
They do not exist in model.state_dict()! So I am not sure whether the new trainable tensor is actually being trained or not.
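
For context, this is roughly how I inspect the checkpoint after training (the file name is a placeholder; as far as I know OpenNMT-py keeps the model weights under the "model" key, so adjust this if your checkpoint is laid out differently):

import torch

checkpoint = torch.load("model_step_10000.pt", map_location="cpu")  # placeholder path
state_dict = checkpoint["model"] if "model" in checkpoint else checkpoint

# The word embedding weights are there ...
print([k for k in state_dict if "emb_luts" in k])
# ... but nothing corresponding to the trainable positional encoding:
print([k for k in state_dict if ".pe" in k])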
Below is the code of the new class:

import math
import torch
import torch.nn as nn


class PositionalEncoding_Train(nn.Module):
    def __init__(self, dropout, dim, max_len=5000):
        if dim % 2 != 0:
            raise ValueError("Cannot use trainable positional encoding with "
                             "odd dim (got dim={:d})".format(dim))
        # Random table in [-1, 1] that is meant to be learned during training.
        pe = torch.rand(max_len, dim) * 2 - 1
        pe = nn.Parameter(pe)
        self.pe = pe.unsqueeze(1)
        super(PositionalEncoding_Train, self).__init__()
        self.dropout = nn.Dropout(p=dropout)
        self.dim = dim

    def forward(self, emb, step=None):
        """Embed inputs.

        Args:
            emb (FloatTensor): Sequence of word vectors
                (seq_len, batch_size, self.dim)
            step (int or NoneType): If stepwise (seq_len = 1), use
                the encoding for this position.
        """
        emb = emb * math.sqrt(self.dim)
        if step is None:
            # `device` is defined elsewhere in my script.
            emb = emb + self.pe[:emb.size(0)].to(device)
            # emb = emb + self.pe[:emb.size(0)]
        else:
            emb = emb + self.pe[step].to(device)
            # emb = emb + self.pe[step]
        emb = self.dropout(emb)
        return emb
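
On its own the module runs without errors, which is why it looked fine at first. A quick sanity check (the sizes are arbitrary and device is just set to CPU here):

device = torch.device("cpu")  # in my real script this is the training device

pe_module = PositionalEncoding_Train(dropout=0.1, dim=512)
dummy = torch.randn(20, 4, 512)            # (seq_len, batch_size, dim)
print(pe_module(dummy).shape)              # torch.Size([20, 4, 512])

# ... but it exposes no trainable parameters at all:
print(list(pe_module.named_parameters()))  # []
print(pe_module.state_dict().keys())       # odict_keys([])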

It is added into Embeddings.make_embedding, but unlike the "emb_luts" module it does not show up in model.state_dict().
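
For reference, this is roughly what my change inside Embeddings.__init__ looks like (simplified; the surrounding lines are only an approximation of the actual OpenNMT-py Embeddings code):

# emb_luts is registered exactly as before, so it shows up in state_dict():
self.make_embedding = nn.Sequential()
self.make_embedding.add_module("emb_luts", emb_luts)

# My change: swap the sinusoidal PositionalEncoding for the trainable one.
pe = PositionalEncoding_Train(dropout, self.embedding_size)
self.make_embedding.add_module("pe", pe)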

I hope someone can point out what I missed.