When the encoder and decoder use different network types, an error occurs.

encoder: transformer
decoder: rnn
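
For context, the log below is presumably from a run launched roughly like this (the config file name is an assumption on my part; the full config is pasted further down):

onmt_build_vocab -config ch-zh.yaml -n_sample -1
onmt_train -config ch-zh.yaml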

[2024-03-02 15:27:57,550 INFO] Missing transforms field for corpus_1 data, set to default: [].
[2024-03-02 15:27:57,550 WARNING] Corpus corpus_1's weight should be given. We default it to 1 for you.
[2024-03-02 15:27:57,550 INFO] Missing transforms field for valid data, set to default: [].
[2024-03-02 15:27:57,551 INFO] Parsed 2 corpora from -data.
[2024-03-02 15:27:57,551 INFO] Get special vocabs from Transforms: {'src': set(), 'tgt': set()}.
[2024-03-02 15:27:57,551 INFO] Loading vocab from text file…
[2024-03-02 15:27:57,551 INFO] Loading src vocabulary from run/ch-zh.vocab.src
[2024-03-02 15:27:57,772 INFO] Loaded src vocab has 45992 tokens.
[2024-03-02 15:27:57,832 INFO] Loading tgt vocabulary from run/ch-zh.vocab.tgt
[2024-03-02 15:27:57,960 INFO] Loaded tgt vocab has 30464 tokens.
[2024-03-02 15:27:57,992 INFO] Building fields with vocab in counters…
[2024-03-02 15:27:58,215 INFO] * tgt vocab size: 30468.
[2024-03-02 15:27:58,341 INFO] * src vocab size: 45994.
[2024-03-02 15:27:58,345 INFO] * src vocab size = 45994
[2024-03-02 15:27:58,345 INFO] * tgt vocab size = 30468
[2024-03-02 15:27:58,362 INFO] Building model…
[2024-03-02 15:28:03,938 INFO] NMTModel(
  (encoder): TransformerEncoder(
    (embeddings): Embeddings(
      (make_embedding): Sequential(
        (emb_luts): Elementwise(
          (0): Embedding(45994, 512, padding_idx=1)
        )
        (pe): PositionalEncoding(
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
    )
    (transformer): ModuleList(
      (0-5): 6 x TransformerEncoderLayer(
        (self_attn): MultiHeadedAttention(
          (linear_keys): Linear(in_features=512, out_features=512, bias=True)
          (linear_values): Linear(in_features=512, out_features=512, bias=True)
          (linear_query): Linear(in_features=512, out_features=512, bias=True)
          (softmax): Softmax(dim=-1)
          (dropout): Dropout(p=0.1, inplace=False)
          (final_linear): Linear(in_features=512, out_features=512, bias=True)
        )
        (feed_forward): PositionwiseFeedForward(
          (w_1): Linear(in_features=512, out_features=2048, bias=True)
          (w_2): Linear(in_features=2048, out_features=512, bias=True)
          (layer_norm): LayerNorm((512,), eps=1e-06, elementwise_affine=True)
          (dropout_1): Dropout(p=0.1, inplace=False)
          (relu): ReLU()
          (dropout_2): Dropout(p=0.1, inplace=False)
        )
        (layer_norm): LayerNorm((512,), eps=1e-06, elementwise_affine=True)
        (dropout): Dropout(p=0.1, inplace=False)
      )
    )
    (layer_norm): LayerNorm((512,), eps=1e-06, elementwise_affine=True)
  )
  (decoder): InputFeedRNNDecoder(
    (embeddings): Embeddings(
      (make_embedding): Sequential(
        (emb_luts): Elementwise(
          (0): Embedding(30468, 512, padding_idx=1)
        )
        (pe): PositionalEncoding(
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
    )
    (dropout): Dropout(p=0.1, inplace=False)
    (rnn): StackedLSTM(
      (dropout): Dropout(p=0.1, inplace=False)
      (layers): ModuleList(
        (0): LSTMCell(1024, 512)
        (1-5): 5 x LSTMCell(512, 512)
      )
    )
    (attn): GlobalAttention(
      (linear_in): Linear(in_features=512, out_features=512, bias=False)
      (linear_out): Linear(in_features=1024, out_features=512, bias=False)
    )
  )
  (generator): Sequential(
    (0): Linear(in_features=512, out_features=30468, bias=True)
    (1): Cast()
    (2): LogSoftmax(dim=-1)
  )
)
[2024-03-02 15:28:03,943 INFO] encoder: 42464256
[2024-03-02 15:28:03,943 INFO] decoder: 45672196
[2024-03-02 15:28:03,943 INFO] * number of parameters: 88136452
[2024-03-02 15:28:03,949 INFO] Starting training on GPU: [0]
[2024-03-02 15:28:03,949 INFO] Start training loop and validate every 1000 steps…
[2024-03-02 15:28:03,950 INFO] corpus_1's transforms: TransformPipe()
[2024-03-02 15:28:03,951 INFO] Loading ParallelCorpus(dataset/src-train.txt, dataset/tgt-train.txt, align=None)…
[2024-03-02 15:28:07,628 INFO] Loading ParallelCorpus(dataset/src-train.txt, dataset/tgt-train.txt, align=None)…
Traceback (most recent call last):
  File "/usr/local/miniconda3/bin/onmt_train", line 8, in <module>
    sys.exit(main())
  File "/usr/local/miniconda3/lib/python3.8/site-packages/onmt/bin/train.py", line 169, in main
    train(opt)
  File "/usr/local/miniconda3/lib/python3.8/site-packages/onmt/bin/train.py", line 154, in train
    train_process(opt, device_id=0)
  File "/usr/local/miniconda3/lib/python3.8/site-packages/onmt/train_single.py", line 102, in main
    trainer.train(
  File "/usr/local/miniconda3/lib/python3.8/site-packages/onmt/trainer.py", line 242, in train
    self._gradient_accumulation(
  File "/usr/local/miniconda3/lib/python3.8/site-packages/onmt/trainer.py", line 366, in _gradient_accumulation
    outputs, attns = self.model(
  File "/usr/local/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/miniconda3/lib/python3.8/site-packages/onmt/models/model.py", line 49, in forward
    dec_out, attns = self.decoder(dec_in, memory_bank,
  File "/usr/local/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/miniconda3/lib/python3.8/site-packages/onmt/decoders/decoder.py", line 213, in forward
    dec_state, dec_outs, attns = self._run_forward_pass(
  File "/usr/local/miniconda3/lib/python3.8/site-packages/onmt/decoders/decoder.py", line 391, in _run_forward_pass
    rnn_output, dec_state = self.rnn(decoder_input, dec_state)
  File "/usr/local/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/miniconda3/lib/python3.8/site-packages/onmt/models/stacked_rnn.py", line 23, in forward
    h_0, c_0 = hidden
ValueError: not enough values to unpack (expected 2, got 1)
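
The unpack itself is easy to reproduce outside of OpenNMT. My reading (which may be wrong) is that the transformer encoder has no recurrent (h, c) state to hand to the LSTM decoder, so the decoder's initial hidden state ends up as a single tensor instead of an (h, c) pair, and `h_0, c_0 = hidden` then fails. A minimal sketch, not OpenNMT code:

import torch

# With an LSTM encoder the decoder state is an (h, c) pair, so this unpacks fine:
lstm_state = (torch.zeros(6, 2, 512), torch.zeros(6, 2, 512))
h_0, c_0 = lstm_state

# A transformer encoder produces no such pair; if the decoder is initialised with
# a single tensor wrapped in a 1-tuple, the same unpack raises the error above:
transformer_state = (torch.zeros(2, 512),)
try:
    h_0, c_0 = transformer_state
except ValueError as err:
    print(err)  # not enough values to unpack (expected 2, got 1)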

config

batch_type: "sents"
batch_size: 128
valid_batch_size: 64
accum_count: [4]
accum_steps: [0]

Optimization

model_dtype: "fp16"
optim: "adam"
learning_rate: 1e-4
warmup_steps: 8000

decay_method: "noam"

adam_beta1: 0.9
adam_beta2: 0.998
label_smoothing: 0.1
normalization: "sents"

Model

encoder_type: transformer
decoder_type: rnn
position_encoding: true
global_attention: general
enc_layers: 6
dec_layers: 6
word_vec_size: 512
rnn_size: 512
head: 4
dropout_steps: [0]
dropout: [0.1]
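
(The data/vocab part of the config is not pasted here; based on the paths in the log it presumably looks roughly like the following, with the validation paths only a guess:)

data:
    corpus_1:
        path_src: dataset/src-train.txt
        path_tgt: dataset/tgt-train.txt
    valid:
        path_src: dataset/src-val.txt   # assumed, not shown in the log
        path_tgt: dataset/tgt-val.txt   # assumed, not shown in the log
src_vocab: run/ch-zh.vocab.src
tgt_vocab: run/ch-zh.vocab.tgt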

When the encoder and decoder use the same network type, training runs normally.
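
That is, with everything else unchanged, a matched pair such as

encoder_type: transformer
decoder_type: transformer

(or rnn for both) trains without this error.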
Hoping to get some help, thanks.