ValueError: not enough values to unpack (expected 2, got 1)

After the build vocab step, when I start NMT training I get the error below:
Loading vocab from text file…
[2021-02-09 04:02:27,961 INFO] Loading src vocabulary from data/en-to-bn-aai4b-1.1/en-to-bn-aai4b-1.1.vocab.src
[2021-02-09 04:02:28,080 INFO] Loaded src vocab has 41327 tokens.
Traceback (most recent call last):
  File "/home/ubuntu/python_virtual_env/nmt_training_env/bin/onmt_train", line 8, in <module>
    sys.exit(main())
  File "/home/ubuntu/python_virtual_env/nmt_training_env/lib/python3.6/site-packages/onmt/bin/train.py", line 169, in main
    train(opt)
  File "/home/ubuntu/python_virtual_env/nmt_training_env/lib/python3.6/site-packages/onmt/bin/train.py", line 103, in train
    checkpoint, fields, transforms_cls = _init_train(opt)
  File "/home/ubuntu/python_virtual_env/nmt_training_env/lib/python3.6/site-packages/onmt/bin/train.py", line 80, in _init_train
    fields, transforms_cls = prepare_fields_transforms(opt)
  File "/home/ubuntu/python_virtual_env/nmt_training_env/lib/python3.6/site-packages/onmt/bin/train.py", line 34, in prepare_fields_transforms
    opt, src_specials=specials['src'], tgt_specials=specials['tgt'])
  File "/home/ubuntu/python_virtual_env/nmt_training_env/lib/python3.6/site-packages/onmt/inputters/fields.py", line 34, in build_dynamic_fields
    min_freq=opts.src_words_min_frequency)
  File "/home/ubuntu/python_virtual_env/nmt_training_env/lib/python3.6/site-packages/onmt/inputters/inputter.py", line 309, in _load_vocab
    for token, count in vocab:
ValueError: not enough values to unpack (expected 2, got 1)

I haven't been able to figure out why this is happening.
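For context, the loop in `_load_vocab` expects every vocab entry to unpack into a (token, count) pair. A minimal sketch of how a one-field line produces exactly this message:

```python
# Minimal reproduction: unpacking a 1-tuple where a (token, count) pair is
# expected raises the same ValueError the trainer prints.
vocab = [("hello", "120"), ("world",)]  # second entry mimics a malformed line

try:
    for token, count in vocab:
        pass
except ValueError as err:
    print(err)  # not enough values to unpack (expected 2, got 1)
```

So somewhere in the vocab file there is most likely a line that does not split into two fields.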

data:
    corpus:
        path_src: data/src_train.txt
        path_tgt: data/tgt_train.txt
        transforms: [sentencepiece, filtertoolong]
    valid:
        path_src: data/src_dev.txt
        path_tgt: data/tgt_dev.txt
        transforms: [sentencepiece, filtertoolong]

src_subword_model: model/sentencepiece_models/en_32000.model
tgt_subword_model: model/sentencepiece_models/ta_32000.model

src_seq_length: 200
tgt_seq_length: 200

skip_empty_level: silent

save_model: model/model_
save_checkpoint_steps: 10000
train_steps: 150000
valid_steps: 10000
tensorboard: true
tensorboard_log_dir: runs/onmt

world_size: 1
gpu_ranks: [0]
batch_type: "tokens"
batch_size: 4096
max_generator_batches: 2
accum_count: [2]

normalization: "tokens"
optim: "adam"
learning_rate: 0.25
adam_beta2: 0.998
decay_method: "noam"
warmup_steps: 8000
max_grad_norm: 0
param_init: 0
param_init_glorot: true
label_smoothing: 0.1

encoder_type: transformer
decoder_type: transformer
layers: 6
heads: 8
rnn_size: 512
word_vec_size: 512
transformer_ff: 2048
dropout: [0.1]
position_encoding: true

Yes, this worked for me too. Quite surprising, I was getting blanks in place of


I'm still having the same problem with the following:

!spm_train --input=train.txt --model_prefix=myspm --vocab_size=16000 --character_coverage=1 --model_type=bpe
!onmt_build_vocab -config basic.yaml -n_sample -1
!onmt_train -config basic.yaml

The first two commands run fine, but onmt_train leads to the following error:

[2021-02-22 16:35:17,337 INFO] Parsed 2 corpora from -data.
[2021-02-22 16:35:17,337 INFO] Get special vocabs from Transforms: {'src': set(), 'tgt': set()}.
[2021-02-22 16:35:17,337 INFO] Loading vocab from text file…
[2021-02-22 16:35:17,337 INFO] Loading src vocabulary from data/vocab.src
[2021-02-22 16:35:17,360 INFO] Loaded src vocab has 10600 tokens.
Traceback (most recent call last):
  File "/usr/local/bin/onmt_train", line 33, in <module>
    sys.exit(load_entry_point('OpenNMT-py', 'console_scripts', 'onmt_train')())
  File "/content/OpenNMT-py/onmt/bin/train.py", line 169, in main
    train(opt)
  File "/content/OpenNMT-py/onmt/bin/train.py", line 103, in train
    checkpoint, fields, transforms_cls = _init_train(opt)
  File "/content/OpenNMT-py/onmt/bin/train.py", line 80, in _init_train
    fields, transforms_cls = prepare_fields_transforms(opt)
  File "/content/OpenNMT-py/onmt/bin/train.py", line 34, in prepare_fields_transforms
    opt, src_specials=specials['src'], tgt_specials=specials['tgt'])
  File "/content/OpenNMT-py/onmt/inputters/fields.py", line 34, in build_dynamic_fields
    min_freq=opts.src_words_min_frequency)
  File "/content/OpenNMT-py/onmt/inputters/inputter.py", line 309, in _load_vocab
    for token, count in vocab:
ValueError: not enough values to unpack (expected 2, got 1)

I have gone through both vocab.src and vocab.tgt and they both seem fine, i.e. two columns with no blanks. The files are large, though, so it's possible I missed something.
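Since the files are too large to eyeball reliably, here is a sketch of a checker, assuming the format onmt_build_vocab writes (one whitespace-separated token/count pair per line), that flags any line that would fail the unpack:

```python
def find_bad_lines(path):
    """Return (line_number, repr_of_line) for lines that don't split into
    exactly two whitespace-separated fields, mirroring the loader's
    (token, count) unpack expectation."""
    bad = []
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            if len(line.strip().split()) != 2:
                # repr() exposes invisible characters such as U+00A0 or U+200B
                bad.append((lineno, repr(line)))
    return bad
```

Running it over both vocab.src and vocab.tgt should pinpoint any offending line; the repr() output makes otherwise invisible space characters visible.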

Colab cell output from onmt_build_vocab is:

[2021-02-22 16:33:57,047 INFO] Counter vocab from -1 samples.
[2021-02-22 16:33:57,047 INFO] n_sample=-1: Build vocab on full datasets.
[2021-02-22 16:33:57,073 INFO] corpus_1’s transforms: TransformPipe(SentencePieceTransform(share_vocab=False, src_subword_model=myspm.model, tgt_subword_model=myspm.model, src_subword_alpha=0.0, tgt_subword_alpha=0.0, src_subword_vocab=, tgt_subword_vocab=, src_vocab_threshold=0, tgt_vocab_threshold=0, src_subword_nbest=1, tgt_subword_nbest=1), FilterTooLongTransform(src_seq_length=150, tgt_seq_length=150))
[2021-02-22 16:33:57,077 INFO] Loading ParallelCorpus(src-train.txt, tgt-train.txt, align=None)…
[2021-02-22 16:34:24,084 INFO] Transform statistics for corpus_1:
Filtred sentence: 3 sent
Subword(SP/Tokenizer): 6383580 → 7504481 tok
[2021-02-22 16:34:24,109 INFO] Counters src:10600
[2021-02-22 16:34:24,109 INFO] Counters tgt:12956

Is there a fix for this, a workaround or am I missing something ? Thanks. Séamus.

Check your vocabulary file: there is most likely a line containing a special space character that survives the line.strip() and triggers the error.
[screenshot of the offending vocab line]
The fix is to cleanse your data set.
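As a quick workaround, the cleansing can also be applied to the generated vocab file itself. A hedged sketch, again assuming the token/count-per-line format, that drops any line without exactly two fields:

```python
def clean_vocab(in_path, out_path):
    """Copy a vocab file, dropping lines that would break the
    (token, count) unpack; returns (kept, dropped) counts."""
    kept = dropped = 0
    with open(in_path, encoding="utf-8") as src, \
         open(out_path, "w", encoding="utf-8") as dst:
        for line in src:
            if len(line.split()) == 2:  # token and count, nothing else
                dst.write(line)
                kept += 1
            else:
                dropped += 1
    return kept, dropped
```

The root cause is usually a stray whitespace character in the raw corpus, so cleaning the training text before running spm_train and onmt_build_vocab is the more durable fix.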