Is it normal to see "Weighted corpora loaded so far" in a loop during fine-tuning on a very small dataset?

Hello, I’m trying to fine-tune a tiny Transformer translation model on a small dataset of high-quality parallel sentences taken from FLORES-200 (I know this dataset is meant for evaluation, but I need high-quality parallel sentences, and this model will not be used in production).

I have already trained the model on 200k sentences from Wikipedia, and now I’m trying to fine-tune it on 2k sentences, but I get these logs in a loop:

[2023-07-06 21:59:25,144 INFO] Weighted corpora loaded so far:
                        * corpus_1: 65
[2023-07-06 21:59:25,145 INFO] Weighted corpora loaded so far:
                        * corpus_1: 64
[2023-07-06 21:59:25,149 INFO] Weighted corpora loaded so far:
                        * corpus_1: 66
[2023-07-06 21:59:25,151 INFO] Weighted corpora loaded so far:
                        * corpus_1: 65
[2023-07-06 21:59:25,155 INFO] Weighted corpora loaded so far:
                        * corpus_1: 67
[2023-07-06 21:59:25,157 INFO] Weighted corpora loaded so far:
                        * corpus_1: 66
[2023-07-06 21:59:25,163 INFO] Weighted corpora loaded so far:
                        * corpus_1: 68
[2023-07-06 21:59:25,163 INFO] Weighted corpora loaded so far:
                        * corpus_1: 67
[2023-07-06 21:59:25,169 INFO] Weighted corpora loaded so far:
                        * corpus_1: 69
[2023-07-06 21:59:25,171 INFO] Weighted corpora loaded so far:
                        * corpus_1: 68
[2023-07-06 21:59:25,174 INFO] Weighted corpora loaded so far:
                        * corpus_1: 70
[2023-07-06 21:59:25,177 INFO] Weighted corpora loaded so far:
                        * corpus_1: 69
[2023-07-06 21:59:25,180 INFO] Weighted corpora loaded so far:
                        * corpus_1: 71
[2023-07-06 21:59:25,182 INFO] Weighted corpora loaded so far:
                        * corpus_1: 70
[2023-07-06 21:59:25,187 INFO] Weighted corpora loaded so far:
                        * corpus_1: 72
[2023-07-06 21:59:25,190 INFO] Weighted corpora loaded so far:
                        * corpus_1: 71
[2023-07-06 21:59:25,193 INFO] Weighted corpora loaded so far:
                        * corpus_1: 73
[2023-07-06 21:59:25,196 INFO] Weighted corpora loaded so far:
                        * corpus_1: 72
[2023-07-06 21:59:25,199 INFO] Weighted corpora loaded so far:
                        * corpus_1: 74
[2023-07-06 21:59:25,202 INFO] Weighted corpora loaded so far:
                        * corpus_1: 73
[2023-07-06 21:59:25,207 INFO] Weighted corpora loaded so far:
                        * corpus_1: 75
[2023-07-06 21:59:25,208 INFO] Weighted corpora loaded so far:
                        * corpus_1: 74
[2023-07-06 21:59:25,212 INFO] Weighted corpora loaded so far:
                        * corpus_1: 76
[2023-07-06 21:59:25,215 INFO] Weighted corpora loaded so far:
                        * corpus_1: 75
[2023-07-06 21:59:25,218 INFO] Weighted corpora loaded so far:
                        * corpus_1: 77
[2023-07-06 21:59:25,222 INFO] Weighted corpora loaded so far:
                        * corpus_1: 76
[2023-07-06 21:59:25,224 INFO] Weighted corpora loaded so far:
                        * corpus_1: 78
[2023-07-06 21:59:25,227 INFO] Weighted corpora loaded so far:
                        * corpus_1: 77
[2023-07-06 21:59:25,233 INFO] Weighted corpora loaded so far:
                        * corpus_1: 78
[2023-07-06 21:59:25,319 INFO] Weighted corpora loaded so far:
                        * corpus_1: 79
[2023-07-06 21:59:25,324 INFO] Weighted corpora loaded so far:
                        * corpus_1: 80
[2023-07-06 21:59:25,328 INFO] Weighted corpora loaded so far:
                        * corpus_1: 79
[2023-07-06 21:59:25,330 INFO] Weighted corpora loaded so far:
                        * corpus_1: 81
[2023-07-06 21:59:25,334 INFO] Weighted corpora loaded so far:
                        * corpus_1: 80
[2023-07-06 21:59:25,336 INFO] Weighted corpora loaded so far:
                        * corpus_1: 82
[2023-07-06 21:59:25,340 INFO] Weighted corpora loaded so far:
                        * corpus_1: 81
[2023-07-06 21:59:25,343 INFO] Weighted corpora loaded so far:
                        * corpus_1: 83
[2023-07-06 21:59:25,347 INFO] Weighted corpora loaded so far:
                        * corpus_1: 82
[2023-07-06 21:59:25,349 INFO] Weighted corpora loaded so far:
                        * corpus_1: 84
[2023-07-06 21:59:25,353 INFO] Weighted corpora loaded so far:
                        * corpus_1: 83
[2023-07-06 21:59:25,355 INFO] Weighted corpora loaded so far:
                        * corpus_1: 85
[2023-07-06 21:59:25,359 INFO] Weighted corpora loaded so far:
                        * corpus_1: 84
[2023-07-06 21:59:25,362 INFO] Weighted corpora loaded so far:
                        * corpus_1: 86
[2023-07-06 21:59:25,365 INFO] Weighted corpora loaded so far:
                        * corpus_1: 85
[2023-07-06 21:59:25,368 INFO] Weighted corpora loaded so far:
                        * corpus_1: 87

and after a moment the training process just stops, without ever printing step logs like “Step 400/3000…”.

Here is the start of the training process logs:

[2023-07-06 21:59:21,550 INFO] Missing transforms field for corpus_1 data, set to default: [].
[2023-07-06 21:59:21,550 WARNING] Corpus corpus_1's weight should be given. We default it to 1 for you.
[2023-07-06 21:59:21,550 INFO] Missing transforms field for valid data, set to default: [].
[2023-07-06 21:59:21,550 INFO] Parsed 2 corpora from -data.
[2023-07-06 21:59:21,550 INFO] Loading checkpoint from models/model.fren_step_52000.pt
[2023-07-06 21:59:21,889 INFO] Updating checkpoint vocabulary with new vocabulary
[2023-07-06 21:59:21,889 INFO] Get special vocabs from Transforms: {'src': [], 'tgt': []}.
[2023-07-06 21:59:22,053 INFO] The first 10 tokens of the vocabs are:['<unk>', '<blank>', '▁the', ',', '<s>', '</s>', '.', '▁of', '▁and', '▁in']
[2023-07-06 21:59:22,053 INFO] The decoder start token is: <s>
[2023-07-06 21:59:22,059 INFO] Building model...
[2023-07-06 21:59:22,601 INFO] Switching model to float32 for amp/apex_amp
[2023-07-06 21:59:22,601 INFO] Non quantized layer compute is fp16
[2023-07-06 21:59:22,601 INFO] Updating vocabulary embeddings with checkpoint embeddings
[2023-07-06 21:59:22,738 INFO] src: 0 new tokens
[2023-07-06 21:59:23,029 INFO] tgt: 0 new tokens
[2023-07-06 21:59:23,369 INFO] NMTModel(
  (encoder): TransformerEncoder(
    (embeddings): Embeddings(
      (make_embedding): Sequential(
        (emb_luts): Elementwise(
          (0): Embedding(50000, 512, padding_idx=1)
        )
        (pe): PositionalEncoding()
      )
      (dropout): Dropout(p=0.1, inplace=False)
    )
    (transformer): ModuleList(
      (0-5): 6 x TransformerEncoderLayer(
        (self_attn): MultiHeadedAttention(
          (linear_keys): Linear(in_features=512, out_features=512, bias=False)
          (linear_values): Linear(in_features=512, out_features=512, bias=False)
          (linear_query): Linear(in_features=512, out_features=512, bias=False)
          (softmax): Softmax(dim=-1)
          (dropout): Dropout(p=0.1, inplace=False)
          (final_linear): Linear(in_features=512, out_features=512, bias=False)
        )
        (feed_forward): PositionwiseFeedForward(
          (w_1): Linear(in_features=512, out_features=2048, bias=False)
          (w_2): Linear(in_features=2048, out_features=512, bias=False)
          (layer_norm): LayerNorm((512,), eps=1e-06, elementwise_affine=True)
          (dropout_1): Dropout(p=0.1, inplace=False)
          (dropout_2): Dropout(p=0.1, inplace=False)
        )
        (layer_norm): LayerNorm((512,), eps=1e-06, elementwise_affine=True)
        (dropout): Dropout(p=0.1, inplace=False)
      )
    )
    (layer_norm): LayerNorm((512,), eps=1e-06, elementwise_affine=True)
  )
  (decoder): TransformerDecoder(
    (embeddings): Embeddings(
      (make_embedding): Sequential(
        (emb_luts): Elementwise(
          (0): Embedding(50000, 512, padding_idx=1)
        )
        (pe): PositionalEncoding()
      )
      (dropout): Dropout(p=0.1, inplace=False)
    )
    (layer_norm): LayerNorm((512,), eps=1e-06, elementwise_affine=True)
    (transformer_layers): ModuleList(
      (0-5): 6 x TransformerDecoderLayer(
        (self_attn): MultiHeadedAttention(
          (linear_keys): Linear(in_features=512, out_features=512, bias=False)
          (linear_values): Linear(in_features=512, out_features=512, bias=False)
          (linear_query): Linear(in_features=512, out_features=512, bias=False)
          (softmax): Softmax(dim=-1)
          (dropout): Dropout(p=0.1, inplace=False)
          (final_linear): Linear(in_features=512, out_features=512, bias=False)
        )
        (feed_forward): PositionwiseFeedForward(
          (w_1): Linear(in_features=512, out_features=2048, bias=False)
          (w_2): Linear(in_features=2048, out_features=512, bias=False)
          (layer_norm): LayerNorm((512,), eps=1e-06, elementwise_affine=True)
          (dropout_1): Dropout(p=0.1, inplace=False)
          (dropout_2): Dropout(p=0.1, inplace=False)
        )
        (layer_norm_1): LayerNorm((512,), eps=1e-06, elementwise_affine=True)
        (dropout): Dropout(p=0.1, inplace=False)
        (context_attn): MultiHeadedAttention(
          (linear_keys): Linear(in_features=512, out_features=512, bias=False)
          (linear_values): Linear(in_features=512, out_features=512, bias=False)
          (linear_query): Linear(in_features=512, out_features=512, bias=False)
          (softmax): Softmax(dim=-1)
          (dropout): Dropout(p=0.1, inplace=False)
          (final_linear): Linear(in_features=512, out_features=512, bias=False)
        )
        (layer_norm_2): LayerNorm((512,), eps=1e-06, elementwise_affine=True)
      )
    )
  )
  (generator): Linear(in_features=512, out_features=50000, bias=True)
)
[2023-07-06 21:59:23,370 INFO] encoder: 44487680
[2023-07-06 21:59:23,370 INFO] decoder: 76435280
[2023-07-06 21:59:23,370 INFO] * number of parameters: 120922960
[2023-07-06 21:59:23,371 INFO] Trainable parameters = {'torch.float32': 120922960, 'torch.float16': 0, 'torch.uint8': 0, 'torch.int8': 0}
[2023-07-06 21:59:23,371 INFO] Non trainable parameters = {'torch.float32': 0, 'torch.float16': 0, 'torch.uint8': 0, 'torch.int8': 0}
[2023-07-06 21:59:23,371 INFO]  * src vocab size = 50000
[2023-07-06 21:59:23,371 INFO]  * tgt vocab size = 50000
[2023-07-06 21:59:23,378 INFO] Starting training on GPU: [0]
[2023-07-06 21:59:23,378 INFO] Start training loop and validate every 1000 steps...
[2023-07-06 21:59:23,378 INFO] Scoring with: TransformPipe()
[2023-07-06 21:59:24,046 INFO] Weighted corpora loaded so far:
                        * corpus_1: 1
[2023-07-06 21:59:24,686 INFO] Weighted corpora loaded so far:
                        * corpus_1: 1
[2023-07-06 21:59:24,692 INFO] Weighted corpora loaded so far:
                        * corpus_1: 2

and here is my configuration file:

save_data: opennmt

# Training files
data:
    corpus_1:
        path_src: temps/src-flores.subword
        path_tgt: temps/tgt-flores.subword
    valid:
        path_src: temps/src-flores-val.subword
        path_tgt: temps/tgt-flores-val.subword
 
train_from: "models/model.fren_step_52000.pt"
update_vocab: true
reset_optim: all

# Vocabulary files, generated by onmt_build_vocab
src_vocab: opennmt/source.vocab
tgt_vocab: opennmt/target.vocab

# Vocabulary size - should be the same as in SentencePiece
src_vocab_size: 50000
tgt_vocab_size: 50000

# Tokenization options
src_subword_model: models/source.model
tgt_subword_model: models/target.model

# Where to save the log file and the output models/checkpoints
log_file: temps/train.log
save_model: models/model.fren

save_checkpoint_steps: 500
train_steps: 3000
valid_steps: 1000

# Train on a single GPU
world_size: 1
gpu_ranks: [0]

keep_checkpoint: 3

seed: 3435

# Default: 4000 - for large datasets, try up to 8000
warmup_steps: 4000
report_every: 100

# Batching
bucket_size: 262144
batch_type: "tokens"
batch_size: 4096  
valid_batch_size: 2048
max_generator_batches: 2
accum_count: [4]
accum_steps: [0]

# Optimization
multiquery: true
add_ffnbias: false

model_dtype: "fp16"
optim: "adam"
learning_rate: 2
# warmup_steps: 8000
decay_method: "noam"
adam_beta2: 0.998
max_grad_norm: 0
label_smoothing: 0.1
param_init: 0
param_init_glorot: true
normalization: "tokens"

# Model
encoder_type: transformer
decoder_type: transformer
position_encoding: true
enc_layers: 2
dec_layers: 2
heads: 2
hidden_size: 256
word_vec_size: 256
transformer_ff: 64
dropout_steps: [0]
dropout: [0.1]
attention_dropout: [0.1]

The configuration file of my base model is exactly the same, except that it has no train_from: "models/model.fren_step_52000.pt", update_vocab: true, or reset_optim: all, and it points to different dataset files.

This happens only with small datasets; with bigger datasets I don’t have this issue.

My question is: is this normal? Shouldn’t fine-tuning also produce the usual training-step logs?

I hope I’ll find the answer, but for now I can only thank you for reading this far.

The issue is probably just the size of the dataset. You have a batch size of 4,096 tokens and a bucket size of 262,144. All that loading happens because OpenNMT-py is trying to fill a bucket of 262,144 examples, which requires reloading FLORES-200 many times since the dataset is so small.
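
For a corpus of roughly 2k sentence pairs, shrinking the bucket to about the size of the corpus stops the endless reloading. As a minimal sketch (the exact value is a judgment call; everything else stays as in your config), the batching section could become:

# Batching - sized for a ~2k-sentence fine-tuning corpus (value illustrative)
bucket_size: 2000        # instead of the 262,144 default
batch_type: "tokens"
batch_size: 4096
valid_batch_size: 2048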

Oh, you are right! I just reduced the bucket size to 144 and it works, thank you a lot!

Hi!

I am facing the same issue. I recently upgraded to OpenNMT-py v3 and am trying to do mixed fine-tuning of a pre-trained model on a small in-domain dataset of 12k sentences (the generic data is ~40 million sentences), and it is loading the corpora around 600 times between steps.


[2024-02-15 10:30:12,371 INFO] Step 400/105000; acc: 80.3; ppl:   8.7; xent: 2.2; lr: 0.00004; sents:   19947; bsz: 1936/2091/50; 7155/7727 tok/s;    488 sec;
[2024-02-15 10:30:43,076 INFO] * Transform statistics for corpus_2(12.50%):
			* SubwordStats: 5241577 -> 7618176 tokens
			* FilterTooLongStats(filtered=161)

[2024-02-15 10:30:43,076 INFO] Weighted corpora loaded so far:
			* corpus_1: 2
			* corpus_2: 163
[2024-02-15 10:30:43,137 INFO] Weighted corpora loaded so far:
			* corpus_1: 2
			* corpus_2: 164
[2024-02-15 10:30:43,177 INFO] Weighted corpora loaded so far:
			* corpus_1: 2
			* corpus_2: 165
[2024-02-15 10:30:43,217 INFO] Weighted corpora loaded so far:
			* corpus_1: 2
			* corpus_2: 166
[2024-02-15 10:30:43,257 INFO] Weighted corpora loaded so far:
			* corpus_1: 2
			* corpus_2: 167
[2024-02-15 10:30:43,296 INFO] Weighted corpora loaded so far:
			* corpus_1: 2
			* corpus_2: 168
[2024-02-15 10:30:43,338 INFO] Weighted corpora loaded so far:
			* corpus_1: 2
			* corpus_2: 169
[2024-02-15 10:30:43,376 INFO] Weighted corpora loaded so far:
			* corpus_1: 2
			* corpus_2: 170
[2024-02-15 10:30:43,414 INFO] Weighted corpora loaded so far:
			* corpus_1: 2
			* corpus_2: 171
[2024-02-15 10:30:43,452 INFO] Weighted corpora loaded so far:
			* corpus_1: 2
			* corpus_2: 172
[2024-02-15 10:30:43,492 INFO] Weighted corpora loaded so far:
			* corpus_1: 2
			* corpus_2: 173
[2024-02-15 10:30:43,530 INFO] Weighted corpora loaded so far:
			* corpus_1: 2
			* corpus_2: 174
[2024-02-15 10:30:43,572 INFO] Weighted corpora loaded so far:
			* corpus_1: 2
			* corpus_2: 175
[2024-02-15 10:30:43,608 INFO] Weighted corpora loaded so far:
			* corpus_1: 2
			* corpus_2: 176
[2024-02-15 10:30:43,644 INFO] Weighted corpora loaded so far:
			* corpus_1: 2
			* corpus_2: 177
[2024-02-15 10:30:43,679 INFO] Weighted corpora loaded so far:
			* corpus_1: 2
			* corpus_2: 178
[2024-02-15 10:30:43,715 INFO] Weighted corpora loaded so far:
			* corpus_1: 2
			* corpus_2: 179
[2024-02-15 10:30:43,752 INFO] Weighted corpora loaded so far:
			* corpus_1: 2
			* corpus_2: 180
[2024-02-15 10:30:43,790 INFO] Weighted corpora loaded so far:
			* corpus_1: 2
			* corpus_2: 181
[2024-02-15 10:30:43,825 INFO] Weighted corpora loaded so far:
			* corpus_1: 2
			* corpus_2: 182
[2024-02-15 10:30:43,860 INFO] Weighted corpora loaded so far:
			* corpus_1: 2
			* corpus_2: 183
[2024-02-15 10:30:43,896 INFO] Weighted corpora loaded so far:
			* corpus_1: 2
			* corpus_2: 184
[2024-02-15 10:30:43,931 INFO] Weighted corpora loaded so far:
			* corpus_1: 2
			* corpus_2: 185
[2024-02-15 10:30:43,967 INFO] Weighted corpora loaded so far:
			* corpus_1: 2
			* corpus_2: 186
[2024-02-15 10:30:44,002 INFO] Weighted corpora loaded so far:
			* corpus_1: 2
			* corpus_2: 187
[2024-02-15 10:30:44,040 INFO] Weighted corpora loaded so far:
			* corpus_1: 2
			* corpus_2: 188
[2024-02-15 10:30:44,075 INFO] Weighted corpora loaded so far:
			* corpus_1: 2
			* corpus_2: 189
[2024-02-15 10:30:44,112 INFO] Weighted corpora loaded so far:
			* corpus_1: 2
			* corpus_2: 190
[2024-02-15 10:30:44,121 INFO] * Transform statistics for corpus_2(12.50%):
			* SubwordStats: 5215276 -> 7630284 tokens
			* FilterTooLongStats(filtered=162)

[2024-02-15 10:30:44,122 INFO] Weighted corpora loaded so far:
			* corpus_1: 2
			* corpus_2: 163

However, with OpenNMT-py v2, the corpora were loaded only 3–4 times between steps:

[2024-02-14 18:12:33,229 INFO] Step 400/105000; acc:  82.77; ppl:  1.89; xent: 0.64; lr: 0.00004; 4856/4700 tok/s;    504 sec;
[2024-02-14 18:13:23,694 INFO] * Transform statistics for corpus_2(25.00%):
                        * SubwordStats: 79788 -> 120074 tokens
                        * FilterTooLongStats(filtered=2)

[2024-02-14 18:13:23,694 INFO] Weighted corpora loaded so far:
                        * corpus_1: 1
                        * corpus_2: 13
[2024-02-14 18:13:56,191 INFO] * Transform statistics for corpus_2(25.00%):
                        * SubwordStats: 40033 -> 60626 tokens

[2024-02-14 18:13:56,192 INFO] Weighted corpora loaded so far:
                        * corpus_1: 1
                        * corpus_2: 14
[2024-02-14 18:14:43,116 INFO] Step 500/105000; acc:  84.40; ppl:  1.78; xent: 0.58; lr: 0.00004; 4757/4625 tok/s;    634 sec;

The exact same configuration is used with both OpenNMT-py versions (except that rnn_size is now hidden_size in v3):

early_stopping: 30

keep_checkpoint: 10
seed: 3435
valid_steps: 1000
save_checkpoint_steps: 1000
warmup_steps: 4000


decoder_type: transformer
encoder_type: transformer
word_vec_size: 1024
  #rnn_size: 1024
hidden_size: 1024
layers: 6
transformer_ff: 4096
heads: 16

batch_size: 2480 
batch_type: tokens
normalization: tokens
dropout: 0.3
label_smoothing: 0.1
valid_batch_size: 2500

max_generator_batches: 2

eval_metrics: [BLEU, TER]
src_vocab: vocab.shared
src_subword_model: joint.model
share_vocab: 'true'
log_file: training.log
save_model: model/model
train_from: generic_model.pt
src_vocab_size: 70000
tgt_vocab_size: 70000
report_every: 100
data:
  corpus_1:
    path_src: 
      general.eng.train
    path_tgt: 
      general.fre.train
    transforms:
    - sentencepiece
    - filtertoolong
    weight: 1
  corpus_2:
    path_src: 
      indomain.eng.train
    path_tgt: 
      indomain.fre.train
    transforms:
    - sentencepiece
    - filtertoolong
    weight: 10
  valid:
    path_src: 
      eng.dev
    path_tgt: 
      fre.dev
    transforms:
    - sentencepiece
    - filtertoolong
train_steps: 105000

I don’t see any performance issue or error, but it makes the log tedious to monitor or debug. Is this a normal behavioral difference between v2 and v3?

Your weights are not correct. If your in-domain corpus is very small and your general-domain dataset is big, you may need to reverse them.
In your scenario, for every single sentence of the general corpus it will load 10 sentences of your in-domain corpus, so it will reload the small corpus way too many times.
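
To make this concrete, here is a sketch of the reversed weighting, reusing the corpus entries from your config (the values are illustrative; whether that sampling ratio fits your goal is your call):

data:
  corpus_1:
    path_src: general.eng.train
    path_tgt: general.fre.train
    transforms: [sentencepiece, filtertoolong]
    weight: 10    # the large generic corpus is sampled more often
  corpus_2:
    path_src: indomain.eng.train
    path_tgt: indomain.fre.train
    transforms: [sentencepiece, filtertoolong]
    weight: 1     # the small in-domain corpus is reloaded far less often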

@vince62s I understand that the data loading is because of the weights; the chosen weights are intentional. However, as I stated in the post, I didn’t see this behavior with OpenNMT-py v2, only with v3. Anyway, is it possible to mute the corpora-loading log messages so that training can be monitored/debugged more easily?

You can turn it off here:

But more specifically, if you want to keep your weights, just change the bucket size:
the default of 262,144 is irrelevant when you have a dataset of 12k sentences.
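
For example, with your 12k-sentence in-domain corpus and the batching options from your config, the change amounts to a single line (the exact value is only a suggestion):

# Batching - adjusted for a 12k-sentence in-domain corpus
bucket_size: 12000    # close to the in-domain corpus size, instead of the 262,144 default
batch_type: tokens
batch_size: 2480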