Difference between token and sents for batch_type

Hello,
I had been training a Transformer-based translator successfully with the token option for the batch_type config setting. But after I changed the option from token to sents and restarted training, I got a model with much lower accuracy than when I used the token option.

Below are my onmt_train options.
Is there, by any chance, an option I should change?

python3 train.py -data data/${domain}/pt/${domain}
-save_model data/${domain}/models/model
-layers 6
-rnn_size 512
-word_vec_size 512
-transformer_ff 2048
-heads 8
-encoder_type transformer
-decoder_type transformer
-position_encoding
-train_steps 220000
-max_generator_batches 2
-dropout 0.1
-batch_size 16
-batch_type sents
-normalization sents
-accum_count 2
-optim adam
-adam_beta2 0.998
-decay_method noam
-warmup_steps 8000
-learning_rate 2
-max_grad_norm 0
-param_init 0
-param_init_glorot
-label_smoothing 0.1
-valid_steps 5000
-save_checkpoint_steps 5000
-log_file data/${domain}/log/trn.$(date +%y%m%d_%H%M%S).log
-log_file_level INFO
-exp data/${domain}/exp.txt
-tensorboard
-tensorboard_log_dir runs/onmt
-world_size 1
-gpu_ranks 0

I don’t see any obvious issues with your config. But why are you switching from token batching to sentence batching?
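One thing worth checking: with -batch_type sents, -batch_size counts sentences, while with -batch_type tokens it counts tokens, so the effective batch size per optimizer step can drop dramatically. A rough back-of-the-envelope comparison (assuming, hypothetically, that your earlier token run used batch_size 4096 and that sentences average about 25 tokens — neither number is from your post):

```python
# Rough comparison of tokens processed per optimizer step in the two modes.
# Assumed values (NOT from the original post): the token-mode run used
# batch_size 4096, and the average sentence length is ~25 tokens.
TOKEN_BATCH_SIZE = 4096   # tokens per batch with -batch_type tokens (assumed)
SENT_BATCH_SIZE = 16      # sentences per batch, from -batch_size 16 above
AVG_SENT_LEN = 25         # assumed average tokens per sentence
ACCUM_COUNT = 2           # from -accum_count 2 above

# Tokens seen per optimizer step (batch * gradient accumulation).
tokens_per_step_token_mode = TOKEN_BATCH_SIZE * ACCUM_COUNT
tokens_per_step_sent_mode = SENT_BATCH_SIZE * AVG_SENT_LEN * ACCUM_COUNT

print(tokens_per_step_token_mode)  # 8192
print(tokens_per_step_sent_mode)   # 800
```

If the numbers in your case look anything like this, the sents run is training on roughly a tenth of the tokens per step, which would plausibly hurt accuracy, especially with the noam schedule and warmup tuned for larger batches. Raising -batch_size (and/or -accum_count) in sents mode until the tokens per step are comparable may close the gap.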

Hmm, I see…
The reason I switched from token batching to sentence batching is to compare against another model that uses sentence batches.
That model was built in OpenNMT-tf and uses sentence-level batching, so I’m just trying to switch while keeping all the other config options the same.
Thank you for your reply.