Training on multiple shards not working in the PyTorch version?

Hello,
I’m training on a large dataset that was divided into several shards using preprocess.py.
But after checking the log file after 200,000 training steps, it seems only the first shard was loaded/trained. PS: I’m using the PyTorch version of OpenNMT.

Does the pytorch version support multiple shards? How can I train all the shards?
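For reference, here is a minimal sketch of how I understood sharded preprocessing and training to work. This assumes the legacy OpenNMT-py preprocess.py/train.py interface; the -shard_size flag, the file names, and the paths below are my assumptions and may not match your version.

```shell
# Sketch, assuming the legacy OpenNMT-py preprocess.py/train.py interface.
# preprocess.py is expected to write one .pt file per shard under the
# -save_data prefix, e.g. data/demo.train.0.pt, data/demo.train.1.pt, ...
python preprocess.py \
    -train_src data/src-train.txt \
    -train_tgt data/tgt-train.txt \
    -valid_src data/src-val.txt \
    -valid_tgt data/tgt-val.txt \
    -save_data data/demo \
    -shard_size 100000   # examples per shard (flag name/value are assumptions)

# train.py takes the same prefix; my understanding is that it should then
# cycle through all shards matching that prefix during training.
python train.py -data data/demo -save_model demo-model
```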

Thank you.

Sorry, please delete this; it actually works.