Translation of Image2Text model dataset using OpenNMT-py Library

Hi there,

I am trying to translate my Image2Text dataset using the OpenNMT-py library, by dividing the image test dataset into batches. However, in the documentation, under the "data" section, it is mentioned that only "text" can be provided as source input data. How can I translate images in batches?
Any suggestions would be helpful.

Thank you in advance!

This task is not yet supported in v2. You can use the legacy version (v1.2) instead, following this example: Image to Text — OpenNMT-py documentation
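For reference, a batched image-to-text translation with the legacy v1.2 setup looks roughly like the sketch below. The model file and data paths are placeholders you would replace with your own; the flags follow the legacy `translate.py` interface described in the Im2Text example, where `-data_type img` selects the image reader and `-batch_size` controls batching:

```shell
# Sketch, assuming legacy OpenNMT-py v1.2 and a trained Im2Text model.
# -src is a text file listing one image filename per line,
# resolved relative to -src_dir. All paths below are placeholders.
python translate.py \
    -data_type img \
    -model model.pt \
    -src_dir data/im2text/images \
    -src data/im2text/src-test.txt \
    -output pred.txt \
    -batch_size 32 \
    -beam_size 5 \
    -max_length 150 \
    -gpu 0
```

Increasing `-batch_size` trades memory for throughput when decoding the test set.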

Otherwise, contributions are welcome if you want to implement this in v2.