I have been working with the example Image-to-LaTeX model. I trained the model and it works great. However, its memory footprint when I run it in translation mode is enormous.

On my local laptop, which does not have a GPU, the process never exceeds 4 GB of memory in translation mode. But when I run it on our server, which has an NVIDIA 12 GB GPU, the GPU runs out of memory at very small batch sizes: anything more than five 2-megapixel images in a batch causes an out-of-memory error. So the GPU process seems to be using 2-3 times the memory the CPU-only process uses on my laptop.

Is this normal? Does the memory usage I am describing seem reasonable for this model, or does it look like something is configured incorrectly?