OpenNMT Forum

Translating huge number of sentences


(Volodymyr Kepsha) #1

Hello!
I have a question about translation (en-de). I need to translate a huge number of sentences, more than 100k. There is no problem translating wmt_14 or wmt_17, but with a larger dataset I start having a problem with GPU memory consumption.

This is my script:
python translate.py -gpu 0 -beam_size 10 -batch_size 16 -model $1 -src $2 -tgt $3 -replace_unk -verbose -output translated/$4

Initial consumption was around 1.5-2 GB with batch size 16, but after 3-4k sentences the consumption is around 5 GB, and after 10k it is a bit more than 8 GB. As I understand it, there can be some fluctuation in GPU consumption due to the length of the sentences in a batch, but what I see is a constant increase. I had the idea to split the dataset into pieces of 3k sentences and translate them separately, but that's not very convenient. (I'm translating Europarl.)
Do you know what the problem could be, or how to fix it?
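In case splitting ends up being the way to go, here is a minimal sketch of the chunking step I had in mind (file and prefix names are just examples):

```python
# Split a large source file into chunks of 3,000 lines each, so every
# chunk can be translated in a separate run and GPU memory is released
# between runs. File names here are hypothetical examples.
def split_file(src_path, chunk_size=3000, prefix="chunk"):
    """Write chunk files of at most chunk_size lines; return their paths."""
    with open(src_path, encoding="utf-8") as src:
        lines = src.readlines()
    paths = []
    for i in range(0, len(lines), chunk_size):
        path = f"{prefix}_{i // chunk_size:04d}.txt"
        with open(path, "w", encoding="utf-8") as out:
            out.writelines(lines[i:i + chunk_size])
        paths.append(path)
    return paths
```

Each chunk could then be passed to translate.py as -src, and the outputs concatenated afterwards.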

(Etienne Monneret) #2

Try removing the longest sentences from your data; very long sentences are a common cause of memory spikes during beam search.

(Guillaume Klein) #3


Please note that PyTorch uses a memory caching mechanism, so the reported memory usage is different from the memory actually used. See the PyTorch notes on CUDA memory management.

Also try translating with the -fast option.