OpenNMT Forum

Training with low GPU memory

I have a GPU with 4 GB of memory.
Which options should I use to train my model?
I got this error after a few hours of training:

RuntimeError: CUDA out of memory. Tried to allocate 734.00 MiB (GPU 0; 3.95 GiB total capacity; 2.21GiB already allocated; 317.06 MiB free; 3.00 GiB reserved in total by PyTorch)

Training worked well for a while, but then it unfortunately stopped with this error.

I know that I have to play with -batch_size and -accum_count, but how?

You can train for free on Google Colab, which offers enough GPU memory.
Here is an English-Russian notebook as an example:

That’s a great tip, @Nart. Thanks! My own on-premise 11GB GPU is overworked as it is :slight_smile:


@kargintima check out these options on this page:
-batch_size, -accum_count
Training models
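To illustrate how those two options work together, here is a hedged sketch of a low-memory training command. The flags -batch_size, -accum_count, and -batch_type are real OpenNMT-py options; the data and model paths, and the specific values, are placeholders you would adapt to your own setup:

```shell
# Sketch of a low-memory OpenNMT-py training run (paths are placeholders).
# A smaller -batch_size lowers peak GPU memory per forward/backward pass;
# a larger -accum_count accumulates gradients over several small batches
# before each optimizer update, so the effective batch
# (batch_size * accum_count) stays large.
onmt_train \
    -data my_data \
    -save_model my_model \
    -batch_type tokens \
    -batch_size 1024 \
    -accum_count 4 \
    -world_size 1 -gpu_ranks 0
```

With these example values the effective batch is roughly 1024 x 4 = 4096 tokens per update, so you can keep halving -batch_size and doubling -accum_count until the out-of-memory error goes away, at the cost of slower training.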