Thanks for your reply @guillaumekln. So, does the quote below from the opennmt.net FAQ refer to system RAM?
> While in theory you can train on any machine, in practice, for all but trivially small data sets, you will need a GPU that supports CUDA if you want training to finish in a reasonable amount of time. For medium-size models you will need at least 4GB; for full-size state-of-the-art models, 8-12GB is recommended.
If it does refer to system RAM, the wording should be made clearer, because as written it reads as if it refers to GPU memory.