How to use multi-gpu while training

Hi, I noticed that the issue about multi-GPU support has been marked as a finished feature:

Multi-gpu PyTorch version?

However, it didn't work when I set the option to -gpuid 0 1. In fact, I found this code in the source:

if len(opt.gpuid) > 1:
    sys.stderr.write("Sorry, multigpu isn't supported yet, coming soon!\n")

Is this a bug, or is multi-GPU training still not available in OpenNMT-py?

Thanks.

Maybe you can try the Lua version if you want to use the multi-GPU function.

btw, PyTorch 0.4 supports multi-GPU training, so you could try adding that functionality to this project.
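For reference, a minimal sketch of what PyTorch's built-in data parallelism looks like (this is generic PyTorch usage, not OpenNMT-py's actual integration): wrapping a module in nn.DataParallel splits each input batch across the visible GPUs and gathers the outputs. The Linear layer here is just a stand-in for a real model.

```python
import torch
import torch.nn as nn

# Stand-in model; in practice this would be the seq2seq model.
model = nn.Linear(10, 5)

# Only wrap when more than one GPU is actually visible; on a
# single-GPU or CPU-only machine the model is used directly.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model).cuda()  # replicates the module per GPU

x = torch.randn(8, 10)   # batch of 8 examples
out = model(x)           # batch is scattered, computed, and gathered
print(tuple(out.shape))  # output batch size matches the input: (8, 5)
```

Note that nn.DataParallel is single-process multi-threaded; for serious training, PyTorch also provides DistributedDataParallel, which is generally faster but requires launching one process per GPU.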