Thank you, Guillaume! In fact, I had already set up Lua successfully before this. That all worked, but training took 59 hours (4.5-million-word corpus). So I wanted to try two things at once: get more insight into what is actually happening in the network using TensorBoard, and take advantage of the extra GPU power by using a dual-boot system instead of VirtualBox. After 8 hours of training, though (everything is working nicely now in TensorFlow, thanks to your help!), it seems the GPUs won't work miracles. They are being recognized and used, but considering the quality of the current model's translations, I still have a long way to go.