Is it not possible to run a GPU-trained model on a CPU-only machine? I have specified the -gpuid 0 option on the command line for rest_translation_server.lua, but after loading the model I get the following error message: unknown Torch class <torch.CudaTensor>. I guess I'm not the only one to encounter this.
See the documentation:
Ah, yes. The penny has dropped, as we say in the UK. I need to do the release on the GPU machine before running the released model on the CPU machine. Thanks.
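For anyone landing here later, the release-then-serve workflow looks roughly like this (a sketch only: the model filename is a placeholder, and paths assume the Lua OpenNMT layout where the scripts live under tools/):

```shell
# On the GPU machine: strip the CUDA tensors from the checkpoint so it
# can later be loaded without cutorch. (Model filename is a placeholder.)
th tools/release_model.lua -model model_epoch13_5.44.t7 -gpuid 1

# Copy the resulting *_release.t7 file to the CPU machine, then serve it
# there with -gpuid 0 to force CPU-only inference.
th tools/rest_translation_server.lua -model model_epoch13_5.44_release.t7 -gpuid 0
```

The key point is that the release step must run on a machine with a GPU, because loading the original checkpoint still requires the CUDA Torch classes.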
All that’s working nicely now. Thanks again.
I seem to have an issue running a released model on a CPU machine. Adding a print command to rest_translation_server.lua shows that the server is receiving the request string, but it does not seem to be processed any further and no inference is output. I enclose a screenshot:
The unreleased model works perfectly on a GPU machine. Any ideas, please?
This was resolved by copying across the version of rest_translation_server.lua I had downloaded on 7 March (ONMT v0.5.0), replacing the version that came with v0.6.0. I now get the inference results shot straight back to my client. When I get a moment I'll track down the difference between the two versions - I think I know what it is, but I won't guess.
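For completeness, a working request against the REST server can be tested from the command line with curl, roughly like this (host and port are assumptions: adjust them to match whatever -host and -port you started the server with):

```shell
# Send one source sentence to a locally running rest_translation_server
# instance and print the JSON response containing the translation.
curl -X POST http://127.0.0.1:7784/translator/translate \
     -H 'Content-Type: application/json' \
     -d '[{ "src": "Hello World" }]'
```

If this returns JSON but your own client receives nothing, the problem is likely on the client side or in the server version, as it was here.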