I tried to follow the quickstart example at http://opennmt.net/OpenNMT/quickstart/. Steps 1 & 2 succeeded and I got my LSTM model trained. When I try to translate using this model, I invariably get a TypeError: the batch format seems to differ from what torchtext expects.
> python3.5 translate.py -model demo-mode.pt -src data/src-test.txt -output pred.txt -replace_unk -verbose -gpu 7
> WARNING: -batch_size isn't supported currently, we set it to 1 for now!
> Loading model parameters.
> Traceback (most recent call last):
> File "translate.py", line 131, in <module>
> main()
> File "translate.py", line 68, in main
> for batch in test_data:
> File "/local/home/ecarsten/.local/lib/python3.5/site-packages/torchtext/data/iterator.py", line 178, in __iter__
> self.train)
> File "/local/home/ecarsten/.local/lib/python3.5/site-packages/torchtext/data/batch.py", line 22, in __init__
> setattr(self, name, field.process(batch, device=device, train=train))
> File "/local/home/ecarsten/.local/lib/python3.5/site-packages/torchtext/data/field.py", line 184, in process
> tensor = self.numericalize(padded, device=device, train=train)
> File "/local/home/ecarsten/.local/lib/python3.5/site-packages/torchtext/data/field.py", line 296, in numericalize
> arr = [numericalization_func(x) for x in arr]
> File "/local/home/ecarsten/.local/lib/python3.5/site-packages/torchtext/data/field.py", line 296, in <listcomp>
> arr = [numericalization_func(x) for x in arr]
> TypeError: float() argument must be a string or a number, not 'torch.LongTensor'
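For context, here is a minimal sketch (my own illustration, not OpenNMT or torchtext code) of the failure mode the traceback points at: `numericalize` in torchtext's `field.py` maps a conversion function such as `float` over the batch, and `float()` only accepts strings or numbers, so a batch entry that is already a `torch.LongTensor` raises exactly this TypeError. The `FakeLongTensor` class below is a stand-in so the sketch runs without PyTorch installed.

```python
# Simplified stand-in for torchtext's numericalize (field.py line 296 in the
# traceback): it applies a conversion function to every element of the batch.
def numericalize(arr, numericalization_func=float):
    return [numericalization_func(x) for x in arr]

print(numericalize(["1.0", "2.5"]))  # raw string tokens convert fine

class FakeLongTensor:
    """Stand-in for torch.LongTensor so the sketch needs no PyTorch."""
    pass

try:
    # Batch already contains tensor objects instead of raw tokens/numbers.
    numericalize([FakeLongTensor()])
except TypeError as err:
    print(err)  # float() argument must be a string or a number, ...
```

So the question is why the test iterator hands `field.process` data that has already been converted to tensors instead of raw token lists.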