failing with TypeError

I tried to follow the quickstart example. Steps 1 & 2 succeeded and I got my LSTM model trained. When I try to translate using this model, however, I invariably get a TypeError; the batch format seems to be off from what PyTorch expects.

> python3.5 -model -src data/src-test.txt -output pred.txt -replace_unk -verbose -gpu 7
> WARNING: -batch_size isn't supported currently, we set it to 1 for now!
> Loading model parameters.
> Traceback (most recent call last):
>   File "", line 131, in <module>
>     main()
>   File "", line 68, in main
>     for batch in test_data:
>   File "/local/home/ecarsten/.local/lib/python3.5/site-packages/torchtext/data/", line 178, in __iter__
>     self.train)
>   File "/local/home/ecarsten/.local/lib/python3.5/site-packages/torchtext/data/", line 22, in __init__
>     setattr(self, name, field.process(batch, device=device, train=train))
>   File "/local/home/ecarsten/.local/lib/python3.5/site-packages/torchtext/data/", line 184, in process
>     tensor = self.numericalize(padded, device=device, train=train)
>   File "/local/home/ecarsten/.local/lib/python3.5/site-packages/torchtext/data/", line 296, in numericalize
>     arr = [numericalization_func(x) for x in arr]
>   File "/local/home/ecarsten/.local/lib/python3.5/site-packages/torchtext/data/", line 296, in <listcomp>
>     arr = [numericalization_func(x) for x in arr]
> TypeError: float() argument must be a string or a number, not 'torch.LongTensor'
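
For what it's worth, the final TypeError is just Python's `float()` rejecting a tensor object: torchtext's `numericalize` applies a conversion function to each element of the batch, and here one of those elements is already a `torch.LongTensor` rather than a plain number or string. A minimal sketch of the error class (using a stand-in class instead of an actual `torch.LongTensor`, so it runs without PyTorch installed; the name `LongTensorStandIn` is illustrative, not part of torchtext):

```python
class LongTensorStandIn:
    """Stand-in for torch.LongTensor; illustrative only."""
    pass

# numericalize applies a conversion such as float() to each element;
# passing a tensor object instead of a number raises the same class
# of TypeError seen in the traceback above.
arr = [LongTensorStandIn()]
try:
    converted = [float(x) for x in arr]
except TypeError as e:
    print(type(e).__name__, "-", e)
```

This suggests the field's data is being numericalized twice, or that a pre-tensorized batch is fed where raw tokens are expected, but I have not confirmed the root cause.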

Could you open an issue on GitHub in the OpenNMT-py repo?