Example Library is obsolete and throws an exception on CPU

I am trying to run only the translation part of this example:

http://opennmt.net/OpenNMT-py/Library.html

It looks like it is quite old; I had to change onmt.io to onmt.inputters, but I still get an exception, and it looks like it is a problem with the CPU. When I run translate.py on the same machine, in the same setting, it works. Once I install OpenNMT-py as a library and run the following, I get this:

import torch
import onmt.translate

vocab = dict(torch.load("train_paraphrase.vocab.pt"))
data = torch.load("train_paraphrase.train.1.pt")
valid_data = torch.load("train_paraphrase.valid.1.pt")
data.load_fields(vocab)
valid_data.load_fields(vocab)
data.examples = data.examples[:100]

translator = onmt.translate.Translator(beam_size=10,
                                       fields=data.fields,
                                       model="averaged-10-epoch.pt",
                                       verbose=True, replace_unk=True, gpu=False)
builder = onmt.translate.TranslationBuilder(data=valid_data, fields=data.fields, replace_unk=True)

valid_iter = onmt.inputters.OrderedIterator(
    dataset=valid_data, batch_size=10,
    train=False)

valid_data.src_vocabs
for batch in valid_iter:
    trans_batch = translator.translate_batch(batch=batch, data=valid_data)
    translations = builder.from_batch(trans_batch)
    for trans in translations:
        print(trans.log(0))
    break

/Users/_/PycharmProjects/Paraphraser/venv/lib/python3.6/site-packages/torchtext/data/field.py:321: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
  return Variable(arr, volatile=not train), lengths
/Users/_/PycharmProjects/Paraphraser/venv/lib/python3.6/site-packages/torchtext/data/field.py:322: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
  return Variable(arr, volatile=not train)
Traceback (most recent call last):
  File "/Users/_/PycharmProjects/Paraphraser/translation_paraphrasing/paraphrasors/open_nmt_translate.py", line 26, in <module>
    trans_batch = translator.translate_batch(batch=batch, data=valid_data)
  File "/Users/_/PycharmProjects/Paraphraser/venv/lib/python3.6/site-packages/OpenNMT_py-0.1-py3.6.egg/onmt/translate/translator.py", line 292, in translate_batch
    for __ in range(batch_size)]
  File "/Users/_/PycharmProjects/Paraphraser/venv/lib/python3.6/site-packages/OpenNMT_py-0.1-py3.6.egg/onmt/translate/translator.py", line 292, in <listcomp>
    for __ in range(batch_size)]
  File "/Users/_/PycharmProjects/Paraphraser/venv/lib/python3.6/site-packages/OpenNMT_py-0.1-py3.6.egg/onmt/translate/beam.py", line 32, in __init__
    self.scores = self.tt.FloatTensor(size).zero_()
TypeError: type torch.cuda.FloatTensor not available

------------UPDATE-----------
I struggled with this for half a day, then found the problem five minutes after posting here: it should be gpu=-1, not gpu=False.
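
For anyone else hitting this, the only change from my snippet above is the gpu argument (my understanding is that it is an integer device id, so False ends up being treated as GPU 0 and CUDA tensors get requested):

translator = onmt.translate.Translator(beam_size=10,
                                       fields=data.fields,
                                       model="averaged-10-epoch.pt",
                                       verbose=True, replace_unk=True,
                                       gpu=-1)  # -1 selects the CPU path; gpu=False was interpreted as device 0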

I have now run into other errors in:

def report_func(*args):
    stats = args[-1]
    stats.output(args[0], args[1], 10, 0)
    return stats
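
My reading of it, assuming the old onmt.Statistics.output(epoch, batch, n_batches, start) signature from the version the example was written for (I may be wrong here, since this part of the API has changed):

def report_func(*args):
    # The trainer passes its progress values positionally and the
    # running onmt.Statistics object as the last argument.
    stats = args[-1]
    # Assuming output(epoch, batch, n_batches, start): this prints
    # args[0] as the epoch and args[1] as the batch index, with a
    # hard-coded batch count of 10 and start time of 0.
    stats.output(args[0], args[1], 10, 0)
    return stats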

I tried to change the old code to be compatible with the new onmt API, but I still get errors in this part. Does anybody know what is wrong with report_func and why it is used in this example?
Does anybody have an updated version of the library example code?

Hello @fahimeh, can you describe the problem you are having in a bit more detail?