Getting attention weights

This was asked in a previous question from 6 days ago, “How to visualize attention weights”, but I haven’t been able to find an answer anywhere. When trying to get the attention weights, I get the following error before getting very far.

AttributeError: 'dict' object has no attribute 'seek'. You can only torch.load from a file that is seekable. Please pre-load the data into a buffer like io.BytesIO and try to load from it instead.

Here is the code.

import onmt
import onmt.inputters
import onmt.translate
import onmt.model_builder
from collections import namedtuple
Opt = namedtuple('Opt', ['models', 'data_type', 'reuse_copy_attn', 'gpu'])

opt = Opt("/home/shankrenn/Desktop/hidden-att/model/hidden-2/seed-0/LSTMlang1_step_400.pt", "text", False, 0)
fields, model, model_opt = onmt.model_builder.load_test_model(opt, {"reuse_copy_attn": False})

And here is the trace.

Traceback (most recent call last):

  File "<ipython-input-51-94c1f45c429f>", line 1, in <module>
    runfile('/home/shankrenn/Desktop/hidden-att/graph_hidden_exp.py', wdir='/home/shankrenn/Desktop/hidden-att')

  File "/home/shankrenn/anaconda3/lib/python3.7/site-packages/spyder_kernels/customize/spydercustomize.py", line 786, in runfile
    execfile(filename, namespace)

  File "/home/shankrenn/anaconda3/lib/python3.7/site-packages/spyder_kernels/customize/spydercustomize.py", line 110, in execfile
    exec(compile(f.read(), filename, 'exec'), namespace)

  File "/home/shankrenn/Desktop/hidden-att/graph_hidden_exp.py", line 46, in <module>
    fields, model, model_opt= onmt.model_builder.load_test_model(opt,{"reuse_copy_attn":False})

  File "../../Documents/NMT/OpenNMT-py/onmt/model_builder.py", line 85, in load_test_model
    map_location=lambda storage, loc: storage)

  File "/home/shankrenn/anaconda3/lib/python3.7/site-packages/torch/serialization.py", line 387, in load
    return _load(f, map_location, pickle_module, **pickle_load_args)

  File "/home/shankrenn/anaconda3/lib/python3.7/site-packages/torch/serialization.py", line 549, in _load
    _check_seekable(f)

  File "/home/shankrenn/anaconda3/lib/python3.7/site-packages/torch/serialization.py", line 194, in _check_seekable
    raise_err_msg(["seek", "tell"], e)

  File "/home/shankrenn/anaconda3/lib/python3.7/site-packages/torch/serialization.py", line 187, in raise_err_msg
    raise type(e)(msg)

AttributeError: 'dict' object has no attribute 'seek'. You can only torch.load from a file that is seekable. Please pre-load the data into a buffer like io.BytesIO and try to load from it instead.
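
Looking at the trace, my guess is that load_test_model in the current code no longer takes a dummy-opt dict, so the {"reuse_copy_attn": False} I pass ends up being treated as the model path and handed straight to torch.load. Something like the following seems closer to what it expects now, though I am not sure of the exact signature or of which fields opt needs in this version:

import onmt.model_builder
from collections import namedtuple

# opt.models seems to be indexed as a list inside load_test_model, so wrap the path in one
Opt = namedtuple('Opt', ['models', 'data_type', 'reuse_copy_attn', 'gpu'])
opt = Opt(["/home/shankrenn/Desktop/hidden-att/model/hidden-2/seed-0/LSTMlang1_step_400.pt"],
          "text", False, 0)

# no second dict argument: it would be interpreted as the model path
fields, model, model_opt = onmt.model_builder.load_test_model(opt)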

I am trying to get attention weights for a model trained using OpenNMT-py. The most recent discussion I have found is at http://forum.opennmt.net/t/how-to-visualize-attention-weights/2889. I have managed to do some workarounds to get the first part to work, but when I reach

data = onmt.inputters.build_dataset(fields, "text", None, use_filter_pred=False, src_path='/home/Desktop/exp-training-window-10hidden-att/raw/window-50/test_src_lang1.csv')

it fails because that function no longer exists:

module 'onmt.inputters' has no attribute 'build_dataset'

What is the alternative in the present version?

Please try:

python3 setup.py build
python3 setup.py install

I still get the exact same error.

Any advice here, @vince62s @francoishernandez?

It changed here:

Sorry, I’m on holiday and won’t have time to explain more.

Thank you for the reply! Does anyone know how to use inputters.Dataset when you only have a path to the data, as with build_dataset? I am trying to use it as:

data = onmt.inputters.Dataset(fields, "text", None, filter_pred=False, sort_key=None, dirs='/home/Desktop/exp-training-window-10hidden-att/raw/window-50/test_src_lang1.csv')

But I get

AttributeError: 'str' object has no attribute 'read'

This is because Dataset needs the following, which I am not sure how to interpret:

data (Iterable[Tuple[str, Any]]): (name, ``data_arg``) pairs
            where ``data_arg`` is passed to the ``read()`` method of the
            reader in ``readers`` at that position. (See the reader object for
            details on the ``Any`` type.)
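
My best guess at how to read that, copying the pattern translator.py uses in my checkout (the helpers str2reader, str2sortkey and Dataset.config are what I see there, so they may have moved or changed in other versions):

import onmt.inputters as inputters

src_path = '/home/Desktop/exp-training-window-10hidden-att/raw/window-50/test_src_lang1.csv'

# one (name, data_arg) pair per side; data_arg is what the text reader's read()
# method receives, which for the text reader can be a path to the file
src_data = {"reader": inputters.str2reader["text"](), "data": src_path, "dir": None}
_readers, _data, _dirs = inputters.Dataset.config([("src", src_data)])

data = inputters.Dataset(
    fields, readers=_readers, data=_data, dirs=_dirs,
    sort_key=inputters.str2sortkey["text"], filter_pred=None)

Is that roughly the intended use, or am I off track?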

What exactly are you trying to accomplish? Does your inference run properly? If so, you might just adapt the inference codepath to dump the attention weights along with the predictions. I think this partly works already via the -attn_debug opt, but you may want to adapt it depending on your model architecture.
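
For instance, something along these lines is roughly the programmatic equivalent of passing -attn_debug to translate.py. It is only a sketch against the code I have in front of me, so option and helper names may differ in your version:

import onmt.opts
from onmt.utils.parse import ArgumentParser
from onmt.translate.translator import build_translator

# build a translate opt the same way the command-line tool does
parser = ArgumentParser()
onmt.opts.config_opts(parser)
onmt.opts.translate_opts(parser)
opt = parser.parse_args([
    "-model", "/home/shankrenn/Desktop/hidden-att/model/hidden-2/seed-0/LSTMlang1_step_400.pt",
    "-src", "/home/Desktop/exp-training-window-10hidden-att/raw/window-50/test_src_lang1.csv",
    "-output", "pred.txt",
    "-gpu", "0",
])

translator = build_translator(opt, report_score=False)
# attn_debug=True is what -attn_debug sets: it prints an attention table per sentence,
# and you can adapt that part of the code to dump the raw weights instead
scores, predictions = translator.translate(
    src=opt.src, batch_size=opt.batch_size, attn_debug=True)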

That might be what I am looking for; all I want is to be able to output the weights along with the predictions and the source. I see that most of what I wanted to do is contained in this snippet, so I will go and try it out. Thank you.