Text Summarization on Gigaword and ROUGE Scoring

[Figure omitted, from Rush et al.]

Hi,

When I try to use files2rouge to compute the ROUGE score, it raises the following error:

  File "files2rouge/files2rouge.py", line 251
    print(*args, **kwargs, file=saveto)
                         ^
SyntaxError: invalid syntax

I have Python 2.7.13 within Anaconda 4.4.0.
Could you please have a look? Thanks.

Indeed. I just pushed new commits to both pythonrouge and files2rouge.
You then need to pull and run setup.py again for both.

My Python 2.7 files2rouge now works.
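For anyone hitting the same error on Python 2: the line print(*args, **kwargs, file=saveto) uses syntax that only Python 3.5+ accepts, because a keyword argument cannot follow **kwargs unpacking in Python 2. A minimal sketch of a Python 2-compatible equivalent (the helper name is made up here, and this is not necessarily how files2rouge fixed it):

    from __future__ import print_function


    def write_line(saveto, *args, **kwargs):
        # In Python 2, print(*args, **kwargs, file=saveto) is a SyntaxError,
        # so merge the target file into kwargs before unpacking instead.
        kwargs.setdefault('file', saveto)
        print(*args, **kwargs)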

I am using the pre-trained model provided by @twang, but I am running into issues when trying to run it.

My Ubuntu 16.04 server doesn’t have a GPU.

Issue:

$ python translate.py -model textsum_acc_51.38_ppl_12.59_e13.pt -src …/sumdata/Giga/input.txt

Traceback (most recent call last):

    ImportError: No module named Dict

I installed a library called “dict” (I couldn’t find any library named “Dict”), but that didn’t solve the problem.

Python version: 2.7.12

torch versions:
torch==0.2.0.post3
torchtext==0.2.0a0

I am new to Python and haven’t been able to crack this. Any leads?

I must say I have never used OpenNMT-py with Python 2.7.

I would first recommend trying Python 3.x.

Now that there is a TensorFlow alternative (OpenNMT-tf: a new alternative), it would be interesting to provide a tutorial using that version as well.

Hi all,

Thanks for the helpful instructions. I trained the model using a GPU and it works fine on the sample data in the Giga folder, but for any other articles I try it generates either no output or just a word or two. Any suggestions?

I’m trying these two short articles:

british intelligence sources report that the group of approximately five somali pirates who have captured the mv Tanya off the somalian coast call themselves the waterways protection regional guard
sources confirmed that diamonds were shipped from yemen to moscow by georgiy giunter on december.

Output: protection is regional guard protection regional guard

giunter is a dealer in jewelry and precious stones who does business in the middle east and russia. giunter is a money launderer in addition to his legitimate gemstone work.

Output: sent from yemen to moscow

I tried to generate summaries with OpenNMT-py using @twang’s pre-trained model:

python3 translate.py -model textsum_acc_51.38_ppl_12.59_e13.pt -src ../data/bitcoin-tosum.txt

However, I got this error:


Traceback (most recent call last):
  File "translate.py", line 116, in <module>
    main()
  File "translate.py", line 39, in main
    onmt.ModelConstructor.load_test_model(opt, dummy_opt.__dict__)
  File "/Users/ifadardin/Documents/Python/OpenNMT-py-master/onmt/ModelConstructor.py", line 114, in load_test_model
    map_location=lambda storage, loc: storage)
  File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/torch/serialization.py", line 261, in load
    return _load(f, map_location, pickle_module)
  File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/torch/serialization.py", line 409, in _load
    result = unpickler.load()
ImportError: No module named 'onmt.Dict'

I checked on GitHub; it might be because onmt.Dict was removed in an update last summer. Is there any workaround?

@SinaMohseni It’s not always easy to debug a model’s behavior. Your case may be related to https://github.com/OpenNMT/OpenNMT-py/issues/457, i.e. we sometimes need to force a minimum output size, otherwise decoding stops too early. If that does not help, I would suggest you open an issue.


@Ifad As you said, it is occurring because the model was trained with another OpenNMT-py version. It makes sense to open an issue for this.
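For reference, the error happens because torch.load unpickles the checkpoint and the pickle references the old onmt.Dict module, which no longer exists in the current code. One possible workaround, sketched below and untested, is to register a stand-in module under that name before loading; which class you alias it to is an assumption and depends on what replaced Dict in your OpenNMT-py version:

    import sys
    import types

    import torch

    # The checkpoint was pickled when OpenNMT-py still had an onmt.Dict module,
    # so unpickling tries to import it. Register a stand-in module under that
    # name before loading (untested sketch; the class to alias is an assumption).
    legacy_dict = types.ModuleType('onmt.Dict')
    # legacy_dict.Dict = <whatever vocabulary class replaced onmt.Dict in your version>
    sys.modules['onmt.Dict'] = legacy_dict

    checkpoint = torch.load('textsum_acc_51.38_ppl_12.59_e13.pt',
                            map_location=lambda storage, loc: storage)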

I had the same problem in OpenNMT-py. If you have a GPU you could try my trained model; it’s here.

I tried to release the model so it could be used on both GPU and CPU systems, but couldn’t find the release_model code.
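If it helps, the underlying idea of releasing a GPU-trained checkpoint for CPU use is just to reload it with all storages mapped to the CPU and save it again. A minimal sketch (this is not the official release_model tool, and the file names are placeholders):

    import torch

    # Load the GPU-trained checkpoint into CPU memory, then save it back so
    # that machines without CUDA can load it. File names are placeholders.
    checkpoint = torch.load('textsum_acc_51.38_ppl_12.59_e13.pt',
                            map_location=lambda storage, loc: storage)
    torch.save(checkpoint, 'textsum_cpu.pt')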


I downloaded your model and copy-pasted the sentences you wrote. I’m not sure if you are actually running the translation on those exact sentences, but you shouldn’t: the input must be tokenized first, i.e. lowercased text (see “Tanya” in the first sentence) and separated punctuation (“russia . giunter” instead of “russia. giunter”).

You may think this is just a detail, but in fact, just by fixing the input I got:

SENT 1: ('british', 'intelligence', 'sources', 'report', 'that', 'the', 'group', 'of', 'approximately', 'five', 'somali', 'pirates', 'who', 'have', 'captured', 'the', 'mv', 'tanya', 'off', 'the', 'somalian', 'coast', 'call', 'themselves', 'the', 'waterways', 'protection', 'regional', 'guard', 'sources', 'confirmed', 'that', 'diamonds', 'were', 'shipped', 'from', 'yemen', 'to', 'moscow', 'by', 'georgiy', 'giunter', 'on', 'december')
PRED 1: somali pirates send diamonds to moscow
PRED SCORE: -6.8853
SENT 2: ('giunter', 'is', 'a', 'dealer', 'in', 'jewelry', 'and', 'precious', 'stones', 'who', 'does', 'business', 'in', 'the', 'middle', 'east', 'and', 'russia', '.', 'giunter', 'is', 'a', 'money', 'launderer', 'in', 'addition', 'to', 'his', 'legitimate', 'gemstone', 'work', '.')
PRED 2: the <unk> of <unk>

We can see that the second prediction is pretty bad. My guess is that your input here is two sentences. This case does not occur in the Gigaword training inputs, so it makes sense that the model struggles, doesn’t it?
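For anyone reproducing this, here is a rough sketch of the kind of preprocessing meant above, i.e. lowercasing and separating punctuation. The real Gigaword data was produced with a proper tokenizer, so treat this only as an approximation:

    import re


    def naive_tokenize(line):
        # Lowercase and put spaces around common punctuation marks so the input
        # roughly matches the tokenized, lowercased Gigaword training data.
        line = line.lower()
        line = re.sub(r"([.,!?;:()])", r" \1 ", line)
        return " ".join(line.split())


    print(naive_tokenize("giunter is a dealer in jewelry and precious stones "
                         "who does business in the middle east and russia. giunter"))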


Is there such an option in the Torch version to force a minimum output length?
Even in the provided giga_0_pred.txt there are some short or empty sentences. I tested on the DUC 2004 Task 1 dataset and the results are also bad. I also found that the larger the beam size, the shorter the average output length, which limits the ROUGE score (a recall-oriented metric).

It has been implemented in OpenNMT-py since #496.
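If I remember correctly it is exposed as a translate.py option named something like -min_length (check python translate.py -h for the exact name in your version), e.g.:

    python translate.py -model textsum_acc_51.38_ppl_12.59_e13.pt -src input.txt -min_length 5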

I’m not sure about the Lua version; I haven’t found anything like this in the translate.lua options, so probably not.

Yeah, I saw that thread as well, but I guess the model needs to be trained again using the Python script? Or can translate.py take the current model as an argument and make predictions?

You don’t have to train again specifically for this feature.


There may be some incompatibility between old models and the current state (depending on how old your model is), but not from this change.

Actually, I was asking whether the translate.py script can take a model produced by train.lua, since I haven’t used the Python version.

I just tested and it certainly doesn’t work.

Oh, no. Transferring a model from Lua Torch to PyTorch isn’t possible.

@SinaMohseni thanks! Your model works. Is there any way to expand the result? For example:

input:
the sense of smell , as marcel proust and his madeleine made clear , is intimately tied to feeling and memory , so it is perhaps not surprising that in schizophrenia , an illness that plays havoc with the emotional capacities of those who suffer from it , the sense of smell is impaired .

result:
the sense of smell

I think it would be better if I could make the result longer.

Thanks for taking a look! You are right, I didn’t tokenize it first.

I should say you are trying a very long and complicated sentence; maybe start with something shorter?

As @pltrdy replied to me, make sure to tokenize the input. Also, your input should be in a single-sentence format. Take a look at this issue for more information.
