For an SMT system, it is important to normalize numbers, dates, times and URLs into a unique representation.
But for NMT, I trained models with and without data normalization between English and French, and I didn't see a big improvement in the BLEU score or in real tests.
In live NMT systems (like Systran or Google), is data normalization still applied, or do they use a better way to do the normalization?
The main problem with NMT is that the vocab is limited. If you don't put numbers and dates in the vocab, it's similar to having a normalisation, since they will be considered unknown (the single token <unk>). If you do put numbers and dates in the vocab, and your text is full of them, you will use a large part of your vocab just for them, while a lot of real words will be considered unknown.
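To get a feel for that trade-off, here is a minimal sketch (not from the thread) that counts how many vocab types a corpus would spend on digit-containing tokens if they are kept verbatim; `corpus.en` is a hypothetical whitespace-tokenized training file.

```python
from collections import Counter

# Count token types in a hypothetical whitespace-tokenized training file.
counts = Counter()
with open("corpus.en", encoding="utf-8") as f:
    for line in f:
        counts.update(line.split())

# Token types that contain at least one digit (numbers, dates, times, ...).
numeric = [t for t in counts if any(c.isdigit() for c in t)]
print(f"{len(numeric)} of {len(counts)} vocab types contain digits "
      f"({100 * len(numeric) / len(counts):.1f}%)")
```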
Data normalization is very important, as it makes data analysis much easier. Unfortunately, I am facing serious problems with it. If anyone knows the correct procedure for data normalisation, please share it with us here.
So, my question is: how did you build your vocab in the case where you DON'T replace them with a code? Did you put the numbers in the vocab? If yes, you certainly got fewer real words in it.
That's similar to my experimental result, so I suspect it's not really important to normalize dates and numbers before building the vocab for NMT.
You may try a richer normalisation. For example, rather than replacing 123.45 with a single uninformative $num$ token, or UNK, replace it with 888.88, which keeps informative precision about the way it's formatted.
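A minimal sketch of this kind of richer normalisation, simply mapping every digit to 8 so that length, separators and formatting are preserved (the function name is illustrative):

```python
import re

def normalize_numbers(text):
    # Map every digit to '8': 123.45 -> 888.88, 2017-05-03 -> 8888-88-88.
    # Formatting is kept; only the actual values are lost.
    return re.sub(r"\d", "8", text)

print(normalize_numbers("Pay 123.45 EUR before 2017-05-03"))
# -> Pay 888.88 EUR before 8888-88-88
```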
To add (a bit late) to this thread: yes, entity normalisation is important, and even if you cannot expect a jump in your score, it will make a big difference for your users. For dealing with that, we have introduced monolingual preprocessing hooks and protected sequences to handle them seamlessly.
In short - the process is the following:
define a monolingual preprocessing (mpreprocess) hook that will locate your favorite entities and annotate them with protected sequence markers - typically, for a URL:
check-out http://myurl.com/1234!
transform that into:
check out ⦅URL：http://myurl.com/1234⦆!
Note that there are 2 fields in the protected sequence, separated by this strange ： character (a fullwidth colon, not a regular :):
- the entity name: URL
- the actual value: http://myurl.com/1234
This notation automatically turns the entity into a single ⦅URL⦆ vocab token, while the second field (the actual value) is used during detokenization at inference time to substitute the actual value back in.
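For illustration only (this is not the actual hook API), a small Python sketch of that annotation step, and of the substitution at detokenization, could look like this; the function names and the URL regex are assumptions:

```python
import re

URL_RE = re.compile(r"https?://\S+")

def annotate(text):
    # "check out http://myurl.com/1234!" -> "check out ⦅URL：http://myurl.com/1234⦆!"
    def wrap(m):
        url = m.group(0).rstrip("!?.,;")      # keep trailing punctuation outside the marker
        tail = m.group(0)[len(url):]
        return "⦅URL：" + url + "⦆" + tail
    return URL_RE.sub(wrap, text)

def extract_values(annotated_source):
    # Record the actual values, in order, for substitution after translation.
    return re.findall(r"⦅URL：(.*?)⦆", annotated_source)

def substitute(translation, values):
    # Replace each generated ⦅URL⦆ placeholder by the next recorded value.
    it = iter(values)
    return re.sub(r"⦅URL⦆", lambda m: next(it, m.group(0)), translation)
```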
Of course, you can also perform the preprocessing outside of the OpenNMT code (i.e. without a hook), but defining it as a hook guarantees that inference and training are identical, and you don't need to add an additional preprocessing layer in the inference code.
Hello @tyahmed, OpenNMT-py is agnostic regarding the tokenization, so you can use the same protection mechanism.
However, afaik the lexical constraint mechanism is not implemented there, so you will not benefit from placeholder uniqueness during generation, nor from the actual pairing of source and target entities.
Okay, thanks for the explanation. So the only way, with PyTorch, is to encode the named entities with placeholders, train the model, then post-process the data after translation? (Post-processing using attention weights to find the corresponding source token for each placeholder in the target.)
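For reference, a hedged sketch of that attention-based post-processing, assuming the translator returns per-target-token attention vectors over source positions (all names here are illustrative):

```python
def restore_placeholders(src_tokens, tgt_tokens, attn, src_values):
    # attn[j][i]: attention weight of target token j on source token i.
    # src_values: dict mapping a source position to the original entity value
    #             recorded during preprocessing.
    out = []
    for j, tok in enumerate(tgt_tokens):
        if tok == "⦅URL⦆":
            # Pick the source position this placeholder attends to most.
            src_pos = max(range(len(src_tokens)), key=lambda i: attn[j][i])
            out.append(src_values.get(src_pos, tok))  # fall back to the raw placeholder
        else:
            out.append(tok)
    return out
```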
Hi,
I have tried to protect some entities, like named entities and URLs, with the method mentioned in this post (⦅URL：http://myurl.com/1234⦆). It seems the notation works for the tokenization and BPE processes. However, the model tries to translate ⦅URL⦆ into a number. For instance,