Chinese to English translation: many unknown words


(bart van halder) #1

I’m trying to train a model for unsupervised translation from Chinese (Simplified) to English. I collected a corpus of 28M sentences and tokenized the Chinese side by inserting spaces between words. But after training, the model seems not to ‘remember’ many words. Translating a random sample from a Chinese news site often works pretty well, but when I try to translate more random text, the model often returns a high percentage of unks, or it starts to repeat one or two words in the output.

Am I doing something massively wrong, or is there a way to increase the translation memory of a model? I would greatly appreciate any pointers.
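
For reference, here is a minimal sketch of the whitespace tokenization described above, using the jieba segmenter (jieba is an assumption; the post does not say which segmentation tool was used):

```python
# Segment Chinese text into words and join them with spaces,
# so the NMT toolkit sees one word per token.
import jieba

def tokenize_zh(line: str) -> str:
    return " ".join(jieba.cut(line.strip()))

print(tokenize_zh("我在学习机器翻译。"))
# e.g. -> "我 在 学习 机器 翻译 。"
```

Whatever tool is used, the exact same function has to be applied to the training, validation, and test data.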


(Guillaume Klein) #2

Do you tokenize the test data the same way you tokenized the training data?


(bart van halder) #3

Yes, and I use the same tokenization code to tokenize before translation.


(Guillaume Klein) #4

Usually this means:

  • the model is not trained enough (not enough training data, too few iterations, too small a model, etc.),
  • or the test data contains a lot of out-of-vocabulary words (out-of-domain data, a different tokenization, etc.); see the sketch below for a quick OOV check.
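
A quick way to test the second cause is to measure the out-of-vocabulary rate of the test set against the training vocabulary. A minimal sketch, assuming whitespace-tokenized files and an illustrative 50k vocabulary (the file names are hypothetical):

```python
# Estimate how many test-set tokens fall outside the training vocabulary.
from collections import Counter

def load_vocab(train_path: str, size: int = 50000) -> set:
    """Build the top-`size` vocabulary from a whitespace-tokenized training file."""
    counts = Counter()
    with open(train_path, encoding="utf-8") as f:
        for line in f:
            counts.update(line.split())
    return {word for word, _ in counts.most_common(size)}

def oov_rate(test_path: str, vocab: set) -> float:
    """Fraction of test tokens that the model would map to <unk>."""
    total = oov = 0
    with open(test_path, encoding="utf-8") as f:
        for line in f:
            for token in line.split():
                total += 1
                oov += token not in vocab
    return oov / max(total, 1)

vocab = load_vocab("train.zh.tok")  # hypothetical file name
print(f"OOV rate: {oov_rate('test.zh.tok', vocab):.2%}")
```

If the rate is much higher on the test set than on a held-out slice of the training data, the unks come from a vocabulary or domain mismatch rather than from undertraining.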

(bart van halder) #5

Thank you, I think it might be a problem with my validation data. Before I start another two-week training run, do you have any advice on other knobs to tweak, such as extra layers?


(Guillaume Klein) #6

As a baseline, people usually use models with 4 layers, an RNN size of 1000, and a bidirectional encoder.
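
A minimal sketch of launching such a baseline with the legacy OpenNMT-py train.py interface (the flag names and paths are assumptions; check them against the version you are running):

```python
# Train a 4-layer, RNN-size-1000 model with a bidirectional (brnn) encoder.
import subprocess

subprocess.run([
    "python", "train.py",
    "-data", "data/zh-en",          # preprocessed data prefix (hypothetical)
    "-save_model", "models/zh-en",  # checkpoint prefix (hypothetical)
    "-layers", "4",
    "-rnn_size", "1000",
    "-encoder_type", "brnn",
], check=True)
```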


(LakersChampionship) #7

Hello,
I have been doing research on NMT recently. Could you tell me where you found the ZH-EN training dataset?


(Erik Chan) #8

Same here, I am also interested in the training dataset.