Training Romance Multi-Way model

I wrote that script.
Yes, there is a typo in the -nparallel parameter; I will fix it.
The BLEU score is not as high as in the forum post because I used a smaller network to make training faster.

If you want to replicate the BLEU score from the post, you need to increase the network size, but then you will have to wait longer …

So, can I get the BLEU results from the script?

Well, there are 20 different scores …
For instance:
FR-ES: 30.40
ES-FR: 29.14
PT-RO: 25.14
PT-ES: 33.32

Hi,
one naive question: how do you test a BPE-based NMT model? Is it like this: first apply BPE to the test source and target sentences like the training data, then translate the BPE'd test source sentences, and use the BLEU script to compare the target subword sentences with the translated subword sentences? But it seems wrong to compare subword units, since BLEU should be applied to words, not subwords.

Scoring is done at the word level.

Check the recipe to understand how it's done.

https://github.com/OpenNMT/Recipes
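Roughly, the flow is: segment the test source with the same BPE model used for training, translate, merge the subwords back into words (and detokenize), and only then run multi-bleu against the word-level reference. A minimal sketch with hypothetical file names and the Lua tools shipped with OpenNMT (the recipe has the exact commands and options):

# hypothetical paths; run from the OpenNMT root
th tools/tokenize.lua -joiner_annotate -bpe_model bpe.model < test.src > test.src.tok
th translate.lua -model model.t7 -src test.src.tok -output test.hyp.tok
th tools/detokenize.lua < test.hyp.tok > test.hyp.detok    # joiners removed, subwords merged back into words
perl multi-bleu.perl test.tgt < test.hyp.detok             # Moses' multi-bleu.perl (path is a placeholder); reference is the plain, non-BPE target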

I will post a commit on this recipe because I used a detokenized BLEU score, whereas in the tutorial above the scores are calculated on tokenized output.

Doing the calculation on tokenized output, I get about the same scores as the tutorial with a smaller network:

test-esfr_multibleu.txt:BLEU = 32.78, 61.8/41.2/29.8/22.1 (BP=0.911, ratio=0.915, hyp_len=15036, ref_len=16436)
test-esit_multibleu.txt:BLEU = 28.64, 58.2/36.5/25.1/17.8 (BP=0.918, ratio=0.921, hyp_len=13906, ref_len=15091)
test-espt_multibleu.txt:BLEU = 34.52, 64.5/43.2/31.5/23.5 (BP=0.911, ratio=0.914, hyp_len=13004, ref_len=14221)
test-esro_multibleu.txt:BLEU = 27.21, 57.2/34.9/23.9/16.8 (BP=0.909, ratio=0.913, hyp_len=13279, ref_len=14550)
test-fres_multibleu.txt:BLEU = 32.09, 62.8/40.8/28.8/20.7 (BP=0.912, ratio=0.916, hyp_len=13762, ref_len=15027)
test-frit_multibleu.txt:BLEU = 27.34, 57.3/35.3/24.0/16.6 (BP=0.912, ratio=0.916, hyp_len=13509, ref_len=14751)
test-frpt_multibleu.txt:BLEU = 30.27, 61.1/38.6/26.9/19.0 (BP=0.915, ratio=0.918, hyp_len=13148, ref_len=14323)
test-frro_multibleu.txt:BLEU = 25.40, 55.1/32.5/21.6/14.9 (BP=0.922, ratio=0.924, hyp_len=13116, ref_len=14188)
test-ites_multibleu.txt:BLEU = 30.11, 61.6/39.2/27.3/19.4 (BP=0.896, ratio=0.901, hyp_len=13241, ref_len=14698)
test-itfr_multibleu.txt:BLEU = 31.57, 60.5/39.7/28.9/21.5 (BP=0.902, ratio=0.907, hyp_len=14686, ref_len=16193)
test-itpt_multibleu.txt:BLEU = 28.02, 60.2/36.9/24.8/17.0 (BP=0.900, ratio=0.905, hyp_len=13700, ref_len=15137)
test-itro_multibleu.txt:BLEU = 23.82, 53.8/31.2/20.7/14.0 (BP=0.903, ratio=0.907, hyp_len=13265, ref_len=14623)
test-ptes_multibleu.txt:BLEU = 35.34, 65.3/43.7/31.7/23.2 (BP=0.928, ratio=0.931, hyp_len=13751, ref_len=14772)
test-ptfr_multibleu.txt:BLEU = 34.03, 63.3/42.8/31.6/23.8 (BP=0.900, ratio=0.905, hyp_len=14882, ref_len=16451)
test-ptit_multibleu.txt:BLEU = 28.18, 57.2/35.8/24.6/17.2 (BP=0.924, ratio=0.927, hyp_len=13547, ref_len=14618)
test-ptro_multibleu.txt:BLEU = 28.54, 57.8/36.1/24.8/17.3 (BP=0.927, ratio=0.930, hyp_len=13536, ref_len=14561)
test-roes_multibleu.txt:BLEU = 32.67, 63.8/41.6/29.6/21.4 (BP=0.907, ratio=0.911, hyp_len=13592, ref_len=14914)
test-rofr_multibleu.txt:BLEU = 33.01, 61.6/41.1/30.0/22.3 (BP=0.916, ratio=0.919, hyp_len=15274, ref_len=16622)
test-roit_multibleu.txt:BLEU = 26.97, 56.5/35.0/23.9/16.9 (BP=0.902, ratio=0.907, hyp_len=13590, ref_len=14986)
test-ropt_multibleu.txt:BLEU = 30.33, 61.7/39.1/27.0/18.9 (BP=0.911, ratio=0.914, hyp_len=13790, ref_len=15082)

I ran your script, but the perplexity suddenly began to grow, so I stopped and restarted with -continue, but it soon became large again. Do you know the reason?

Is there an easy way to explore a pretrained model – for example, getting the model configuration (number of layers for the encoder and decoder, bidirectional or not) as well as the weights of all the Linear layers for the encoder, decoder, and attention? Thanks for the help in advance.

You may want to take a look at the release_model.lua script.

Model configurations can be displayed with a simple:

print(checkpoint.options)

and the releaseModel function traverses parts of the model (e.g. the encoder). With some print statements you should be able to get a sense of what is going on.
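For example, a quick way to dump both the configuration and the stored modules from the shell (a minimal sketch; the checkpoint file name is a placeholder and the checkpoint.models layout is assumed from release_model.lua):

# run from the OpenNMT root so that require('onmt.init') resolves
cat > explore_model.lua <<'EOF'
require('onmt.init')
local checkpoint = torch.load(arg[1])
print(checkpoint.options)                    -- training options: layers, rnn_size, brnn, ...
for name, model in pairs(checkpoint.models) do
  print(name)                                -- e.g. encoder, decoder
  print(model)                               -- nn containers print their structure, including Linear sizes
end
EOF
th explore_model.lua model.t7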

Hello, newbie here. Is this method doable with OpenNMT-tf on Windows 7 x64 in some way? I am working to train a 5-way NMT model with BPE. I already installed OpenNMT-tf and TensorFlow. Also, I was able to train an English-to-German model to test whether OpenNMT is properly installed on my system, and monitored it using TensorBoard. But I am currently stuck at tokenization when using OpenNMTTokenizer: I am getting an error saying "--tokenizer: invalid choice: 'OpenNMTTokenizer'". I compiled the OpenNMT Tokenizer without Boost, gtest, and SentencePiece. For the meantime, I am using Moses' tokenizer.perl. Thank you. :slight_smile:

Hello,

On Windows, you should manually install the Python wrapper of the tokenizer to use it within OpenNMT-tf. See:

However, it might be simpler to install Boost and compile the Tokenizer with its clients (cli/tokenize and cli/detokenize). Then you can prepare the corpora before feeding them to OpenNMT-tf.
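For instance, something along these lines once the clients are built (file names are placeholders, and the option names mirror the tokenization settings used later in this thread; check the clients' --help output for the exact flags of your Tokenizer version):

# pre-tokenize the corpora outside of OpenNMT-tf (hypothetical paths)
cli/tokenize --joiner_annotate --case_feature --bpe_model bpe-32k.model < train.src > train.src.tok
cli/tokenize --joiner_annotate --case_feature --bpe_model bpe-32k.model < train.tgt > train.tgt.tok
# and detokenize the predictions after inference
cli/detokenize --case_feature < predictions.tok > predictions.txt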

I've already compiled Boost on my system with toolset=gcc. However, cmake could not find Boost even though I set the root and lib using this command:
cmake -DBOOST_ROOT=C:\boost-install\include\boost-1_66\boost -DBOOST_LIBRARYDIR=C:\boost-install\lib -G "MinGW Makefiles" -DCMAKE_BUILD_TYPE=Release

My ...\boost-1_66\boost contains a bunch of folders and .hpp files while ...\lib folder contains .a files.

Try with:

-DBOOST_INCLUDEDIR=C:\boost-install\include\boost-1_66 -DBOOST_LIBRARYDIR=C:\boost-install\lib
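So the full configure command would look something like this (same paths as above, with BOOST_INCLUDEDIR pointing at the directory that contains the boost/ headers rather than at boost/ itself):

cmake -DBOOST_INCLUDEDIR=C:\boost-install\include\boost-1_66 -DBOOST_LIBRARYDIR=C:\boost-install\lib -G "MinGW Makefiles" -DCMAKE_BUILD_TYPE=Release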

Hello, I managed to compile the tokenizer and detokenizer with Boost using the MinGW Distro (MinGW with a lot of built-in libraries, including Boost). Now, I do have a question related to this topic. I have 4 parallel corpora that are translated and aligned to each other (i.e. train.{en,tgl,bik,ceb}), unlike the dataset used in this thread, which has individual alignments/data for each pair (i.e. train-{src}{tgt}.{es,fr,it,pt,ro}). How do I add language tokens to my data in this case? Thank you. :slight_smile:

Hello,

Good question! In the set-up proposed in this tutorial, you do need to specify the target language in the source sentence; that token is used to trigger decoding in the target language. You can do the same here: just sample source/target pairs and annotate them (you need to sample pairs for training in any case, since you cannot train all 4 translations simultaneously).
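As an illustration, the annotation step for one sampled pair can be as simple as this (hypothetical file names; the __opt_src_/__opt_tgt_ token format follows the tutorial's convention):

# prepend language tokens to each (tokenized) source line, e.g. English->Tagalog
src=eng; tgt=tgl
sed "s/^/__opt_src_${src} __opt_tgt_${tgt} /" train.${src}${tgt}.${src}.tok > train.${src}${tgt}.${src}.annotated.tok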

However, another approach would be to inject the target language token as the forced first token of the decoded sentence; this would make your encoder totally agnostic of the target language. If you want, I can give you some entry points in the code for doing such an experiment.

Best
Jean

Hi Jean, thank you for the reply.

I am using OpenNMT-tf. I made a script that duplicates the 4 corpora (train.{eng,tgl,bik,ceb}) and names the copies (train.engbik.eng, train.engceb.eng, train.engtgl.eng … train.tgleng.tgl). I tokenized the training data without additional parameters, trained a BPE model of size 32000 on the tokenized training data, then tokenized the valid, test, and training data using the parameters case_feature, joiner_annotate, and bpe_model. Accordingly, I added the language tokens with "s//__opt_src_${src} __opt_tgt_${tgt} /" to the test, valid, and train files. After preparing the data, I built 2 vocabularies of size 50000, one for the source (train-multi.src.tok) and another for the target (train-multi.tgt.tok), then started to train the model with 2 layers, 512 RNN size, a bidirectional RNN encoder, an attention RNN decoder, and 600 word embedding size. At the 40,2000th step I tried to test it and translate a tokenized (.tok) test file (test-engtgl.eng) with source language English and target language Tagalog. However, the translation output is in the same language as the test file, and the language tokens were replaced with "<unk><unk>". Is this completely normal?

Data, configuration files, and scripts that I used can be found here. Thank you.

Do you have detailed steps/scripts for the "extraction of bitext" and the "alignment"? Curious if you used any third-party alignment tools? I'm trying to replicate this for other language families.

@jean.senellart
@guillaumekln

Appreciate any help. Thanks !

I think it's just about looking up each French sentence and checking whether it is also present in the FRES, FRIT, FRPT, and FRRO corpora. If yes, then 5 parallel translations are found.
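A rough sketch of that intersection with standard command-line tools (file names are hypothetical; each pair is assumed to be line-aligned, e.g. train-fres.fr / train-fres.es):

# 1) French sentences that appear in all four pairs
sort -u train-fres.fr > fres.fr.sorted
sort -u train-frit.fr > frit.fr.sorted
sort -u train-frpt.fr > frpt.fr.sorted
sort -u train-frro.fr > frro.fr.sorted
comm -12 fres.fr.sorted frit.fr.sorted | comm -12 - frpt.fr.sorted | comm -12 - frro.fr.sorted > fr.common

# 2) For each pair, keep only the lines whose French side is in fr.common,
#    preserving the alignment with the other language
paste train-fres.fr train-fres.es | awk -F'\t' 'NR==FNR {keep[$0]=1; next} keep[$1]' fr.common - > fres.5way.tsv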

Thanks @guillaumekln, that makes it way simpler.
I'm guessing that if you did not have any parallel data, you could use something like bitext mining to get close to a parallel corpus.

Here’s my 2 cents on replicating this in OpenNMT-tf with results.
[image: BLEU results]

Training:
Activate the environment (optional) –

source activate tensorflow_p36

Building the vocabulary from tokenized training files –

onmt-build-vocab --size 50000 --save_vocab /home/ubuntu/multi-lingual_modeling/multi_tokenized/src_vocab_50k.txt /home/ubuntu/multi-lingual_modeling/multi_tokenized/train-multi-src.tok

onmt-build-vocab --size 50000 --save_vocab /home/ubuntu/multi-lingual_modeling/multi_tokenized/tgt_vocab_50k.txt /home/ubuntu/multi-lingual_modeling/multi_tokenized/train-multi-tgt.tok

Run the training (Transformer model) –

nohup onmt-main train_and_eval --model_type Transformer --config config_run_da_nfpa.yml --auto_config --num_gpus 1 2>&1 | tee multi_rnn_testrun1.log

Inference:
cp /home/ubuntu/multi-lingual_modeling/multi_tokenized/test-multi-*.tok /home/ubuntu/multi-lingual_modeling/multi_tokenized/inference_data/
cd /home/ubuntu/multi-lingual_modeling/multi_tokenized/inference_data

Split and rename all inference language files –
split -l 500 test-multi-src.tok
mv xaa test.src.es.fr.es
mv xab test.src.es.it.es
... (Rename all the remaining segments correspondingly)
split -l 500 test-multi-tgt.tok
mv xaa test.tgt.es.fr.fr
mv xab test.tgt.es.it.it
... (Rename all the remaining segments correspondingly)

Run the two commands below for each of the 20 language pairs –

onmt-main infer --config multi-lingual_modeling/config_run_da_nfpa.yml --features_file /home/ubuntu/multi-lingual_modeling/multi_tokenized/inference_data/test.src.es.it.es --predictions_file /home/ubuntu/multi-lingual_modeling/multi_tokenized/inference_data/infer_result.src.es.it.it --auto_config

perl OpenNMT-tf/third_party/multi-bleu.perl /home/ubuntu/multi-lingual_modeling/multi_tokenized/inference_data/test.tgt.es.it.it < /home/ubuntu/multi-lingual_modeling/multi_tokenized/inference_data/infer_result.src.es.it.it

Config File (config_run_da_nfpa.yml):

model_dir: /home/ubuntu/multi-lingual_modeling/opennmt-tf_run1
data:
  train_features_file: /home/ubuntu/multi-lingual_modeling/multi_tokenized/train-multi-src.tok
  train_labels_file: /home/ubuntu/multi-lingual_modeling/multi_tokenized/train-multi-tgt.tok
  eval_features_file: /home/ubuntu/multi-lingual_modeling/multi_tokenized/valid-multi-src.tok
  eval_labels_file: /home/ubuntu/multi-lingual_modeling/multi_tokenized/valid-multi-tgt.tok
  source_words_vocabulary: /home/ubuntu/multi-lingual_modeling/multi_tokenized/src_vocab_50k.txt
  target_words_vocabulary: /home/ubuntu/multi-lingual_modeling/multi_tokenized/tgt_vocab_50k.txt

params:
  replace_unknown_target: true

train:
  save_checkpoints_steps: 1000
  keep_checkpoint_max: 3
  save_summary_steps: 1000
  train_steps: 5000
  batch_size: 3072

eval:
  eval_delay: 1800
  external_evaluators: [BLEU, BLEU-detok]

Thanks !
