Implement a model similar to "RNNsearch"

Hi there,
I want to implement a Bangla->English MT system. I am mainly trying to follow this paper for my implementation (https://arxiv.org/pdf/1409.0473.pdf), but to be honest it is quite difficult for me to grasp the whole paper well enough for an accurate implementation. I am also taking help from this page (Train — OpenNMT-py documentation). After days of studying, I have created a .yaml file for a bidirectional LSTM based model with attention. The file looks like this:

save_data: run2/example
src_vocab: run2/example.vocab.src
tgt_vocab: run2/example.vocab.tgt

overwrite: False
data:
    corpus_1:
        path_src: bpe.train.bn
        path_tgt: bpe.train.en
    valid:
        path_src: bpe.valid.bn
        path_tgt: bpe.valid.en
save_model: run2/model
save_checkpoint_steps: 10000
keep_checkpoint: 10
seed: 3435
train_steps: 100000
valid_steps: 10000
report_every: 100

encoder_type: brnn
decoder_type: rnn
word_vec_size: 620
rnn_size: 1000
layers: 2

optim: adadelta
learning_rate: 1
#adagrad_accumulator_init: 0.1
max_grad_norm: 2

batch_size: 80
dropout: 0.0

copy_attn: true
global_attention: mlp
reuse_copy_attn: true
bridge: true

world_size: 1
gpu_ranks:
- 0
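
In case it matters, I plan to run it roughly the way the quickstart shows, assuming I save the file above as config.yaml; bpe.test.bn, pred.en and the checkpoint step in the model name are just placeholder examples from my side:

# build the vocab files declared in src_vocab / tgt_vocab
onmt_build_vocab -config config.yaml -n_sample 10000

# train with the settings above
onmt_train -config config.yaml

# translate a BPE-processed test file with one of the saved checkpoints
onmt_translate -model run2/model_step_100000.pt -src bpe.test.bn -output pred.en -gpu 0 -verbose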

I am not trying to replicate this paper (https://arxiv.org/pdf/1409.0473.pdf) exactly, just to build something close to it. So kindly let me know whether my .yaml file will be able to produce a valid model.

PS: Although I see that this post (The WMT14 English-French result on the OpenNMT-py) is similar to mine, I am asking again just to be doubly sure. Please don't take it otherwise.

I hope I am clear enough.
Thanks in advance,
Argha