Multi-Source Translation

The multi-source Transformer architecture was just added to OpenNMT-tf, starting with “serial” attention layers. You just need to define a Transformer model with parallel inputs.
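For reference, the model definition shipped in config/models/multi_source_transformer.py looks roughly like this (a sketch; check the repository for the exact file):

```python
"""Multi-source Transformer: two source inputs sharing one Transformer encoder."""
import opennmt as onmt

def model():
  return onmt.models.Transformer(
      # Two word embedders wrapped in a ParallelInputter; each declares the
      # configuration key it reads its vocabulary path from.
      source_inputter=onmt.inputters.ParallelInputter([
          onmt.inputters.WordEmbedder(
              vocabulary_file_key="source_vocabulary_1",
              embedding_size=512),
          onmt.inputters.WordEmbedder(
              vocabulary_file_key="source_vocabulary_2",
              embedding_size=512)]),
      target_inputter=onmt.inputters.WordEmbedder(
          vocabulary_file_key="target_vocabulary",
          embedding_size=512),
      num_layers=6,
      num_units=512,
      num_heads=8,
      ffn_inner_dim=2048,
      dropout=0.1,
      attention_dropout=0.1,
      relu_dropout=0.1,
      share_encoders=True)
```

The `vocabulary_file_key` values are what the data configuration must provide, as discussed below in this thread.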

Thank you :hugs:

I have some questions.

I am trying to experiment with the multi-source Transformer provided by OpenNMT-tf, using the WMT 2019 APE dataset.

The structure of the data consists of train.mt, train.pe, train.src, val.mt, val.pe, and val.src.

The training set has 15,089 sentences and the validation set has 1,000 sentences.

Question 1) What vocabulary size should the train and val sets have? I don’t know what vocabulary size to pick for about 15,089 sentences.

Question 2) I am going to run OpenNMT/OpenNMT-tf/blob/master/config/models/multi_source_transformer.py, but I do not know how to write data.yml when using multi_source_transformer.py.

Please help me

  1. Do you mean the vocab size? It depends on the tokenization, there is no good answer I think.
  2. See http://opennmt.net/OpenNMT-tf/data.html#parallel-inputs
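If the question is also how to produce the vocabulary files themselves, OpenNMT-tf ships a script for that. A command sketch, assuming the file layout from the post above (the `--size` value is only illustrative):

```shell
# Build one vocabulary per source input, plus one for the target.
onmt-build-vocab --size 32000 --save_vocab train/train.src.vocab train/train.src
onmt-build-vocab --size 32000 --save_vocab train/train.mt.vocab train/train.mt
onmt-build-vocab --size 32000 --save_vocab train/train.pe.vocab train/train.pe
```

With only ~15k sentences the effective vocabulary will likely be smaller than the cap, which is fine.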

I made my .yml file like below:

model_dir: run/

data:
  train_features_file:
    - ./train/train.src
    - ./train/train.mt
  train_labels_file: ./train/train.pe
  eval_features_file:
    - ./dev/dev.src
    - ./dev/dev.mt
  eval_labels_file: ./dev/dev.pe
  source_words_vocabulary:
    - ./train/train.src.vocab
    - ./train/train.mt.vocab
  target_words_vocabulary: ./train/train.pe.vocab

And my command is below:

onmt-main train --model …/config/models/multi_source_transformer.py --auto_config --config data.yml

A problem occurs:

Traceback (most recent call last):
  File "/usr/local/bin/onmt-main", line 11, in <module>
    load_entry_point('OpenNMT-tf==1.23.0', 'console_scripts', 'onmt-main')()
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/opennmt/bin/main.py", line 169, in main
    hvd=hvd)
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/opennmt/runner.py", line 96, in __init__
    self.model.initialize(self.config["data"])
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/opennmt/models/model.py", line 70, in initialize
    self.examples_inputter.initialize(metadata)
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/opennmt/models/sequence_to_sequence.py", line 377, in initialize
    metadata, asset_dir=asset_dir, asset_prefix=asset_prefix)
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/opennmt/inputters/inputter.py", line 559, in initialize
    self.features_inputter.initialize(metadata, asset_prefix="source_")
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/opennmt/inputters/inputter.py", line 313, in initialize
    inputter.initialize(metadata, asset_prefix="%s%d_" % (asset_prefix, i + 1))
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/opennmt/inputters/text_inputter.py", line 356, in initialize
    metadata, asset_dir=asset_dir, asset_prefix=asset_prefix)
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/opennmt/inputters/text_inputter.py", line 251, in initialize
    self.vocabulary_file = metadata[self.vocabulary_file_key]
KeyError: 'source_vocabulary_1'

Please help me…

Look at the model definition https://github.com/OpenNMT/OpenNMT-tf/blob/master/config/models/multi_source_transformer.py and the vocabulary keys that are expected in the configuration.

I don’t know the meaning of the vocabulary keys.
Do they mean the vocabulary paths?

Each input layer declares the configuration entry it will look up. If you read the model definition, you will find that you should set the following data:

data:
  source_vocabulary_1:  # path to the vocabulary of the first source 
  source_vocabulary_2:  # path to the vocabulary of the second source 
  target_vocabulary: # path to the target vocabulary

My data.yml:

model_dir: run/

data:
  source_vocabulary_1: ./train/train.src.vocab
  source_vocabulary_2: ./train/train.mt.vocab
  target_vocabulary: ./train/train.pe.vocab

  train_features_file:
    - ./train/train.src
    - ./train/train.mt
    - ./train/train.pe
  eval_features_file:
    - ./dev/dev.src
    - ./dev/dev.mt
    - ./dev/dev.pe

However, an error occurs:

Instructions for updating:
Colocations handled automatically by placer.
Traceback (most recent call last):
  File "/usr/local/bin/onmt-main", line 11, in <module>
    load_entry_point('OpenNMT-tf==1.23.0', 'console_scripts', 'onmt-main')()
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/opennmt/bin/main.py", line 174, in main
    runner.train(checkpoint_path=args.checkpoint_path)
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/opennmt/runner.py", line 320, in train
    train_spec.input_fn, hooks=train_spec.hooks, max_steps=train_spec.max_steps)
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 358, in train
    loss = self._train_model(input_fn, hooks, saving_listeners)
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1124, in _train_model
    return self._train_model_default(input_fn, hooks, saving_listeners)
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1151, in _train_model_default
    input_fn, model_fn_lib.ModeKeys.TRAIN))
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 992, in _get_features_and_labels_from_input_fn
    self._call_input_fn(input_fn, mode))
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1079, in _call_input_fn
    return input_fn(**kwargs)
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/opennmt/estimator.py", line 124, in _fn
    prefetch_buffer_size=prefetch_buffer_size)
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/opennmt/inputters/inputter.py", line 670, in make_training_dataset
    dataset_size = self.features_inputter.get_dataset_size(features_file)
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/opennmt/inputters/inputter.py", line 379, in get_dataset_size
    raise ValueError("The number of data files must be the same as the number of inputters")
ValueError: The number of data files must be the same as the number of inputters

Did you take a look at the model definition? It defines 2 inputs, not 3. You should edit it.
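For reference, with the two-input model from the repository, a data.yml along these lines should line up (assuming train.pe/dev.pe are the targets, not a third source):

```yaml
model_dir: run/

data:
  source_vocabulary_1: ./train/train.src.vocab
  source_vocabulary_2: ./train/train.mt.vocab
  target_vocabulary: ./train/train.pe.vocab
  train_features_file:      # one entry per inputter in the model definition
    - ./train/train.src
    - ./train/train.mt
  train_labels_file: ./train/train.pe
  eval_features_file:
    - ./dev/dev.src
    - ./dev/dev.mt
  eval_labels_file: ./dev/dev.pe
```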


Thanks, I solved the problem. Thank you so much!

It was running well but suddenly an error occurred…

command

onmt-main train --model …/config/models/multi_source_transformer.py --auto_config --config data.yml --num_gpus 2

error

Caused by op 'transformer/decoder_1/layer_1/multi_head/conv1d_1/conv1d/Conv2D', defined at:
  File "/usr/local/bin/onmt-main", line 11, in <module>
    load_entry_point('OpenNMT-tf==1.23.0', 'console_scripts', 'onmt-main')()
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/opennmt/bin/main.py", line 174, in main
    runner.train(checkpoint_path=args.checkpoint_path)
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/opennmt/runner.py", line 320, in train
    train_spec.input_fn, hooks=train_spec.hooks, max_steps=train_spec.max_steps)
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 358, in train
    loss = self._train_model(input_fn, hooks, saving_listeners)
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1124, in _train_model
    return self._train_model_default(input_fn, hooks, saving_listeners)
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1154, in _train_model_default
    features, labels, model_fn_lib.ModeKeys.TRAIN, self.config)
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1112, in _call_model_fn
    model_fn_results = self._model_fn(features=features, **kwargs)
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/opennmt/estimator.py", line 169, in _fn
    _loss_op, local_model, features_shards, labels_shards, params, mode)
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/opennmt/utils/parallel.py", line 151, in call
    outputs.append(funs[i](*args[i], **kwargs[i]))
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/opennmt/estimator.py", line 238, in _loss_op
    logits, _ = model(features, labels, params, mode)
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/opennmt/models/model.py", line 88, in call
    return self._call(features, labels, params, mode)
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/opennmt/models/sequence_to_sequence.py", line 208, in _call
    return_alignment_history=True)
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/opennmt/decoders/decoder.py", line 176, in decode
    memory_sequence_length=memory_sequence_length)
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/opennmt/decoders/self_attention_decoder.py", line 229, in decode_from_inputs
    memory_sequence_length=memory_sequence_length)
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/opennmt/decoders/self_attention_decoder.py", line 185, in _self_attention_stack
    return_attention=True)
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/opennmt/layers/transformer.py", line 272, in multi_head_attention
    keys, values = fused_projection(memory, num_units, num_outputs=2)
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/opennmt/layers/transformer.py", line 136, in fused_projection
    tf.layers.conv1d(inputs, num_units * num_outputs, 1), num_outputs, axis=2)
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/tensorflow/python/util/deprecation.py", line 324, in new_func
    return func(*args, **kwargs)
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/tensorflow/python/layers/convolutional.py", line 218, in conv1d
    return layer.apply(inputs)
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/tensorflow/python/keras/engine/base_layer.py", line 1227, in apply
    return self.call(inputs, *args, **kwargs)
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/tensorflow/python/layers/base.py", line 530, in call
    outputs = super(Layer, self).call(inputs, *args, **kwargs)
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/tensorflow/python/keras/engine/base_layer.py", line 554, in call
    outputs = self.call(inputs, *args, **kwargs)
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/tensorflow/python/keras/layers/convolutional.py", line 384, in call
    return super(Conv1D, self).call(inputs)
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/tensorflow/python/keras/layers/convolutional.py", line 194, in call
    outputs = self._convolution_op(inputs, self.kernel)
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/tensorflow/python/ops/nn_ops.py", line 966, in call
    return self.conv_op(inp, filter)
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/tensorflow/python/ops/nn_ops.py", line 591, in call
    return self.call(inp, filter)
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/tensorflow/python/ops/nn_ops.py", line 208, in call
    name=self.name)
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/tensorflow/python/ops/nn_ops.py", line 197, in _conv1d
    name=name)
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/tensorflow/python/util/deprecation.py", line 574, in new_func
    return func(*args, **kwargs)
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/tensorflow/python/util/deprecation.py", line 574, in new_func
    return func(*args, **kwargs)
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/tensorflow/python/ops/nn_ops.py", line 3482, in conv1d
    data_format=data_format)
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/tensorflow/python/ops/gen_nn_ops.py", line 1026, in conv2d
    data_format=data_format, dilations=dilations, name=name)
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper
    op_def=op_def)
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 3300, in create_op
    op_def=op_def)
  File "/data/home/chanjun_park/.local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1801, in __init__
    self._traceback = tf_stack.extract_stack()

ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[769,1,13,1024] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node transformer/decoder_1/layer_1/multi_head/conv1d_1/conv1d/Conv2D (defined at /data/home/chanjun_park/.local/lib/python3.5/site-packages/opennmt/layers/transformer.py:136) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

     [[node optim/cond/Merge (defined at /data/home/chanjun_park/.local/lib/python3.5/site-packages/opennmt/utils/optim.py:256) ]]

Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

You have to tune the configuration for the training to fit on your GPU.
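For example, lowering the token batch size and capping the sequence lengths in the run configuration usually resolves this kind of OOM. A sketch for data.yml under OpenNMT-tf 1.x (the values are illustrative; reduce batch_size until training fits in GPU memory):

```yaml
train:
  batch_size: 2048          # tokens per batch; --auto_config uses a larger default
  batch_type: tokens
  maximum_features_length: 100   # drop overly long source sentences
  maximum_labels_length: 100     # drop overly long target sentences
```

Values set in the user configuration override the ones injected by --auto_config.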