Hi, I am wondering how to add POS tags and train with OpenNMT-tf. Can someone help me?
Hi,
Are you looking to generate the POS tags, or to configure OpenNMT-tf to augment the training data with POS tags?
Hi @guillaumekln, in fact I want to configure OpenNMT-tf to augment the training data with POS tags. (I use this tool to generate the POS tags: https://github.com/stanfordnlp/stanza/ )
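For reference, here is roughly how the tag file can be produced with Stanza so that each output line contains exactly one UPOS tag per source token (a minimal sketch; the file names match the training files in the config further down):

import stanza

# Assumes the English models were downloaded once with: stanza.download("en")
# tokenize_pretokenized=True treats every input line as one sentence of
# whitespace-separated tokens, so the tags stay aligned with the MT tokenization.
nlp = stanza.Pipeline("en", processors="tokenize,pos", tokenize_pretokenized=True)

with open("train.en") as src, open("train_pos.en", "w") as out:
    for line in src:
        doc = nlp(line.strip())
        tags = [word.upos for sent in doc.sentences for word in sent.words]
        out.write(" ".join(tags) + "\n")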
By the way, can we generate the POS tags with OpenNMT-tf?
Thank you.
You can get started by reading about parallel inputs and looking at a model definition with multiple input features.
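For example, with a model whose source side takes words and POS tags as two parallel inputs, the data block could look like this (a sketch; it assumes the per-input vocabulary keys source_1_vocabulary / source_2_vocabulary and uses placeholder file names):

data:
  train_features_file:
    - train.en
    - train_pos.en
  train_labels_file: train.vi
  source_1_vocabulary: src-vocab.txt
  source_2_vocabulary: pos-vocab.txt
  target_vocabulary: tgt-vocab.txt

Note that every dataset fed to such a model (eval_features_file, the file passed to inference, etc.) then also has to provide the same two parallel files, with one line of POS tags per line of words.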
Not without first training a sequence tagging model.
Thank you for the information. I tried it, but I got the following error. Could you help me?
2020-04-22 17:14:48.832196: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer.so.6'; dlerror: libnvinfer.so.6: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib64-nvidia
2020-04-22 17:14:48.832299: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer_plugin.so.6'; dlerror: libnvinfer_plugin.so.6: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib64-nvidia
2020-04-22 17:14:48.832318: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:30] Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
2020-04-22 17:14:49.704528: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-04-22 17:14:49.726839: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-04-22 17:14:49.727413: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties:
pciBusID: 0000:00:04.0 name: Tesla P4 computeCapability: 6.1
coreClock: 1.1135GHz coreCount: 20 deviceMemorySize: 7.43GiB deviceMemoryBandwidth: 178.99GiB/s
2020-04-22 17:14:49.727685: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-04-22 17:14:49.729281: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-04-22 17:14:49.731036: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2020-04-22 17:14:49.731356: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2020-04-22 17:14:49.733023: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2020-04-22 17:14:49.734038: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2020-04-22 17:14:49.737702: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-04-22 17:14:49.737801: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-04-22 17:14:49.738378: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-04-22 17:14:49.738825: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0
INFO:tensorflow:Using parameters:
data:
  eval_features_file: tst2012.en.txt
  eval_labels_file: tst2012.vi.txt
  source_vocabulary: src-vocab.txt
  target_vocabulary: tgt-vocab.txt
  train_features_file:
  - train.en
  - train_pos.en
  train_labels_file: train.vi
eval:
  batch_size: 32
infer:
  batch_size: 32
  length_bucket_width: 5
model_dir: run/
params:
  average_loss_in_time: true
  beam_width: 4
  decay_params:
    model_dim: 512
    warmup_steps: 8000
  decay_type: NoamDecay
  label_smoothing: 0.1
  learning_rate: 2.0
  num_hypotheses: 1
  optimizer: LazyAdam
  optimizer_params:
    beta_1: 0.9
    beta_2: 0.998
score:
  batch_size: 64
train:
  average_last_checkpoints: 8
  batch_size: 3072
  batch_type: tokens
  effective_batch_size: 25000
  keep_checkpoint_max: 8
  length_bucket_width: 1
  max_step: 500000
  maximum_features_length: 100
  maximum_labels_length: 100
  sample_buffer_size: -1
  save_summary_steps: 100
2020-04-22 17:15:00.249756: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA
2020-04-22 17:15:00.254501: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2000165000 Hz
2020-04-22 17:15:00.254757: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1778f40 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-04-22 17:15:00.254792: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2020-04-22 17:15:00.379861: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-04-22 17:15:00.380394: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1779b80 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-04-22 17:15:00.380423: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Tesla P4, Compute Capability 6.1
2020-04-22 17:15:00.381648: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-04-22 17:15:00.382019: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties:
pciBusID: 0000:00:04.0 name: Tesla P4 computeCapability: 6.1
coreClock: 1.1135GHz coreCount: 20 deviceMemorySize: 7.43GiB deviceMemoryBandwidth: 178.99GiB/s
2020-04-22 17:15:00.382075: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-04-22 17:15:00.382101: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-04-22 17:15:00.382123: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2020-04-22 17:15:00.382145: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2020-04-22 17:15:00.382164: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2020-04-22 17:15:00.382181: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2020-04-22 17:15:00.382199: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-04-22 17:15:00.382265: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-04-22 17:15:00.382714: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-04-22 17:15:00.383063: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0
2020-04-22 17:15:00.383124: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-04-22 17:15:00.387992: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-04-22 17:15:00.388022: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] 0
2020-04-22 17:15:00.388033: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 0: N
2020-04-22 17:15:00.388153: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-04-22 17:15:00.388549: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-04-22 17:15:00.388909: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:39] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
2020-04-22 17:15:00.388963: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7123 MB memory) -> physical GPU (device: 0, name: Tesla P4, pci bus id: 0000:00:04.0, compute capability: 6.1)
WARNING:tensorflow:No checkpoint to restore in run/
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/summary/summary_iterator.py:68: tf_record_iterator (from tensorflow.python.lib.io.tf_record) is deprecated and will be removed in a future version.
Instructions for updating:
Use eager execution and:
`tf.data.TFRecordDataset(path)`
INFO:tensorflow:Accumulate gradients of 9 iterations to reach effective batch size of 25000
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0',)
Traceback (most recent call last):
  File "/usr/local/bin/onmt-main", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.6/dist-packages/opennmt/bin/main.py", line 223, in main
    hvd=hvd)
  File "/usr/local/lib/python3.6/dist-packages/opennmt/runner.py", line 217, in train
    moving_average_decay=train_config.get("moving_average_decay"))
  File "/usr/local/lib/python3.6/dist-packages/opennmt/training.py", line 90, in __call__
    for loss in self._steps(dataset, accum_steps=accum_steps, report_steps=report_steps):
  File "/usr/local/lib/python3.6/dist-packages/opennmt/training.py", line 357, in _steps
    for i, loss in enumerate(self._accumulate_next_gradients(dataset, report_steps=report_steps)):
  File "/usr/local/lib/python3.6/dist-packages/opennmt/training.py", line 376, in _accumulate_next_gradients
    iterator = iter(distributed_dataset)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/distribute/input_lib.py", line 677, in __iter__
    self._input_contexts, self._input_workers, self._dataset_fn)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/distribute/input_lib.py", line 1024, in _create_iterators_per_worker_with_input_context
    dataset = dataset_fn(ctx)
  File "/usr/local/lib/python3.6/dist-packages/opennmt/runner.py", line 175, in <lambda>
    weights=data_config.get("train_files_weights"))
  File "/usr/local/lib/python3.6/dist-packages/opennmt/inputters/inputter.py", line 559, in make_training_dataset
    dataset = self.make_dataset([features_file, labels_file], training=True)
  File "/usr/local/lib/python3.6/dist-packages/opennmt/models/sequence_to_sequence.py", line 431, in make_dataset
    data_file, training=training)
  File "/usr/local/lib/python3.6/dist-packages/opennmt/inputters/inputter.py", line 279, in make_dataset
    num_files, len(dataset), i))
ValueError: All parallel inputs must have the same number of data files, saw 2 files for input 0 but got 1 files for input 1
What is your model definition?
This is my custom model; please have a look:
import tensorflow as tf
import opennmt as onmt

def model():
  return onmt.models.Transformer(
      source_inputter=onmt.inputters.ParallelInputter([
          # onmt.inputters.WordEmbedder(embedding_size=512),
          onmt.inputters.WordEmbedder(embedding_size=64)],
          reducer=onmt.layers.ConcatReducer()),
      target_inputter=onmt.inputters.WordEmbedder(embedding_size=512),
      num_layers=6,
      num_units=512,
      num_heads=8,
      ffn_inner_dim=2048,
      dropout=0.1,
      attention_dropout=0.1,
      ffn_dropout=0.1)
If you want your model to have two inputs (words and POS tags), you need to uncomment the first inputter.
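For completeness, a sketch of the definition with both inputters enabled (the 512/64 embedding sizes are simply the values from the posted model, not a recommendation):

import tensorflow as tf
import opennmt as onmt

def model():
  return onmt.models.Transformer(
      source_inputter=onmt.inputters.ParallelInputter([
          onmt.inputters.WordEmbedder(embedding_size=512),  # words
          onmt.inputters.WordEmbedder(embedding_size=64)],  # POS tags
          reducer=onmt.layers.ConcatReducer()),
      target_inputter=onmt.inputters.WordEmbedder(embedding_size=512),
      num_layers=6,
      num_units=512,
      num_heads=8,
      ffn_inner_dim=2048,
      dropout=0.1,
      attention_dropout=0.1,
      ffn_dropout=0.1)

With two inputters, the data configuration also needs one vocabulary per input (source_1_vocabulary for the words, source_2_vocabulary for the tags), as in the earlier sketch, instead of the single source_vocabulary shown in the log above.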