How to check if onmt-main is using my GPU

opennmt-tf

(Voja K) #1

I use default configs.

Here are the logs:
2018-05-22 09:16:30.642110: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: FMA
INFO:tensorflow:Using config: {'_save_checkpoints_secs': None, '_session_config': gpu_options {
}
allow_soft_placement: true
, '_keep_checkpoint_max': 5, '_task_type': 'worker', '_train_distribute': None, '_is_chief': True, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f4d3e56b590>, '_evaluation_master': '', '_save_checkpoints_steps': 5000, '_keep_checkpoint_every_n_hours': 10000, '_service': None, '_num_ps_replicas': 0, '_tf_random_seed': None, '_master': '', '_num_worker_replicas': 1, '_task_id': 0, '_log_step_count_steps': 50, '_model_dir': 'en_cs50k/model', '_global_id_in_cluster': 0, '_save_summary_steps': 50}
INFO:tensorflow:Running training and evaluation locally (non-distributed).
INFO:tensorflow:Start train and evaluate loop. The evaluate will happen after 18000 secs (eval_spec.throttle_secs) or training is finished.
INFO:tensorflow:Calling model_fn.


(Guillaume Klein) #2

You should see logs related to detected GPU devices. Did you install the package tensorflow instead of tensorflow-gpu?
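You can also ask TensorFlow directly which devices it sees. A minimal sketch against the TF 1.x-era `device_lib` API (the helper name `visible_gpus` is my own, not part of OpenNMT-tf):

```python
def visible_gpus():
    """Return names of GPU devices TensorFlow can see, or None if TF is absent."""
    try:
        from tensorflow.python.client import device_lib
    except ImportError:
        return None
    return [d.name for d in device_lib.list_local_devices()
            if d.device_type == "GPU"]

if __name__ == "__main__":
    gpus = visible_gpus()
    if gpus is None:
        print("TensorFlow is not installed in this environment")
    elif not gpus:
        print("No GPU visible -- likely the CPU-only tensorflow package is active")
    else:
        print("GPUs visible to TensorFlow:", gpus)
```

If this prints an empty result, onmt-main will silently fall back to CPU, which matches the logs above.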


(Voja K) #3

So, can we be sure this means it’s not using the GPU?
I have both tensorflow and tensorflow-gpu installed.
I tried both installing via pip install OpenNMT-tf and running bin/main directly.
Both give the same output.


(Guillaume Klein) #4

Well, you can also run nvidia-smi to check if the GPU is being used. Make sure to only install tensorflow-gpu.
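A possible cleanup along those lines, assuming a pip-based install (TF 1.x package naming, as used in this thread; adjust for your virtualenv):

```shell
# Remove both packages so the CPU-only build cannot shadow the GPU build,
# then reinstall only the GPU build.
pip uninstall -y tensorflow tensorflow-gpu
pip install tensorflow-gpu

# While training runs, refresh GPU utilization and memory usage every second.
nvidia-smi -l 1
```

If the GPU is actually in use, the onmt-main process should appear in the nvidia-smi process list with allocated memory and non-zero utilization.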


(Guillaume Klein) #5

A post was split to a new topic: GPU is not used during training