Error when building CTranslate

I’m getting an error when building CTranslate. I was able to point it to my Eigen library, and I installed Boost via:
$ sudo apt-get install libboost-all-dev

But I’m getting this build error:

devinbost@DevinsNeuralNet:~/src/CTranslate/build$ cmake … -DEIGEN_ROOT='/home/devinbost/Downloads/eigen-eigen-5a0156e40feb/'
-- Build type: Release
-- Boost version: 1.58.0
-- Found the following Boost libraries:
--   program_options
-- Found Eigen3: /home/devinbost/Downloads/eigen-eigen-5a0156e40feb (Required is at least version "3.3")
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
-- Found CUDA: /usr/local/cuda (found suitable version "8.0", minimum required is "6.5")
-- Boost version: 1.58.0
-- Found the following Boost libraries:
--   program_options
-- Configuring done
-- Generating done
-- Build files have been written to: /home/devinbost/src/CTranslate/build
devinbost@DevinsNeuralNet:~/src/CTranslate/build$ make
Scanning dependencies of target OpenNMTTokenizer
[ 3%] Building CXX object lib/tokenizer/CMakeFiles/OpenNMTTokenizer.dir/src/
[ 6%] Building CXX object lib/tokenizer/CMakeFiles/OpenNMTTokenizer.dir/src/
[ 9%] Building CXX object lib/tokenizer/CMakeFiles/OpenNMTTokenizer.dir/src/
[ 12%] Building CXX object lib/tokenizer/CMakeFiles/OpenNMTTokenizer.dir/src/
[ 15%] Building CXX object lib/tokenizer/CMakeFiles/OpenNMTTokenizer.dir/src/
[ 18%] Building CXX object lib/tokenizer/CMakeFiles/OpenNMTTokenizer.dir/src/unicode/
[ 21%] Building CXX object lib/tokenizer/CMakeFiles/OpenNMTTokenizer.dir/src/unicode/
[ 24%] Linking CXX shared library
[ 24%] Built target OpenNMTTokenizer
Scanning dependencies of target TH
[ 27%] Building C object lib/TH/CMakeFiles/TH.dir/THGeneral.c.o
[ 30%] Building C object lib/TH/CMakeFiles/TH.dir/THFile.c.o
[ 33%] Building C object lib/TH/CMakeFiles/TH.dir/THDiskFile.c.o
[ 36%] Linking C shared library
[ 36%] Built target TH
[ 39%] Building NVCC (Device) object CMakeFiles/onmt.dir/src/cuda/
nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning).
/home/devinbost/src/CTranslate/src/cuda/ fatal error: onmt/cuda/Kernels.cuh: No such file or directory
compilation terminated.
CMake Error at (message):
Error generating
CMakeFiles/onmt.dir/build.make:63: recipe for target 'CMakeFiles/onmt.dir/src/cuda/' failed
make[2]: *** [CMakeFiles/onmt.dir/src/cuda/] Error 1
CMakeFiles/Makefile2:68: recipe for target 'CMakeFiles/onmt.dir/all' failed
make[1]: *** [CMakeFiles/onmt.dir/all] Error 2
Makefile:127: recipe for target 'all' failed
make: *** [all] Error 2

Any ideas?

Could you try adding:


just above line 73 in the CMakeLists.txt file and rebuilding?

That works! Thanks!

My understanding of compiling C/C++ code on Linux is limited. If I also want to link against Intel MKL, the instructions provided here say to use EIGEN_USE_MKL_ALL. Does that also go into the CMakeLists.txt file, or somewhere else?

Also, I read that I can use the Intel MKL Linking Advisor, but I’m not sure how to answer all of its questions. How do I find out whether we’re using static or dynamic linking? And what about the cluster libraries?

Sorry if these questions reveal a substantial gap in my knowledge of this subject. If anyone can suggest reading or study material to help me learn it better, I’d appreciate that as well.

That is actually a bit strange. What is your nvcc --version?

Yes, with add_definitions(-DEIGEN_USE_MKL_ALL).
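For context, a minimal sketch of what that looks like inside CMakeLists.txt; the exact placement shown here is an assumption, since the right spot depends on your checkout:

```cmake
# Hypothetical placement, near the project's other global compile settings.
# EIGEN_USE_MKL_ALL makes Eigen dispatch its supported kernels to Intel MKL.
add_definitions(-DEIGEN_USE_MKL_ALL)
```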

You probably want to use dynamic linking and no cluster libraries. Then you can directly extend the CMAKE_CXX_FLAGS variable with the given command line or use CMake primitives.
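As a sketch of extending CMAKE_CXX_FLAGS, assuming a 64-bit Linux build with GCC, dynamic linking, the sequential MKL layer, and an MKLROOT environment variable set by Intel's setup script; the exact link line should come from the Link Line Advisor for your own configuration:

```cmake
# Add the MKL include path to the compile flags (assumed MKLROOT layout).
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -m64 -I$ENV{MKLROOT}/include")

# Dynamic link line for the sequential MKL layer, as the Link Line
# Advisor suggests for this (assumed) configuration.
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -L$ENV{MKLROOT}/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm -ldl")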

I added an Intel MKL finder to the CMakeLists.txt. It should discover Intel MKL and link against it automatically.

Let me know how it works for you.

You are awesome!

Interestingly, when I ran:
$ nvcc --version

I got:

"The program 'nvcc' is currently not installed. You can install it by typing:
sudo apt install nvidia-cuda-toolkit"

I know CUDA is installed, because otherwise I wouldn’t be able to do GPU training at all. So is this another library that needs to be installed in addition to CUDA, or could it be an issue such as needing to add a directory to my PATH?

I’ll check out the Intel MKL finder once I have a moment to do that.
Have you determined how significant a performance impact Intel MKL makes?

Most likely. It is usually in /usr/local/cuda-8.0/bin/.
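For reference, a minimal sketch of exposing nvcc, assuming CUDA 8.0 in its default /usr/local/cuda-8.0 prefix:

```shell
# Prepend the CUDA 8.0 bin directory to PATH for the current shell
# (assuming the default install prefix; adjust the version suffix if needed).
export PATH=/usr/local/cuda-8.0/bin:$PATH
```

To make the change persist across sessions, append the same line to ~/.profile and run `source ~/.profile`.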

I should rerun my experiments. Last time I did, the gains were small but consistent. The Eigen backend is already well optimized, but Intel MKL is worth using when cross-compiling or when -march=native is not an option.

Sure enough, that directory was not part of my PATH.
So, I added it to my ~/.profile file.
I then ran:
$ source ~/.profile
and now when I run:
$ nvcc --version
I get:

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2016 NVIDIA Corporation
Built on Tue_Jan_10_13:22:03_CST_2017
Cuda compilation tools, release 8.0, V8.0.61