Server issue with Apache but not with mod_wsgi-express

Hi,
I’m trying to run onmt_server behind Apache 2. However, it does not load the models onto the GPU, even though Apache communicates correctly with the WSGI script. This is the Apache error log:

[Thu Nov 19 12:02:46.084823 2020] [wsgi:error] [pid 9:tid 140573004527360] Pre-loading model 100
[Thu Nov 19 12:02:46.089834 2020] [wsgi:error] [pid 9:tid 140573004527360] [2020-11-19 12:02:46,089 INFO] Loading tokenizer

On the other hand, if I run onmt_server with mod_wsgi-express, it works and the models are loaded onto the GPU:

Pre-loading model 100
[2020-11-19 12:05:24,009 INFO] Loading tokenizer
[2020-11-19 12:05:24,074 INFO] Loading model 100
Performed a dummy translation to initialize the model ([[-2.4049062728881836]], [['a']])
Serving on http://0.0.0.0:5000

Under Apache, the log stops right after “Loading tokenizer” and never reaches “Loading model”, so something seems to go wrong with model loading when the server runs inside Apache…

This is the WSGI script:

import sys

# Make the onmt_server entry point importable
sys.path.insert(0, '/usr/local/lib/python3.6/dist-packages/onmt/bin/')

from server import start

# Start the translation server with its model configuration
application = start("/onmt/config/onmt_models.config", url_root="/translator")
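For completeness, this is roughly how the script is wired into Apache; the directive values below are illustrative placeholders, not my exact vhost config. I mention it because CUDA state is known to behave badly across forked sub-interpreters, so pinning the app to a single daemon process in the main interpreter is a setting that might matter here:

```apache
# Illustrative mod_wsgi setup (process name and script path are placeholders).
# A single daemon process running in the main (global) interpreter avoids
# problems with CUDA initialization in forked sub-interpreters.
WSGIDaemonProcess translator processes=1 threads=4
WSGIApplicationGroup %{GLOBAL}
WSGIScriptAlias /translator /onmt/wsgi.py process-group=translator
```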

Any clues?