Unable to create CTranslate2 translator in AWS Lambda

Currently I am unable to run the CTranslate2 translator in AWS Lambda. I am positive the model files exist and that ctranslate2 has been installed correctly. I am running on CPU with a quantized int8 model, and everything works until I try to initialize the model, at which point the following error is thrown:

OMP: Error #179: Function Can't open SHM2 failed:
OMP: System error #38: Function not implemented

My code to initialize it is pretty simple, but it throws an OMP error. I have tried loading models with Hugging Face and they work just fine; the problem seems specific to ctranslate2.

import ctranslate2

model = ctranslate2.Translator("./model", device="cpu", compute_type="auto")
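
For context, here is a minimal sketch of how the translator would be used once it loads (the token list below is a placeholder; a real handler would run the model's tokenizer on the event input first):

import ctranslate2

# Load once at module scope so warm Lambda invocations reuse the model.
translator = ctranslate2.Translator("./model", device="cpu", compute_type="auto")

def handler(event, context):
    # Placeholder tokens; a real handler would tokenize the input text here.
    source = [["▁Hello", "▁world"]]
    results = translator.translate_batch(source)
    # Each result holds n-best hypotheses as lists of target tokens.
    return {"tokens": results[0].hypotheses[0]}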

I searched around and found it may be related to multithreading, but AWS Lambda documentation says multithreading is supported. Please advise what can be done, if anything.

As far as I understand, AWS Lambda does not provide the /dev/shm directory, which OpenMP (OMP) appears to rely on.
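
To confirm, a quick check from inside a handler (a minimal sketch; this prints True on a typical Linux machine but should print False in the Lambda execution environment):

import os

# Lambda provides /tmp for scratch space but does not mount /dev/shm,
# the shared-memory directory OpenMP tries to open here.
print(os.path.exists("/dev/shm"))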

I did not find an easy workaround at this time. I would suggest using EC2 instances, which are better suited to this application in terms of features and performance.

Cool, will definitely do so, thanks.

Following up regarding EC2: does ctranslate2 currently support ARM processors for Python? If so, would I just install via PyPI, or would I need to build from source with oneDNN?

Thanks!

ARM processors are supported, and we have released binary Python wheels on PyPI, so you can just install with pip install ctranslate2.
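
For example, after installing on an ARM instance you can verify that the wheel imports correctly (a minimal check):

import ctranslate2

# If the ARM wheel installed correctly, this prints the package version.
print(ctranslate2.__version__)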

For reference, this was fixed in version 2.24.0. See this issue for more details: