Type of default compute mode

The function ctranslate2.Translator has a “default” compute type. The question is: which of the types “int8”, “int8_float16”, “int16”, “float16”, or “float” do the “default” and “auto” modes correspond to?


“default” is the type you used while converting the model, if it is supported on your hardware. More on this can be found here.
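To make that resolution concrete, here is a toy sketch (my own simplification, not CTranslate2’s actual internals) of how a requested compute type could resolve against the type the model was saved with and the types the device supports:

```python
# Toy sketch of compute-type resolution (NOT CTranslate2 source code).
def resolve_compute_type(requested, saved_type, supported):
    """requested: the value passed as compute_type when loading the model;
    saved_type: the quantization used at conversion time;
    supported: set of types the current device can actually run."""
    if requested == "default":
        # Keep the type the model was converted with, if the device supports it.
        candidate = saved_type
    elif requested == "auto":
        # Pick a fast supported type (simplified preference order).
        for t in ("int8_float16", "int8", "float16", "int16", "float"):
            if t in supported:
                return t
        return "float"
    else:
        candidate = requested
    # Fall back to full precision when the candidate is unsupported.
    return candidate if candidate in supported else "float"

print(resolve_compute_type("default", "int8", {"int8", "float"}))     # int8
print(resolve_compute_type("default", "float16", {"int8", "float"}))  # float
```

So with “default”, an int8-converted model stays int8 on hardware that supports it, and silently falls back to full precision otherwise.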


As I understand it, the default type is the type of the saved model. We train our models with OpenNMT-tf, so you can predict my next question: where do I set the type of my layers when training my model?

We are not talking here about the OpenNMT-tf model; we are talking about the CTranslate2 model, i.e. after conversion. So you train the model as usual with OpenNMT-tf, and then convert the OpenNMT-tf model to the CTranslate2 format, adding the flag --quantization int8 for example to your conversion command. The link in the previous reply has more details.

Alternatively, you can save the model to the CTranslate2 format during training. See the parameters export_on_best and export_format here.
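For reference, a minimal sketch of what those options might look like in an OpenNMT-tf YAML configuration (the metric name and surrounding structure here are illustrative, so check your own config against the linked documentation):

```yaml
eval:
  # Export a new model each time the chosen metric improves.
  export_on_best: bleu
  # Export directly in the CTranslate2 format instead of a TensorFlow SavedModel.
  export_format: ctranslate2
```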


You’re right. But my question is about what format results from the default quantization mode when converting to CT2 (--quantization default).

To clarify: I am now deciding whether to leave the default mode or use a specific mode like --quantization int8 in my scripts. Maybe --quantization default is more useful, you know.


There is no such thing. The quantization types available when converting the model are only int8, int8_float16, int16, and float16. “default”, on the other hand, is one of the options for loading the already converted model. You can read the sections “When converting the model” and “When loading the model” to see the difference. Check also this Python API description to learn more about how to use compute_type.

Also, you can try the following on your machine:

import ctranslate2
print(ctranslate2.get_supported_compute_types("cpu", 0))
print(ctranslate2.get_supported_compute_types("cuda", 0))  # i.e. gpu

The printed values are the compute types your machine supports, so you know which values you can meaningfully use when converting and loading models on it.

I hope this helps.

Kind regards,

This is the way I do it - it’s very straightforward and works every time 🙂