DesktopTranslator: Windows GUI Executable based on CTranslate2

Dear Liu!

Please try device="cuda"

Kind regards,
Yasmin


Thank you, Yasmin!

I tried device="cuda", but the program failed with the following error:

Warning : load_model does not return WordVectorModel or SupervisedModel any more, but a FastText object which is very similar.
Exception in Tkinter callback
Traceback (most recent call last):
  File "D:\Python\Python38\lib\tkinter\__init__.py", line 1883, in __call__
    return self.func(*args)
  File "D:/kidden/mt/open/DesktopTranslator/translator.py", line 479, in translate_input
    translations_tok = self.translator.translate_batch(
RuntimeError: Library cublas64_11.dll is not found or cannot be loaded

Kind regards,
Liu Xiaofeng

Dear Liu,

Does the app work well with "cpu"? If so, could you please first try to debug the "cuda" issue independently?

If you run the following code in Python, what do you get? Replace "ctranslate2_model" with the path to a CTranslate2 model. Please try the code once with device="cpu" and once with device="cuda".

import ctranslate2

translator = ctranslate2.Translator("ctranslate2_model", device="cuda")
batch = [["▁H", "ello", "▁world", "!"]]
translator.translate_batch(batch)

Kind regards,
Yasmin

Hi, Yasmin

Yes, I can run the app with "cpu".

The code runs with "cpu" but fails with "cuda". The error is as follows:

Traceback (most recent call last):
  File "D:/kidden/mt/open/mt-ex/temp/test_ct2.py", line 5, in <module>
    translator.translate_batch(batch)
RuntimeError: Library cublas64_11.dll is not found or cannot be loaded

I ran it on Windows 10 with a GPU. My GPU setup should be fine, because the CTranslate2 model was trained and converted on the same machine.

Can you run it on Windows with device="cuda"?

Thanks!

Thanks! Kindly check this issue. I am adding @guillaumekln for more insights.


The CUDA toolkit should be installed on the system in order to use the GPU:

Any CUDA version >= 11.2 should work.


Thank you, Yasmin and Guillaume!

Indeed, it was because of the CUDA version; I had installed CUDA 10.1 for my GPU.

By the way, this forum is so great and I learned a lot from it.


Do you happen to have the WMT19 EN-ZH and ZH-EN scores?

I am curious to see if the M2M models have the same issue as the NLLB200 on those CJK languages.

Thanks

Code update:

Currently, it should be out_type=str (i.e. the type str, without quotes), or use sp.encode_as_pieces() instead.
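For illustration, the corrected call looks like this (a sketch; "spm_model_path" is a placeholder for the SentencePiece model file the script loads):

```python
import sentencepiece as spm

# "spm_model_path" is a placeholder for the actual SentencePiece model file.
sp = spm.SentencePieceProcessor(model_file="spm_model_path")

# out_type=str passes the built-in type str, not the string "str".
pieces = sp.encode("Hello world!", out_type=str)

# Equivalent older-style call:
pieces = sp.encode_as_pieces("Hello world!")
```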

The up-to-date version is always here:

Hi Vincent!

Here you go, English-to-Chinese results on the TICO-19 dataset:

M2M-100 1.2B:
spBLEU: 28.07
ChrF++: 36.38
TER: 101.31
COMET: 52.22

NLLB-200 1.2B:
spBLEU: 29.02
ChrF++: 37.45
TER: 110.22
COMET: 50.05

NLLB-200 3.3B:
spBLEU: 31.35
ChrF++: 39.08
TER: 109.52
COMET: 53.89

Kind regards,
Yasmin

Which way is it? (Reminder: for NLLB, they took ZH-EN to score EN-ZH; the test sets are different.)

English-to-Chinese. These are results on the TICO-19 dataset, to be comparable to the results here. I edited the previous reply to make this clearer.

OK, my question was about the WMT19 test set (I fear TICO-19 is the same in both directions, which is not really good for comparison).

Hi, Yasmin
Thank you so much for writing this program! It has helped me a lot!
I have a few questions. I am running this program on Windows, using the M2M100 1.2B model, and translating on the CPU. The translation speed is about 10 splits/s. How can I improve this speed? Would enhancing the single-core performance of the CPU be effective?
Also, sometimes when translating paragraphs of over 600 words, there is no translation output, and the software becomes unresponsive during the translation process (mostly when the progress is around 70%). Is this caused by the weak performance of the CPU?
Thank you!


Hello! The M2M100 1.2B model is really heavy, and the quality depends on the language. Instead, you can use a bilingual model from OPUS; the speed should be better. OPUS multilingual models will not work (without code changes), as they require adding language tags. You can download OPUS models at:

Important: You must convert an OPUS model to the CTranslate2 format first. Example command:

ct2-opus-mt-converter --model_dir opus_model_dir --output_dir ct2_model_dir --quantization int8
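On the unresponsiveness with long paragraphs: feeding the whole text as one huge batch can make the window appear frozen. Translating in smaller batches (and updating the progress bar in between) keeps the app responsive. A minimal sketch, where make_batches is a hypothetical helper and not part of DesktopTranslator:

```python
def make_batches(tokenized_sentences, max_batch_size=16):
    """Split tokenized sentences into fixed-size batches.

    Translating batch by batch, and letting the UI update in between,
    avoids submitting one huge batch that makes the window appear frozen.
    """
    for start in range(0, len(tokenized_sentences), max_batch_size):
        yield tokenized_sentences[start:start + max_batch_size]

# Hypothetical usage with a CTranslate2 translator:
# for batch in make_batches(all_sentences):
#     results = translator.translate_batch(batch)
#     # ...update the progress bar here...
```

On the speed side, CTranslate2's Translator also accepts inter_threads and intra_threads arguments on the CPU, and int8 quantization (as in the conversion command above) can help throughput more than raw single-core performance.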

I hope this helps.

All the best,
Yasmin