DesktopTranslator: Windows GUI Executable based on CTranslate2

Dear Muhammad,

Models need to be in the CTranslate2 format.

As for the French-to-English model, you can find a recent version in the CTranslate2 format here.

As for the English-to-Arabic model, this was an experimental model trained on about 400k segments from MS Terminology. It used RNN-LSTM, not the Transformer model, so it cannot be converted to the CTranslate2 format.

For training an English-to-Arabic model, I would recommend using enough data from OPUS (though perhaps avoiding crawled corpora) and applying the Transformer model. I am working on a new English-to-Arabic model, and I can publish it once it is finished.


Domain Adaptation

For Domain Adaptation, i.e. creating specialized models, one needs a good baseline model trained on enough (general) data, which is then fine-tuned on in-domain data. This is because in-domain data is usually scarce and might not be enough to train a strong model from scratch. There are multiple approaches to Domain Adaptation. For example, I explained Mixed Fine-tuning (Chu et al., 2017) in this blog.
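To make the mixed fine-tuning idea more concrete, here is a minimal data-mixing sketch. The function name and the simple oversampling factor are illustrative, not the paper's exact recipe: the small in-domain corpus is oversampled to roughly the size of the general corpus, and the two are shuffled together for fine-tuning.

```python
import random

def mixed_finetuning_corpus(general, in_domain, seed=0):
    """Oversample the small in-domain corpus to roughly the size of
    the general corpus, then shuffle the concatenation (the core
    data-mixing idea behind mixed fine-tuning)."""
    factor = max(1, len(general) // max(1, len(in_domain)))
    mixed = list(general) + list(in_domain) * factor
    random.Random(seed).shuffle(mixed)
    return mixed

# Toy example: 1,000 general segments, 50 in-domain segments.
general = [f"general-{i}" for i in range(1000)]
in_domain = [f"domain-{i}" for i in range(50)]
mixed = mixed_finetuning_corpus(general, in_domain)
```

With these toy sizes, each in-domain segment appears 20 times in the mixed corpus, so the fine-tuning batches see general and in-domain data in roughly equal proportion.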


Pre-trained Models

Nowadays, you can find a lot of pre-trained models. Obviously, not all of them are of good quality, but you can try.

  • M2M-100 model supports 100 languages, including Arabic. You can find a CTranslate2 version of it that you can use in DesktopTranslator here.
  • Argos Translate models: Argos Translate is another good tool. It also supports CTranslate2 models. So you can download the model you want from the list of models. Then, change the extension to zip and extract it. You will find the CTranslate2 model and SentencePiece model, that you can use in DesktopTranslator as well.
  • Hugging Face models. However, most likely these should be used with the transformers library.
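The Argos extraction step above can be sketched in Python. The helper name is illustrative; the key point is that an .argosmodel file is just a ZIP archive, so zipfile can open it directly (renaming it to .zip is only needed for GUI archive tools):

```python
import zipfile
from pathlib import Path

def extract_argos_model(model_path: str, out_dir: str) -> list:
    """Extract an .argosmodel file (a ZIP archive) into out_dir and
    return the list of member names, which should include the
    CTranslate2 model and the SentencePiece model."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(model_path) as z:
        z.extractall(out_dir)
        return z.namelist()
```

After extraction, point DesktopTranslator at the extracted CTranslate2 model directory and SentencePiece model file.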

I hope this helps. If you have more questions, please let me know.

Kind regards,
Yasmin

You may also be interested in the latest CTranslate2 version, which added a converter for the 1000+ pretrained models from OPUS-MT. See the “Marian” example in the quickstart.


This is great news. Thanks a lot, Guillaume! I see you also added support for mBART.

This is good and timely news for me. Thanks! :)

Thank you for your wonderful work!

I have a GPU and use your DesktopTranslator on Windows 10. I want to use CTranslate2 with the GPU, so I changed your code as follows:

self.translator = ctranslate2.Translator(
    self.model_dir,
    device="gpu"
)

It doesn’t work.

Does CTranslate2 support GPU on Windows?

Thanks!

Dear Liu!

Please try device="cuda" instead.

Kind regards,
Yasmin


Thank you, Yasmin!

I tried device="cuda", but the program didn’t work and returned the following error:

Warning: load_model does not return WordVectorModel or SupervisedModel any more, but a FastText object which is very similar.
Exception in Tkinter callback
Traceback (most recent call last):
  File "D:\Python\Python38\lib\tkinter\__init__.py", line 1883, in __call__
    return self.func(*args)
  File "D:/kidden/mt/open/DesktopTranslator/translator.py", line 479, in translate_input
    translations_tok = self.translator.translate_batch(
RuntimeError: Library cublas64_11.dll is not found or cannot be loaded

Kind regards,
Liu Xiaofeng

Dear Liu,

Does the app work well with "cpu"? If so, could you please try to fix the "cuda" issue independently first?

If you run the following code in Python, what do you get? Replace "ctranslate2_model" with the path to a CTranslate2 model. Please try the code once with device="cpu" and once with device="cuda".

import ctranslate2

translator = ctranslate2.Translator("ctranslate2_model", device="cuda")
batch = [["▁H", "ello", "▁world", "!"]]
translator.translate_batch(batch)

Kind regards,
Yasmin

Hi, Yasmin

Yes, I can run the app with "cpu".

The code runs with "cpu" and fails with "cuda". The error is as follows:

Traceback (most recent call last):
  File "D:/kidden/mt/open/mt-ex/temp/test_ct2.py", line 5, in <module>
    translator.translate_batch(batch)
RuntimeError: Library cublas64_11.dll is not found or cannot be loaded

I ran it on Windows 10 with a GPU. My GPU setup should be fine, because the CTranslate2 model was trained and converted on it.

Can you run it on Windows with device="cuda"?

Thanks!

Thanks! Kindly check this issue. I am adding @guillaumekln for more insights.


The CUDA toolkit should be installed on the system in order to use the GPU:

Any CUDA version >= 11.2 should work.
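If in doubt, a quick way to check whether CTranslate2 can see the GPU before loading a model is `ctranslate2.get_cuda_device_count()`. A short sketch (the ImportError fallback just covers machines where ctranslate2 is not installed):

```python
# Check whether CTranslate2 can see a CUDA device before loading a model.
try:
    import ctranslate2
    n_gpus = ctranslate2.get_cuda_device_count()
except ImportError:
    n_gpus = 0  # ctranslate2 not installed in this environment
device = "cuda" if n_gpus > 0 else "cpu"
print(f"CUDA devices found: {n_gpus}; using device='{device}'")
```

If this reports 0 devices on a machine with a GPU, the CUDA runtime libraries (such as cublas64_11.dll) are likely missing or too old.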


Thank you, Yasmin and Guillaume!

Indeed, it was because of the CUDA version. I had installed CUDA 10.1 for my GPU.

By the way, this forum is so great and I learned a lot from it.


Do you happen to have the WMT19 EN-ZH and ZH-EN scores?

I am curious to see if the M2M models have the same issue as the NLLB200 on those CJK languages.

Thanks

Code update:

Currently, it should be out_type=str, i.e. the built-in type str without quotes, or you can use sp.encode_as_pieces() instead.

The up-to-date version is always here:

Hi Vincent!

Here you go, English-to-Chinese results on the TICO-19 dataset:

M2M-100 1.2B:
spBLEU: 28.07
ChrF++: 36.38
TER: 101.31
COMET: 52.22

NLLB-200 1.2B:
spBLEU: 29.02
ChrF++: 37.45
TER: 110.22
COMET: 50.05

NLLB-200 3.3B:
spBLEU: 31.35
ChrF++: 39.08
TER: 109.52
COMET: 53.89

Kind regards,
Yasmin

Which way is it? (Reminder: for NLLB they took ZH-EN to score EN-ZH; the test sets are different.)

English-to-Chinese, with results on the TICO-19 dataset so they are comparable to the results here. I edited the previous reply to make this clearer.

OK, my question was about the WMT19 test set (I fear TICO-19 is the same in both directions, which is not really good for comparison).

Hi, Yasmin
Thank you so much for writing this program! It has helped me a lot!
I have a few questions. I am running this program on Windows, using the M2M100 1.2b model, and using CPU for translation. The translation speed is about 10 split/s. How can I improve this speed? Is enhancing the single-core performance of the CPU effective?
Also, sometimes when translating paragraphs over 600 words, there is no translation output, and the software becomes unresponsive during the translation process (mostly when the progress is around 70%). Is this caused by the weak performance of the CPU?
Thank you!


Hello! The M2M100 1.2b model is really heavy, and the quality depends on the language. Instead, you can use a bilingual model from OPUS; the speed should be better. OPUS multilingual models will not work (without code change) as they require adding language tags. You can download OPUS models at:

Important: You must convert an OPUS model to the CTranslate2 format first. Example command:

ct2-opus-mt-converter --model_dir opus_model_dir --output_dir ct2_model_dir --quantization int8

I hope this helps.

All the best,
Yasmin