Hi all,
My goal is to translate some long sentences but it seems there is a limit that I can’t get through. So:
- Is `maximum_decoding_length` in OpenNMT-tf's parameters applied during training or inference?
- Does the `null` default value for `maximum_features_length` and `maximum_labels_length` mean that no filtering is done at all on the training set, so very long sentences are indeed used during training?
Also, do all three of the above parameters refer to numbers of tokens (words/subwords)?
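To make sure we're talking about the same options, here is how I understand the relevant sections of the run configuration, written as the Python equivalent of the YAML (values are placeholders, and my reading of the defaults may well be wrong):

```python
# My understanding of where these options live (placeholders, not my real
# config). In the YAML file they sit under the same section names.
config = {
    "params": {
        # Question 1: applied during training or inference?
        "maximum_decoding_length": 250,
    },
    "train": {
        # Question 2: does the null default mean no length filtering at all,
        # i.e. very long sentences are kept during training?
        "maximum_features_length": None,
        "maximum_labels_length": None,
    },
}
```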
I have trained a model without touching these values, and I'm now trying to translate a few moderately long sentences, but the output is cut at some limit no matter how high I set `max_decoding_length` at inference time (I'm using CTranslate2). My training set contains sentences much longer than the ones I'm trying to translate.
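Here is essentially what I run for inference (a minimal sketch; the model path and the input tokens are placeholders, and the real input is tokenized with the model's subword vocabulary):

```python
import ctranslate2

# Placeholder model path; the real model was converted from OpenNMT-tf.
translator = ctranslate2.Translator("model_ct2/", device="cpu")

# One already-tokenized source sentence (placeholder tokens).
tokens = "this is one of my moderately long test sentences".split()

results = translator.translate_batch(
    [tokens],
    max_decoding_length=1024,  # raising this further changes nothing
    beam_size=2,
)
print(" ".join(results[0].hypotheses[0]))
```

Is there some other length limit I should be looking at?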