Hi all,
My goal is to translate some long sentences, but there seems to be a limit that I can't get past. So:
- Is `maximum_decoding_length` in OpenNMT-tf's parameters applied during training or during inference?
- Does the `null` default value for `maximum_features_length` and `maximum_labels_length` mean that no length filtering is done on the training set, so very long sentences are indeed used during training?
Also, do all three of the above parameters refer to the number of tokens (words/subwords)?
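To make the question concrete, here is how I currently read the three options, written as the Python-dict equivalent of the YAML config; the values are illustrative and the comments are my assumptions, which is exactly what I'd like confirmed:

```python
# Sketch of the relevant OpenNMT-tf config sections (Python-dict form
# of the YAML). Values are illustrative, not recommendations.
config = {
    "params": {
        # Assumption: caps the number of generated target tokens per
        # sentence, presumably at inference time?
        "maximum_decoding_length": 250,
    },
    "train": {
        # Assumption: training-time filters on source/target token
        # counts; the default null/None would mean no filtering at all?
        "maximum_features_length": None,
        "maximum_labels_length": None,
    },
}
```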
I trained a model without touching these values, and I'm now trying to translate a few moderately long sentences, but I can't get past a certain output length, no matter how high I set `max_decoding_length` for inference (I'm using CTranslate2). My training set contains sentences much longer than the ones I'm trying to translate.
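For completeness, a minimal sketch of my inference call; `ct2_model/` is a placeholder for my converted model directory and the tokens are just an example (the exact result API can differ a bit between CTranslate2 versions):

```python
import ctranslate2

translator = ctranslate2.Translator("ct2_model/")  # placeholder model path

# CTranslate2 expects pre-tokenized input (here, example subword tokens).
source = [["▁This", "▁is", "▁a", "▁long", "▁sentence", "."]]

results = translator.translate_batch(
    source,
    max_decoding_length=1024,  # raising this doesn't get me past the limit
)
print(results[0].hypotheses[0])  # best hypothesis as a list of tokens
```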