Hi! I am trying to tokenize at special characters using OpenNMT-py. I know I can do that with SentencePiece, but I don't know how to apply SentencePiece to a pre-existing model in OpenNMT-py. I have a pre-trained model and I want to use SentencePiece with it. Can anyone help me with that?
Are you translating files or running the REST server?