I was wondering if there is some way we can apply adaptive learning in OpenNMT so that we can improve and adapt our model as people start using it?
Hi,
Can you be more specific about what you mean by adaptive learning? Is it to retrain the model with revised or new translations coming from users?
Well, let me put my case here - I was also wondering about it.
I want to take an already trained out-of-domain model and continue training it on a small in-domain dataset.
- the in-domain data could be used as both the training and validation set, or just as the validation set.
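For what it's worth, here is a minimal sketch of that kind of continued training, assuming OpenNMT-py and its train.py command line (the option values and file names below are illustrative, not from this thread; check `python train.py -h` for the exact options in your version):

```python
# Sketch only: continue training an out-of-domain checkpoint on preprocessed
# in-domain data. Assumes OpenNMT-py's train.py CLI; verify the option names
# against your installed version. File names are hypothetical.
import subprocess

subprocess.run(
    [
        "python", "train.py",
        "-data", "in_domain",               # prefix of the preprocessed in-domain data (hypothetical)
        "-train_from", "generic_model.pt",  # checkpoint of the already trained out-of-domain model
        "-save_model", "adapted_model",     # where to write the adapted checkpoints
        "-train_steps", "205000",           # a few thousand steps beyond the original checkpoint
        "-learning_rate", "0.05",           # usually lowered when fine-tuning on a small set
        "-valid_steps", "500",              # validate often on the in-domain dev set
    ],
    check=True,
)
```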
I think we’re overloading “adaptive” here…
@Jean is talking about something akin to “domain adaptation”, which several people (myself included) have had success with.
I think @adroit9153 is talking about using user feedback (post-edits) to update the model in (near) real-time. You certainly could do that one post-edit at a time, but I’d be tempted to batch them - wait until you have at least 1000 before updating your model, unless your model is really underperforming on your dataset.
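To make the batching idea concrete, here is a rough, framework-agnostic sketch in plain Python: post-edits go into a buffer file, and a fine-tuning run is only launched once a threshold (say 1000 segments) is reached. `launch_finetuning` is just a placeholder for whatever retraining command you actually use:

```python
# Rough sketch of buffering post-edits and retraining in batches.
# Nothing here is OpenNMT-specific; `launch_finetuning` is a placeholder
# for your actual continued-training command.
from pathlib import Path

BUFFER_PATH = Path("postedits.tsv")
THRESHOLD = 1000  # wait for at least this many post-edits before updating the model

def record_postedit(source: str, post_edit: str) -> None:
    """Append one post-edited segment and retrain once enough have accumulated."""
    with BUFFER_PATH.open("a", encoding="utf-8") as f:
        f.write(f"{source}\t{post_edit}\n")
    with BUFFER_PATH.open(encoding="utf-8") as f:
        count = sum(1 for _ in f)
    if count >= THRESHOLD:
        launch_finetuning(BUFFER_PATH)  # placeholder: kick off a fine-tuning run
        BUFFER_PATH.unlink()            # start a fresh buffer afterwards

def launch_finetuning(data_path: Path) -> None:
    # Placeholder: convert the buffer to your training format and run the
    # continued-training command for your OpenNMT version.
    raise NotImplementedError
```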
Hi @guillaumekln, @dbl
Yes, I am talking about using post-edits (user feedback) to modify the model and make it more accurate for a particular product.
I have considered retraining the model after a batch of strings, but I feel there should be a better way to feed the corrections back into the model and give users the improvement instantly.
Let me know your thoughts.
There’s nothing to stop you from updating your model with each post-edited segment; I just don’t know how efficient it would be. If you’re using one of the server options, you’d need to restart it with your updated model after every translation… Doable, but not very practical in a production environment, in my opinion.
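Just to illustrate what a per-segment update could look like, here is a generic PyTorch-style sketch (this is not OpenNMT’s actual training or server API; `model`, `encode_pair`, and `loss_fn` are hypothetical stand-ins for your own model, preprocessing, and loss). After each step you would redeploy the saved checkpoint, which is exactly the impractical part:

```python
# Generic sketch of a per-segment online update; not OpenNMT's actual API.
# The model, preprocessing, and loss passed in are hypothetical stand-ins.
import torch

def update_on_postedit(model, optimizer, encode_pair, loss_fn,
                       source, post_edit, checkpoint_path="online_model.pt"):
    """Take a single gradient step on one post-edited pair, then save a checkpoint.

    The translation server would then be restarted with `checkpoint_path`.
    """
    src_tensor, tgt_tensor = encode_pair(source, post_edit)    # hypothetical preprocessing
    model.train()
    optimizer.zero_grad()
    loss = loss_fn(model(src_tensor, tgt_tensor), tgt_tensor)  # one example, one step
    loss.backward()
    optimizer.step()
    torch.save(model.state_dict(), checkpoint_path)
    return float(loss)
```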
Thank you, I found it!
Actually, “adaptive” is also sometimes used to refer to the interactive MT offered by Lilt and SDL Trados Studio 2017, which is a kind of predictive typing powered by SMT: the system initially offers an MT translation for a sentence, but the user can accept only part of the suggestion and make manual changes, and the system then suggests a new translation based on what has been accepted so far.
Yes, that’s true. Adaptive in the interactive sense, but that didn’t sound like what the OP had in mind, unless I’m mistaken.
Do Lilt or SDL offer this with neural engines? I had thought they were only doing so with SMT. Not that it would be impossible, just a very different implementation.
I think this would be easy to implement for someone comfortable with the ONMT code.
I would be interested in ways to achieve such a predictive translation server. Can someone already familiar with the ONMT code give some first hints on where to start in the existing translate implementation?
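As a rough first hint: the core change for such a predictive server is to force the decoder through the prefix the user has already accepted and only decode freely after it. A minimal greedy-decoding sketch of that idea follows (the `model.encode` and `model.decode_step` calls are hypothetical stand-ins, not actual ONMT functions; in ONMT you would hook the equivalent constraint into the existing beam search in the translate code):

```python
# Sketch of prefix-constrained greedy decoding for a predictive translation server.
# `model.encode` and `model.decode_step` are hypothetical stand-ins for the
# encoder call and a one-step decoder returning next-token scores.
import torch

def complete_translation(model, src_tensor, accepted_prefix_ids,
                         bos_id=2, eos_id=3, max_len=100):
    """Suggest a continuation that starts from the tokens the user already accepted."""
    memory = model.encode(src_tensor)              # hypothetical encoder call
    output = [bos_id] + list(accepted_prefix_ids)  # force the accepted prefix
    while len(output) < max_len:
        logits = model.decode_step(output, memory) # hypothetical: scores for the next token
        next_token = int(torch.argmax(logits))
        output.append(next_token)
        if next_token == eos_id:
            break
    return output[1 + len(accepted_prefix_ids):]   # return only the newly suggested part
```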