As always, a new OpenNMT release means lots of new features to experiment with!
New validation metrics
You can now choose to compute additional scores on the validation data:
- perplexity (current default)
- BLEU
- Damerau-Levenshtein edit ratio (thanks to @dbl!)
The BLEU and Damerau-Levenshtein scores run an actual translation with beam search, whose options are now also exposed during training. These metrics are also available in a standalone tool.
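To make the new metric concrete, here is a minimal sketch of a Damerau-Levenshtein edit ratio over token lists. It uses the optimal string alignment variant and assumes the ratio is normalized by the reference length; the function names are illustrative, not OpenNMT's API.

```python
def damerau_levenshtein(a, b):
    # Optimal string alignment distance: counts insertions, deletions,
    # substitutions, and adjacent transpositions.
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if (i > 1 and j > 1 and a[i - 1] == b[j - 2]
                    and a[i - 2] == b[j - 1]):
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]

def edit_ratio(hyp, ref):
    # Normalize by the reference length, so 0.0 means a perfect match.
    return damerau_levenshtein(hyp, ref) / max(len(ref), 1)
```

Unlike perplexity, this score is computed on an actual decoded hypothesis, which is why it requires running beam search during validation.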
The learning rate decay, which previously relied on the validation perplexity, now more generally uses the selected validation score (see the renamed options in the changelog).
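A minimal sketch of score-driven decay (illustrative only: the function name, the `decay_rate` default, and the exact improvement test are assumptions, not OpenNMT's actual implementation):

```python
def maybe_decay(lr, score, best_score, decay_rate=0.7, lower_is_better=True):
    # Decay the learning rate when the validation score stops improving.
    # lower_is_better=True suits perplexity or an edit ratio; use False
    # for a score like BLEU, where higher is better.
    improved = score < best_score if lower_is_better else score > best_score
    if improved:
        return lr, score          # keep the rate, remember the new best
    return lr * decay_rate, best_score
```

Because the comparison direction depends on the metric, a generic "validation score" hook has to know whether the chosen metric should be minimized or maximized.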
Google's NMT encoder
You can now use the encoder described in Wu et al. 2016 (section 3.2) in your experiments with
-encoder_type gnmt. It is a simple stacked encoder whose first layer is bidirectional.
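The layer layout can be sketched with a toy recurrent encoder in plain NumPy. Only the structure mirrors the paper (first layer bidirectional with the two directions concatenated, remaining layers unidirectional); the cell, initialization, and function names are simplifications for illustration.

```python
import numpy as np

def rnn(seq, w, reverse=False):
    # Minimal recurrent layer: h_t = tanh(W @ [h_{t-1}; x_t]).
    if reverse:
        seq = seq[::-1]
    h = np.zeros(w.shape[0])
    out = []
    for x_t in seq:
        h = np.tanh(w @ np.concatenate([h, x_t]))
        out.append(h)
    out = np.stack(out)
    return out[::-1] if reverse else out

def gnmt_encoder(x, dim, num_layers, rng):
    # First layer is bidirectional: run forward and backward passes and
    # concatenate their outputs at each time step.
    w_f = rng.standard_normal((dim, dim + x.shape[1])) * 0.1
    w_b = rng.standard_normal((dim, dim + x.shape[1])) * 0.1
    h = np.concatenate([rnn(x, w_f), rnn(x, w_b, reverse=True)], axis=1)
    # Remaining layers are unidirectional.
    for _ in range(num_layers - 1):
        w = rng.standard_normal((dim, dim + h.shape[1])) * 0.1
        h = rnn(h, w)
    return h
```

Note that the bidirectional first layer outputs twice the hidden size, so the second layer consumes a wider input than the later ones.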
Improved pyramidal encoder
The pyramidal encoder now reduces the time dimension with a concatenation (as in Chan et al. 2015) instead of a sum. You can select one or the other with the
-pdbrnn_merge option. As a result of this change, models previously trained with
-dbrnn are no longer compatible and should be retrained.
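The difference between the two merge modes can be sketched as follows (a toy NumPy function; the name `pyramid_reduce` and the zero-padding of odd-length sequences are assumptions for illustration):

```python
import numpy as np

def pyramid_reduce(h, mode="concat"):
    # Halve the time dimension by merging each pair of consecutive steps.
    t, d = h.shape
    if t % 2 == 1:                   # pad an odd-length sequence with zeros
        h = np.vstack([h, np.zeros((1, d))])
    pairs = h.reshape(-1, 2, d)
    if mode == "concat":             # as in Chan et al. 2015: (T/2, 2D)
        return pairs.reshape(-1, 2 * d)
    return pairs.sum(axis=1)         # previous behavior: (T/2, D)
```

Concatenation doubles the feature size fed to the next layer while a sum keeps it unchanged, which is why models trained with the old merge are shape-incompatible with the new default.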
Also, a bug that led to incorrect gradients in bidirectional layers when using variable-length sequences has been fixed. Experiments using this configuration should ideally be rerun.
Model averaging
Thanks to @vince62s, the
tools/average_models.lua script can be used to average the parameters of multiple models, as described in Junczys-Dowmunt et al. 2016.
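Parameter averaging itself is a simple element-wise mean; a minimal sketch with checkpoints represented as dictionaries of NumPy arrays (illustrative only, not the script's actual code):

```python
import numpy as np

def average_checkpoints(checkpoints):
    # Element-wise mean of each named parameter tensor across checkpoints.
    # Assumes all checkpoints share the same architecture (same keys/shapes).
    keys = checkpoints[0].keys()
    return {k: np.mean([c[k] for c in checkpoints], axis=0) for k in keys}
```

Averaging the last few checkpoints of a training run often smooths out noise from the final optimization steps without any extra training cost.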
Beam search visualization
As beam search is often difficult to interpret, a new option and a tool are available to visualize it. See the documentation.
Further support of language models
Language models can finally be used for sampling or scoring. Take a look at the documentation.
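To make the sampling/scoring distinction concrete, here is a toy count-based bigram language model (purely illustrative, with no smoothing, and unrelated to OpenNMT's actual neural LM implementation):

```python
import math
import random

class BigramLM:
    # A toy count-based bigram model, just to illustrate the two modes.
    def __init__(self, corpus):
        self.counts, self.totals = {}, {}
        for sent in corpus:
            tokens = ["<s>"] + sent + ["</s>"]
            for prev, cur in zip(tokens, tokens[1:]):
                self.counts.setdefault(prev, {})
                self.counts[prev][cur] = self.counts[prev].get(cur, 0) + 1
                self.totals[prev] = self.totals.get(prev, 0) + 1

    def score(self, sent):
        # Scoring: log-probability of a given sentence under the model.
        # Assumes every bigram was seen in training (no smoothing).
        tokens = ["<s>"] + sent + ["</s>"]
        return sum(math.log(self.counts[p][c] / self.totals[p])
                   for p, c in zip(tokens, tokens[1:]))

    def sample(self, rng, max_len=20):
        # Sampling: draw tokens from the conditional distributions
        # until the end-of-sentence marker is produced.
        out, prev = [], "<s>"
        while len(out) < max_len:
            words = list(self.counts[prev])
            weights = [self.counts[prev][w] for w in words]
            prev = rng.choices(words, weights=weights)[0]
            if prev == "</s>":
                break
            out.append(prev)
        return out
```

Scoring evaluates text you already have (useful for reranking translation hypotheses), while sampling generates new text from the model's own distribution.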
New tokenization options
Some features were also added to the tokenization:
- split words on case change (thanks to @kovalevfm!)
- split words on alphabet change
See the tokenization options for more details.
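The two splitting rules can be sketched as follows (rough approximations; the exact conditions OpenNMT's tokenizer applies may differ):

```python
import unicodedata

def split_on_case_change(token):
    # Split before an uppercase letter that follows a lowercase one,
    # e.g. "WiFi" -> ["Wi", "Fi"].
    parts = [token[0]] if token else []
    for prev, ch in zip(token, token[1:]):
        if prev.islower() and ch.isupper():
            parts.append(ch)
        else:
            parts[-1] += ch
    return parts

def split_on_alphabet_change(token):
    # Split where the script changes, e.g. Latin -> Greek:
    # "abc" followed by Greek letters becomes two pieces. Uses the first
    # word of the Unicode character name as a crude script key.
    key = lambda c: unicodedata.name(c, "?").split()[0]
    parts = [token[0]] if token else []
    for prev, ch in zip(token, token[1:]):
        if key(prev) != key(ch):
            parts.append(ch)
        else:
            parts[-1] += ch
    return parts
```

Such splits help keep vocabularies small for text mixing scripts or camel-cased identifiers, since the pieces can be re-merged after translation.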
Thanks to all contributors, bug reporters, and everyone testing and giving feedback. If you find a bug introduced in this release, please report it.