This release comes with several new features:
- The training code is now more generic and part of the library, so you can expect more models to be trained with the OpenNMT framework in the future.
- New scripts to conveniently extract word embeddings and generate word vocabularies.
- A new way to prune vocabularies by minimum word frequency instead of absolute size.
- A new REST translation server.
- Experimental FP16 support with the latest cutorch version; if you have compatible hardware, we would love to hear about your experience using it!
- Experimental data sampling techniques: select a subset of the training data at each epoch to converge faster (documentation to be added).
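To illustrate the frequency-based pruning mentioned above, here is a minimal sketch in Python (the `build_vocab` helper and its parameters are illustrative, not OpenNMT's actual implementation): every token seen fewer than `min_frequency` times is dropped, rather than capping the vocabulary at a fixed size.

```python
from collections import Counter

def build_vocab(tokenized_corpus, min_frequency=1):
    """Keep every token seen at least `min_frequency` times,
    instead of truncating the vocabulary to an absolute size."""
    counts = Counter(tok for sent in tokenized_corpus for tok in sent)
    return {tok for tok, n in counts.items() if n >= min_frequency}

corpus = [["the", "cat", "sat"], ["the", "dog", "sat"], ["a", "cat"]]
vocab = build_vocab(corpus, min_frequency=2)
# "the", "cat", and "sat" each appear twice and are kept;
# "dog" and "a" appear once and are pruned
```

The advantage over a fixed-size cutoff is that the vocabulary adapts to the corpus: rare tokens are excluded regardless of how large the corpus is.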
As always, this release also includes several bug fixes and improvements, thanks to community reports and feedback.
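As a rough sketch of how a client might talk to the new REST translation server, the snippet below builds a JSON request body. Note that the endpoint path, port, and field names in the comment are assumptions for illustration only; check the server documentation for the actual API.

```python
import json

# Illustrative request body for a REST translation server.
# The "src" field name is an assumption, not the documented API.
payload = [{"src": "Hello world !"}]
body = json.dumps(payload)

# With a server running locally, you might then POST it, e.g.:
#   curl -H "Content-Type: application/json" \
#        -d '[{"src": "Hello world !"}]' \
#        http://localhost:7784/translator/translate
# (port and path are placeholders -- adjust to your configuration)
```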