Keith Stevens (Google Japan) gave an insightful talk on Neural Machine Translation at Google, covering the evolution from sequence-to-sequence models to Transformers and hybrid architectures, along with an excellent overview of the latest research.
He also presented a sampling of open problems:
- Pushing the limits of model size
- Working with non-ideal data
- Translating on the fly
- Using models to find data
Keith Stevens has worked on Google Translate for nearly seven years. His primary focus has been finding or creating the best possible data for improving Google Translate. This work has ranged from crowdsourcing systems to neural data-cleaning techniques and automated parallel-data mining.
This event was co-organized by Le Wagon Tokyo and Machine Learning Tokyo.