Here’s a quick recap of the two projects we built at Junction Tokyo.


MLT x 2020

We wanted to build a Deep Learning system that serves the Tokyo 2020 Olympics and helps facilitate a safe, fun and incredible experience for everyone involved: participants from Japan and all over the world, as well as the Olympics organizing committee, their staff and their business units.

More than half a million people are expected to come to Tokyo for the 2020 Olympics. We built a highly scalable system for face detection and counting, plus age, gender and emotion prediction, to support (1) crowd management, (2) personalized recommendations for users and (3) marketing campaigns (e.g. on-screen ads targeted by the crowd's average age or gender). We combined three Deep Learning models and three APIs (Twitter, Google Maps, Google Translate) to deliver safety, efficiency and business value for the Tokyo 2020 Olympics.

We used OpenCV's Deep Learning module (with Tiny Faces as an alternative detector) together with TensorFlow and Keras for face detection and counting and for age, gender and emotion estimation. We integrated three different APIs for personalized user recommendations (e.g. waiting times in lines, restaurants). We deployed the system with Flask.
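The detection step can be sketched with OpenCV's DNN module. This is a minimal sketch, not our exact code: it assumes the standard res10 SSD face-detector Caffe files have been downloaded locally (the `PROTOTXT` and `CAFFEMODEL` paths are placeholders), and the `count_confident` helper is an illustrative post-processing function.

```python
# Sketch: face detection and counting with OpenCV's DNN module.
# Model file paths below are placeholders for downloaded weights.
import numpy as np

try:
    import cv2
except ImportError:  # keeps the sketch importable without OpenCV installed
    cv2 = None

PROTOTXT = "deploy.prototxt"                              # assumed local path
CAFFEMODEL = "res10_300x300_ssd_iter_140000.caffemodel"   # assumed local path


def detect_faces(image, net, conf_threshold=0.5):
    """Run the SSD face detector; return confident boxes as (x1, y1, x2, y2)."""
    h, w = image.shape[:2]
    blob = cv2.dnn.blobFromImage(cv2.resize(image, (300, 300)), 1.0,
                                 (300, 300), (104.0, 177.0, 123.0))
    net.setInput(blob)
    detections = net.forward()  # raw output shape: (1, 1, N, 7)
    boxes = []
    for i in range(detections.shape[2]):
        confidence = detections[0, 0, i, 2]
        if confidence >= conf_threshold:
            box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
            boxes.append(tuple(box.astype(int)))
    return boxes


def count_confident(detections, conf_threshold=0.5):
    """Count detections above the threshold in a raw (1, 1, N, 7) SSD output."""
    return int((detections[0, 0, :, 2] >= conf_threshold).sum())
```

A crowd count is then just `count_confident(net.forward())` per frame; the cropped boxes feed the downstream age, gender and emotion models.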


Github repo:



History is important. Through history, we are able to learn lessons from our ancestors and understand how the world came to be what it is today.

One important event in Japanese history was the Meiji Restoration, when Japanese leaders standardized the Hiragana writing system we all know today. Before this, people wrote books in a cursive script called Kuzushiji. Today, most Japanese natives cannot read Kuzushiji, which means over a thousand years' worth of books (~3 million unregistered books and a billion historical documents) are inaccessible to the general public.

Our solution is a web application serving a Kuzushiji Optical Character Recognition (OCR) system.

The web application lets users upload images or take pictures, then detects the location of each character and classifies it.
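The upload-and-recognize flow can be sketched as a small Flask endpoint (a minimal sketch, not the actual app; `run_ocr` is a hypothetical stand-in for the real detection-and-classification pipeline, and the `/recognize` route name is illustrative):

```python
from flask import Flask, request, jsonify

app = Flask(__name__)


def run_ocr(image_bytes):
    # Hypothetical stand-in for the real pipeline: detect character
    # centers, crop around each one, and classify the crops.
    # Would return a list of {"x": ..., "y": ..., "label": ...} dicts.
    return []


@app.route("/recognize", methods=["POST"])
def recognize():
    # Expect a multipart upload with an "image" file field.
    if "image" not in request.files:
        return jsonify({"error": "no image uploaded"}), 400
    image_bytes = request.files["image"].read()
    return jsonify({"characters": run_ocr(image_bytes)})
```

The browser side only needs a file input (or camera capture) posting to this endpoint and rendering the returned boxes and labels over the image.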

We used a dataset that contains high-quality pictures of manuscripts written in Kuzushiji, with bounding boxes and a classification label for each character. The model was built from scratch and consists of two components: a UNET that detects the center of each character, and an image classifier that predicts the label of each detected character.
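The hand-off between the two components can be sketched as a post-processing step, assuming the UNET outputs a per-pixel heatmap in which character centers appear as local maxima (the threshold and neighbourhood size below are illustrative, not our tuned values):

```python
import numpy as np


def extract_centers(heatmap, threshold=0.5, window=3):
    """Return (row, col) coordinates of local maxima above `threshold`.

    `heatmap` is assumed to be the UNET output: a 2-D array of
    per-pixel character-center probabilities. A pixel counts as a
    center if it exceeds the threshold and is the maximum of its
    (2*window+1) x (2*window+1) neighbourhood.
    """
    h, w = heatmap.shape
    centers = []
    for r in range(h):
        for c in range(w):
            v = heatmap[r, c]
            if v < threshold:
                continue
            r0, r1 = max(0, r - window), min(h, r + window + 1)
            c0, c1 = max(0, c - window), min(w, c + window + 1)
            if v >= heatmap[r0:r1, c0:c1].max():
                centers.append((r, c))
    return centers
```

Fixed-size crops around each returned center are then passed to the image classifier, which predicts the character label.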


Github repo:
